Components of the SCIT complex

SCIT-4

28-node cluster based on multi-core Intel Xeon E5-2600 processors

The cluster platform is based on the latest HP ProLiant Gen8 BladeSystem servers and has the following characteristics:

The supercomputer consists of classic CPU-only nodes and hybrid nodes with graphics accelerators.

Each node has 16 cores (32 logical processors with Hyper-Threading) and 64 GB of RAM. Hybrid nodes additionally carry three NVIDIA Tesla M2075 accelerators.
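
For illustration, the short C sketch below shows how a node's processor count appears to software; it assumes a Linux node and uses the glibc sysconf query, which reports logical processors, so a 16-core node with Hyper-Threading enabled shows 32:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* logical processors online; expected 32 on a SCIT-4 node
           (16 physical cores x 2 hardware threads, assuming
           Hyper-Threading is left enabled) */
        long logical = sysconf(_SC_NPROCESSORS_ONLN);
        printf("logical processors online: %ld\n", logical);
        return 0;
    }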

Overall, the cluster has the following characteristics:

SCIT-4 is integrated with a high-performance 120 TB data storage system based on the Lustre parallel file system.
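
As a sketch of how applications typically exploit such parallel storage, the C fragment below writes one shared file with MPI-IO, each rank filling its own disjoint region; the file path /lustre/scratch/out.dat and the block size are assumptions made for the example, not SCIT-4 specifics:

    /* Each MPI rank writes an 8 MB block into a shared file at a
       rank-dependent offset; a parallel file system such as Lustre
       can service these writes concurrently. */
    #include <mpi.h>
    #include <stdlib.h>

    #define BLOCK_DOUBLES 1048576  /* 1M doubles = 8 MB (assumption) */

    int main(int argc, char **argv)
    {
        int rank;
        MPI_File fh;
        double *block = malloc(BLOCK_DOUBLES * sizeof(double));

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        for (int i = 0; i < BLOCK_DOUBLES; i++)
            block[i] = (double)rank;         /* dummy payload */

        MPI_File_open(MPI_COMM_WORLD, "/lustre/scratch/out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        MPI_Offset off = (MPI_Offset)rank * BLOCK_DOUBLES * sizeof(double);
        MPI_File_write_at(fh, off, block, BLOCK_DOUBLES, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        free(block);
        MPI_Finalize();
        return 0;
    }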

SCIT-3

127-node cluster based on multi-core processors (75 nodes with dual-core Intel Xeon 5160 processors and 52 nodes with quad-core Intel Xeon 5345 processors)

Peak cluster performance: 7500 GFlops
Real confirmed performance (125 nodes): 5317 GFlops
Processors: dual-core Intel Xeon 5160 (3.0 GHz, 4 MB cache) and quad-core Intel Xeon 5345 with EM64T (2.2 GHz, 4 MB cache)
Processors per node: 2
Cores per node: 4 or 8
RAM per core: 2 GB
RAM per node: 8 or 16 GB
Data storage system type: RAID 5
File system: Lustre
Total storage capacity: 30 TB
Total power consumption: 60 kVA at 380 V
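
In total this gives 75 × 4 + 52 × 8 = 716 cores, and the confirmed 5317 GFlops (on 125 nodes) is roughly 71% of the 7500 GFlops peak.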

The cluster's computing facilities (the compute nodes and the control node) are an array of servers interconnected by two LANs: a high-speed InfiniBand network (channel throughput up to 900 MB/s) and a Gigabit Ethernet network (channel throughput up to 1000 Mbit/s).

The InfiniBand network is used for high-speed data exchange between nodes during computations. When two nodes exchange data over InfiniBand using the MPI protocol, a throughput of 850 MB/s can be reached.
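
Such figures are typically measured with a simple MPI ping-pong test. Below is a minimal C sketch of such a measurement (an illustration only, not the benchmark actually used on SCIT; the 4 MB message size, the repetition count, and the program name pingpong are assumptions):

    /* Minimal MPI ping-pong bandwidth sketch. Run with exactly two
       ranks placed on different nodes so traffic crosses the network. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MSG_BYTES (4 * 1024 * 1024)  /* 4 MB message (assumption) */
    #define REPS      100                /* round trips (assumption)  */

    int main(int argc, char **argv)
    {
        int rank;
        char *buf = malloc(MSG_BYTES);

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Barrier(MPI_COMM_WORLD);

        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            if (rank == 0) {                 /* send first, then wait */
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {          /* wait first, then echo */
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double dt = MPI_Wtime() - t0;

        if (rank == 0) {
            /* two messages of MSG_BYTES cross the link per round trip */
            double mb = 2.0 * REPS * MSG_BYTES / (1024.0 * 1024.0);
            printf("bandwidth: %.1f MB/s\n", mb / dt);
        }
        free(buf);
        MPI_Finalize();
        return 0;
    }

Launched as, say, mpirun -np 2 ./pingpong with the two ranks on different nodes, the reported figure approaches the quoted link throughput for large messages.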

The Gigabit Ethernet network connects all compute nodes of the cluster to the control node and the file server.

SCIT-2

32-processor cluster based on single-core Intel Itanium 2 microprocessors

Peak cluster performance is 360 GFlops; real performance is 280 GFlops.

Processor type: single-core 64-bit Intel Itanium 2 (clock frequency 1.4 GHz, 3 MB cache, power consumption 135 W, 2 processors per cluster node)

RAM per node: 2 GB

Processor cores per node: 2

Data storage system: RAID 5, global file system with a total capacity of 30 TB
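
For context, RAID 5 dedicates one disk's worth of capacity to parity, so N disks of size S provide (N - 1) × S of usable space; purely as an illustration (not the actual SCIT disk layout), sixteen 2 TB disks would yield 15 × 2 TB = 30 TB usable.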

Total power consumption: 20 kVA from a 380 V mains

The cluster's computing facilities (the compute nodes and the control node) are an array of servers interconnected by two LANs: a high-speed SCI network (channel throughput of 350 MB/s) and a Gigabit Ethernet network (channel throughput up to 1000 Mbit/s).

The SCI network is used for high-speed data exchange between nodes during computations. When two nodes exchange data over SCI using the MPI protocol, a throughput of 345 MB/s can be reached.

The Gigabit Ethernet network connects all compute nodes of the cluster to the control node and the file server.

SCIT-1

24-processor cluster based on single-core Intel Xeon microprocessors

Peak cluster performance is 255 GFlops (1 GFlops = 1 billion floating-point operations per second); real performance is 189 GFlops.
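
The peak figure corresponds to roughly 4 floating-point operations per clock per processor: 24 processors × 2.67 GHz × 4 ≈ 256 GFlops, in line with the quoted 255 GFlops.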

Processor type: single-core 32-bit Intel Xeon (clock frequency 2.67 GHz, 512 KB cache, power consumption 60-100 W, 2 processors per cluster node)

Processor cores per node: 2

RAM per node: 2 GB

Data storage system: shared by all the clusters; RAID 5, global file system with a total capacity of 30 TB

Total power consumption: 8 kVA from a 380 V mains

The clusters' computing facilities (the compute nodes and the control node) are an array of servers interconnected by two LANs: a high-speed InfiniBand network (channel throughput of 800 MB/s) and a Gigabit Ethernet network (channel throughput up to 1000 Mbit/s).

The InfiniBand network is used for high-speed data exchange between nodes during computations. When two nodes exchange data over InfiniBand using the MPI protocol, a throughput of 750 MB/s can be reached.

The Gigabit Ethernet network connects all compute nodes of the cluster to the control node and the file server.
