
TCM's Clusters: Hardware

A brief description of the hardware of the clusters run by TCM.

RJN's/BM's cluster

Name: cluster.tcm.phy.private.cam.ac.uk.

Nine compute nodes totalling 12.3 TFLOPS theoretical peak and 1.8TB of memory.

Compute Nodes 1 to 4

Dual socket Xeon Gold 6126 (12 core 2.6GHz with 19.25MB of LLC) (Intel S2600BPB). Nodes 1 and 2 have 384GB, nodes 3 and 4 just 96GB. Memory bus 48 bytes per socket, 2666MHz. Theoretical peak performance 1.75TFLOPS, 256GB/s. Measured performance 1.15TFLOPS, 160GB/s. Purchased 2017.

Four nodes in total (7.0 TFLOPS theoretical peak).

Compute Node 8

Dual socket Xeon Gold 6226 (12 core 2.7GHz with 19.25MB of LLC). 4TB /scratch space, mirrored. Purchased 2019.

(2.0 TFLOPS theoretical peak).

Compute Nodes 9 to 12

Dual socket Xeon E5-2660 v3 (ten core 2.60GHz 'Haswell' with 25MB of cache) with 128GB of memory (Intel S2600KP), 2TB 3.5" SATA drive and small SSD for OS. Memory bus 32 bytes per socket, 2133MHz. Theoretical peak performance 832 GFLOPS, 136GB/s. Measured performance 638 GFLOPS (Linpack 50k MKL), 108GB/s. Purchased 2015.

Four nodes in total (3.32 TFLOPS theoretical peak).
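The quoted figures for these nodes can be reproduced from first principles. A minimal sketch of the arithmetic (my own reconstruction, not taken from this page, assuming each Haswell core does 16 double-precision FLOPs per cycle via FMA on two 256-bit AVX2 units):

```python
# Sketch of how theoretical peak figures are commonly derived.
# peak FLOPS     = sockets x cores x clock x DP FLOPs per cycle
# peak bandwidth = sockets x bus width (bytes) x memory transfer rate

def peak_gflops(sockets, cores, ghz, flops_per_cycle):
    """Theoretical peak in GFLOPS."""
    return sockets * cores * ghz * flops_per_cycle

def peak_bandwidth_gbs(sockets, bus_bytes, mts):
    """Theoretical peak memory bandwidth in GB/s (mts in MT/s)."""
    return sockets * bus_bytes * mts / 1000

# E5-2660 v3 node: dual socket, ten cores at 2.6GHz, 16 DP FLOPs/cycle,
# 32-byte memory bus per socket at 2133 MT/s.
print(peak_gflops(2, 10, 2.6, 16))       # 832.0 GFLOPS
print(peak_bandwidth_gbs(2, 32, 2133))   # 136.512 GB/s, quoted as 136GB/s
```

The measured Linpack figure (638 GFLOPS) is, as usual, some way below this peak.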

Head Node

Single socket Xeon E3-1220v6 (quad core 3GHz Kaby Lake) with 32GB of memory. 4TB of home directory using software mirror.

Misc

Two separate 1GBit/s ethernet networks, intended to separate MPI and NFS traffic. The cluster is connected to TCM's network via a 1GBit/s link.

This cluster originally consisted of eight Sun X2270 nodes purchased in 2009. (Dual socket Xeon X5550 quad core 2.67GHz 'Nehalem' with 24GB DDR3/1066, 8MB of cache and 1TB disk drive. Theoretical peak performance 85 GFLOPS, 51GB/s.) In 2017 four of these nodes were decommissioned, which enabled three of the remaining nodes to be expanded to 48GB, and provided space for the new nodes 1 to 4. In 2019 the remaining original nodes were decommissioned and the new node 8 was added.

Winton cluster

Name: cluster2.tcm.phy.cam.ac.uk.

Twenty-three compute nodes totalling 13.6 TFLOPS theoretical peak and 1280GB of memory.

Ownership: nodes 9 to 11, Dr Chin. The rest, Dr Morris.

Compute Nodes 1 to 11

Dual socket Xeon E5-2670 (eight core 2.60GHz 'Sandy Bridge' with 20MB of cache) with 32GB of memory. Memory bus 32 bytes per socket, 1600MHz. Theoretical peak performance 332 GFLOPS, 102GB/s. Measured performance 280 GFLOPS (Linpack 20k MKL), 70GB/s.

Eleven nodes in total (3.66 TFLOPS theoretical peak).

Compute Nodes 12 to 15

Identical to nodes 9 to 12 of cluster.tcm above. Four nodes in total (3.32 TFLOPS theoretical peak).

Compute Nodes 16 to 23

As nodes 12 to 15, but only 64GB of RAM. Eight nodes in total (6.64 TFLOPS theoretical peak).
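The cluster total follows by summing node counts times the per-node peaks given above; a quick sketch of that arithmetic (my own check, using the figures from the preceding sections):

```python
# Node counts and per-node theoretical peaks (GFLOPS), from the sections above.
winton_nodes = [
    (11, 332),  # nodes 1 to 11:  E5-2670
    (4, 832),   # nodes 12 to 15: E5-2660 v3
    (8, 832),   # nodes 16 to 23: as 12 to 15, less RAM
]
total_gflops = sum(count * gflops for count, gflops in winton_nodes)
print(total_gflops / 1000)  # 13.636 TFLOPS, quoted as 13.6
```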

Head Node

Single socket Xeon E5-2640 (six core 2.5GHz 'Sandy Bridge') with 32GB of memory. 2.7TB of home directory using software 3-way mirror.

Misc

The head node has a 10GBit/s uplink to the internal network. The cluster is connected to the CUDN via a 1GBit/s link.

The RMMs (Remote Management Modules) are accessible by running a browser on the head node and pointing it at 172.31.2.N for node N.