Hardware of SuperMUC-NG Phase 2
System under Installation
Please note that the SuperMUC-NG Phase 2 system is currently in the installation and testing phase.
It is therefore not yet accessible to users. General user operation is expected in Q4/2024.
| Nodes | |
| --- | --- |
| Processor | |
| CPUs per Node | 2 |
| Cores per Node | 112 |
| Memory per Node | 512 GByte DDR5 |
| GPUs | Intel Ponte Vecchio |
| GPUs per Node | 4 |
| Memory per GPU | 128 GByte HBM2e |
| Number of Nodes | 240 (incl. 4 login nodes and 2 spare nodes) |
| Total CPU Cores | 26,880 |
| Total Memory | 122.88 TByte DDR5 |
| Total GPUs | 960 |
| Total GPU Memory | 122.88 TByte HBM2e |
| Peak Performance (FP64) | 27.96 PFlop/s |
| Linpack Performance (FP64) | 17.19 PFlop/s |
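
The aggregate figures follow directly from the per-node values. A quick arithmetic cross-check, using only the numbers from the table above:

```python
# Cross-check of the totals from the per-node figures in the table above.
nodes = 240                              # incl. 4 login nodes and 2 spare nodes
total_cores = nodes * 112                # -> 26,880 CPU cores
total_ddr5_tb = nodes * 512 / 1000       # -> 122.88 TByte DDR5
total_gpus = nodes * 4                   # -> 960 GPUs
total_hbm_tb = total_gpus * 128 / 1000   # -> 122.88 TByte HBM2e

print(total_cores, total_ddr5_tb, total_gpus, total_hbm_tb)
# 26880 122.88 960 122.88
```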
| Compute Network | |
| --- | --- |
| Fabric | NVIDIA/Mellanox HDR InfiniBand (200 GBit/s) |
| Topology | Fat tree |
| Interconnects per Node | 2 |
| Number of Islands | 1 |
| Filesystems | |
| --- | --- |
| HPPFS (same as Phase 1) | 50 PB @ 500 GByte/s |
| DSS (same as Phase 1) | 20 PB @ 70 GByte/s |
| Home Filesystem | 256 TByte |
| DAOS | 1 PB @ 750 GByte/s |
| Infrastructure | |
| --- | --- |
| Cooling | Direct warm-water cooling |
| Software | |
| --- | --- |
| Operating System | SUSE Linux Enterprise Server (SLES) |
| Batch Scheduling System | SLURM |
| High Performance Parallel Filesystem (HPPFS) | IBM Spectrum Scale (GPFS) |
| Programming Environment | Intel oneAPI |
| Message Passing | Intel MPI (alternatively OpenMPI) |
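
For orientation only, a minimal sketch of message passing with the stack listed above. It assumes the mpi4py Python bindings and an MPI launcher (e.g. Intel MPI's mpiexec) are available; neither is stated in the table, so treat both as assumptions.

```python
# Minimal MPI example: every rank reports its hostname, rank 0 prints a summary.
# Launch with e.g.:  mpiexec -n 4 python mpi_hello.py   (launcher and module setup are assumptions)
from mpi4py import MPI
import socket

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Collect one hostname per rank on rank 0.
hosts = comm.gather(socket.gethostname(), root=0)
if rank == 0:
    print(f"{comm.Get_size()} ranks running on: {sorted(set(hosts))}")
```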