Nodes
All
- A head/login node licca-li-01 with 2×64-core AMD EPYC-7713 2.0 GHz CPUs, 1 TByte of memory, 800 GB of local storage, and 1 PB of global storage;
- 42 compute nodes, each with 2×64-core AMD EPYC-7713 2.0 GHz CPUs, 1 TByte of memory, and 800 GB of local scratch storage;
- 4 high-memory compute nodes, each with 2×64-core AMD EPYC-7713 2.0 GHz CPUs, 4 TByte of memory, and 800 GB of local scratch storage;
- 8 Graphics Processing Unit (GPU) nodes, each with 3×Nvidia A100 80GB PCIe, 2×64-core AMD EPYC-7713 2.0 GHz CPUs, 1 TByte of memory, and 800 GB of local scratch storage;
- 1 Graphics Processing Unit (GPU) node with 4×Nvidia A100-SXM-80GB, 2×64-core AMD EPYC-7713 2.0 GHz CPUs, 2 TByte of memory, and 800 GB of local scratch storage;
- All nodes are connected by a 100 Gbit/s Nvidia Mellanox network;
- SLURM is used for resource allocation and job scheduling;
- All nodes run Ubuntu Linux 22.04 LTS.
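Since SLURM handles all resource allocation, work is submitted as batch jobs rather than run directly on the nodes. A minimal job script might look like the following sketch; the partition name and limits come from the Queue Partitions table below, while the job name, resource amounts, and `./my_program` are illustrative placeholders:

```bash
#!/bin/bash
#SBATCH --job-name=hello       # name shown in the queue (placeholder)
#SBATCH --partition=epyc       # general-purpose CPU partition
#SBATCH --ntasks=1             # a single task
#SBATCH --cpus-per-task=8      # 8 cores on one node
#SBATCH --mem=16G              # respects the <8 GiB-per-core ratio
#SBATCH --time=01:00:00        # well within the 7-day limit

srun ./my_program              # ./my_program is a placeholder
```

Submit with `sbatch job.sh` and monitor with `squeue --me`; these are standard SLURM commands, though site defaults (e.g. default memory per CPU) may differ.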
Login, Interactive, and Data Transfer Nodes
A head/login node licca-li-01.rz.uni-augsburg.de:
- Processor type: AMD EPYC-7713;
- Processor base frequency: 2.0 GHz;
- Cores per node: 2×64;
- Main memory (RAM) per node: 1 TByte;
- 8 NUMA domains with 16 physical cores each;
- 800GB local scratch storage.
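Access to the cluster goes through this login node via SSH; a typical session might start as follows, where the username is a placeholder for your own account:

```bash
# Log in to the head/login node (replace <username> with your account name)
ssh <username>@licca-li-01.rz.uni-augsburg.de
```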
CPU Nodes
- 41 general-purpose compute nodes;
- Processor type: AMD EPYC-7713;
- Processor base frequency: 2.0 GHz;
- Cores per node: 2×64;
- Main memory (RAM) per node: 1 TByte;
- 8 NUMA domains with 16 physical cores each;
- 800GB local scratch storage.
Large RAM Nodes
- 4 high-memory compute nodes;
- Processor type: AMD EPYC-7713;
- Processor base frequency: 2.0 GHz;
- Cores per node: 2×64;
- Main memory (RAM) per node: 4 TByte;
- 8 NUMA domains with 16 physical cores each;
- 800GB local scratch storage.
GPU Nodes
- 8 Graphics Processing Unit (GPU) nodes;
- Processor type: AMD EPYC-7713;
- Processor base frequency: 2.0 GHz;
- Cores per node: 2×64;
- Main memory (RAM) per node: 1 TByte;
- 8 NUMA domains with 16 physical cores each;
- 800GB local scratch storage;
- GPU type: Nvidia A100 80GB PCIe;
- GPUs per node: 3;
- GPU/CPU affinity: cores 32-47 → /dev/nvidia0, cores 112-127 → /dev/nvidia1, cores 64-79 → /dev/nvidia2.
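The core-to-GPU affinities listed above can be inspected on a node, and SLURM can bind a task to the cores local to its GPU. The following is a sketch using standard NVIDIA and SLURM tooling; partition and resource values are illustrative:

```bash
# Print the CPU-affinity matrix of the GPUs on the current node
nvidia-smi topo -m

# Request one GPU and let SLURM pin the task to the cores
# closest to that GPU (standard srun options)
srun --partition=epyc-gpu --gres=gpu:1 --gpu-bind=closest \
     --cpus-per-task=16 ./gpu_program   # ./gpu_program is a placeholder
```

Binding to the local NUMA domain avoids cross-socket traffic between a GPU and the cores feeding it.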
GPU Nodes with SXM
- 1 Graphics Processing Unit (GPU) node;
- Processor type: AMD EPYC-7713;
- Processor base frequency: 2.0 GHz;
- Cores per node: 2×64;
- Main memory (RAM) per node: 2 TByte;
- 8 NUMA domains with 16 physical cores each;
- 800GB local scratch storage;
- GPU type: Nvidia A100-SXM-80GB;
- GPUs per node: 4;
- GPU/CPU affinity: cores 48-63 → /dev/nvidia0, cores 16-31 → /dev/nvidia1, cores 112-127 → /dev/nvidia2, cores 80-95 → /dev/nvidia3.
Interconnect
Mellanox 100 GBit/s Ethernet, ...
I/O Subsystem
- 1 PByte of shared disk space
Queue Partitions
| HPC Resource Name | Partition | Time limit | # Nodes | Nodes | CPU (per node) | Cores | RAM | RAM/Core | GPU (per node) | Purpose |
|---|---|---|---|---|---|---|---|---|---|---|
| LiCCA-test | test | 2 hours | 1 | licca001 | 2×AMD Epyc-7713 | 2×64 | 1 TiB | <8 GiB | – | short queue for testing on the login node |
| LiCCA-epyc | epyc | 7 days | 41 | licca[002-042] | 2×AMD Epyc-7713 | 2×64 | 1 TiB | <8 GiB | – | general-purpose CPU nodes |
| LiCCA-epyc-mem | epyc-mem | 7 days | 4 | licca[043-046] | 2×AMD Epyc-7713 | 2×64 | 4 TiB | <32 GiB | – | nodes for jobs with high memory requirements |
| LiCCA-epyc-gpu-test | epyc-gpu-test | 6 hours | 1 | licca047 | 2×AMD Epyc-7713 | 2×64 | 1 TiB | <8 GiB | 3×Nvidia A100 80GB | GPU nodes for development, testing, and short calculations (code must use GPUs) |
| LiCCA-epyc-gpu | epyc-gpu | 3 days | 7 | licca[048-054] | 2×AMD Epyc-7713 | 2×64 | 1 TiB | <8 GiB | 3×Nvidia A100 80GB | general-purpose GPU nodes (code must use GPUs) |
| LiCCA-epyc-gpu-sxm | epyc-gpu-sxm | 3 days | 1 | licca055 | 2×AMD Epyc-7713 | 2×64 | 2 TiB | <16 GiB | 4×Nvidia A100-SXM-80GB | special-purpose GPU node (code must use multiple GPUs) |
| ∑ | – | – | 55 | – | – | 7040 | 68 TiB | – | 24×Nvidia A100, 4×Nvidia A100-SXM-80GB | – |
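To target a specific partition from the table, name it in the job script. For example, a multi-GPU job on the SXM node could be sketched as follows; the resource values are illustrative and `./multi_gpu_program` is a placeholder:

```bash
#!/bin/bash
#SBATCH --partition=epyc-gpu-sxm   # the 4×A100-SXM node licca055
#SBATCH --gres=gpu:4               # this partition expects multi-GPU codes
#SBATCH --cpus-per-task=64
#SBATCH --time=1-00:00:00          # within the 3-day limit

srun ./multi_gpu_program           # placeholder for your application
```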
Special Systems