The current Linux Compute Cluster Augsburg (LiCCA) includes:
[Photos: LiCCA front view; front and rear views of one of the racks]
Nodes
ALL
- A head/login node licca-li-01 with 2x64 core AMD EPYC-7713 2.0 GHz CPUs, 1 TByte of memory, 800GB local storage, and 1PB global storage;
- 42 compute nodes, each with 2x64 core AMD EPYC-7713 2.0 GHz CPUs, 1 TByte of memory, 800GB local scratch storage;
- 4 high-memory compute nodes, each with 2x64 core AMD EPYC-7713 2.0 GHz CPUs, 4 TByte of memory, 800GB local scratch storage;
- 8 Graphics Processing Unit (GPU) nodes, each with 3×Nvidia A100 80GB PCIe, 2x64 core AMD EPYC-7713 2.0 GHz CPUs, 1 TByte of memory, 800GB local scratch storage;
- 1 Graphics Processing Unit (GPU) node, with 4×Nvidia A100-SXM-80GB, 2x64 core AMD EPYC-7713 2.0 GHz CPUs, 2 TByte of memory, 800GB local scratch storage;
- All nodes are connected by a 100 GBit/s Nvidia Mellanox Ethernet network;
- SLURM is used for resource allocation and job scheduling (see the example batch script below);
- All nodes run Ubuntu Linux 22.04 LTS.
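Since SLURM handles all resource allocation and scheduling, a minimal batch script for the general-purpose epyc partition could look as follows; the job name, memory request, wall time, and the program itself are placeholders, not site defaults:

```bash
#!/bin/bash
#SBATCH --job-name=example          # placeholder job name
#SBATCH --partition=epyc            # general purpose CPU partition (see Queue Partitions)
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16          # one NUMA domain worth of cores
#SBATCH --mem-per-cpu=7G            # stays below the ~8 GiB per core available on these nodes
#SBATCH --time=0-06:00:00           # wall time; the epyc partition allows up to 7 days

# load your software environment here, e.g. via environment modules
# module load my-application        # placeholder module name

srun ./my_program                   # placeholder executable
```

The script is submitted with `sbatch jobscript.sh`; `squeue -u $USER` shows its state.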
Login, Interactive, and Data Transfer Nodes
A head/login node licca-li-01.rz.uni-augsburg.de:
- Processor type AMD EPYC-7713;
- Processor base frequency 2.0 GHz;
- Cores per node: 2x64;
- Main memory (RAM) per node 1 TByte;
- 8 NUMA domains with 16 physical cores each;
- 800GB local scratch storage.
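The head node is also the entry point for data transfer. A typical session, with `myuser` standing in for an actual university account:

```bash
# log in to the LiCCA head/login node
ssh myuser@licca-li-01.rz.uni-augsburg.de

# copy input data to the cluster; rsync can resume interrupted transfers
rsync -avP ./input_data/ myuser@licca-li-01.rz.uni-augsburg.de:~/input_data/
```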
CPU Nodes
- 41 compute nodes;
- Processor type AMD EPYC-7713;
- Processor base frequency 2.0 GHz;
- Cores per node: 2x64;
- Main memory (RAM) per node 1 TByte;
- 8 NUMA domains with 16 physical cores each (see the binding sketch after this list);
- 800GB local scratch storage.
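Each node exposes 8 NUMA domains of 16 cores, so memory-bandwidth-sensitive jobs usually benefit from keeping each task inside one domain. A sketch using standard Linux and SLURM tooling (the MPI program name is a placeholder):

```bash
# inspect the NUMA layout of a compute node (run inside a job allocation)
numactl --hardware
lscpu | grep -i numa

# example: one MPI rank per NUMA domain, 16 cores each, bound to their cores
srun --ntasks=8 --cpus-per-task=16 --cpu-bind=cores ./my_mpi_program   # placeholder program
```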
Large RAM Nodes
- 4 high-memory compute nodes;
- Processor type AMD EPYC-7713;
- Processor base frequency 2.0 GHz;
- Cores per node: 2x64;
- Main memory (RAM) per node 4 TByte;
- 8 NUMA domains with 16 physical cores each;
- 800GB local scratch storage.
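Jobs that actually need the 4 TByte of memory should target the epyc-mem partition listed under Queue Partitions below; a hedged sketch, with the memory request and program as placeholders:

```bash
#!/bin/bash
#SBATCH --partition=epyc-mem        # high-memory nodes (4 TByte RAM each)
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=32
#SBATCH --mem=2000G                 # placeholder memory request, up to ~4 TiB per node
#SBATCH --time=2-00:00:00           # epyc-mem allows up to 7 days

srun ./my_memory_hungry_program     # placeholder executable
```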
GPU Nodes
- 8 Graphics Processing Unit (GPU) nodes;
- Processor type AMD EPYC-7713;
- Processor base frequency 2.0 GHz;
- Cores per node: 2x64;
- Main memory (RAM) per node 1 TByte;
- 8 NUMA domains with 16 physical cores each;
- 800GB local scratch storage;
- GPU type Nvidia A100 80GB PCIe;
- GPUs per node: 3;
- CPU core affinity: cores 32-47 → /dev/nvidia0;
- CPU core affinity: cores 112-127 → /dev/nvidia1;
- CPU core affinity: cores 64-79 → /dev/nvidia2.
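GPUs are requested through SLURM generic resources (GRES). The exact GRES type strings are site configuration and not documented here, so the sketch below uses the generic `--gres=gpu:1` form; `nvidia-smi topo -m` can be run inside the job to verify the GPU/CPU affinity listed above:

```bash
#!/bin/bash
#SBATCH --partition=epyc-gpu        # general purpose GPU nodes
#SBATCH --gres=gpu:1                # one A100; typed GRES names are site-specific
#SBATCH --cpus-per-task=16          # matches one NUMA domain / one GPU affinity group
#SBATCH --mem-per-cpu=7G
#SBATCH --time=1-00:00:00           # epyc-gpu allows up to 3 days

nvidia-smi                          # the GPU(s) assigned to this job
nvidia-smi topo -m                  # GPU/CPU (NUMA) affinity matrix
srun ./my_gpu_program               # placeholder executable
```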
GPU Nodes with SXM
- 1 Graphics Processing Unit (GPU) node;
- Processor type AMD EPYC-7713;
- Processor base frequency 2.0 GHz;
- Cores per node: 2x64;
- Main memory (RAM) per node 2 TByte;
- 8 NUMA domains with 16 physical cores each;
- 800GB local scratch storage;
- GPU type Nvidia A100-SXM-80GB;
- GPUs per node: 4;
- CPU core affinity: cores 48-63 → /dev/nvidia0;
- CPU core affinity: cores 16-31 → /dev/nvidia1;
- CPU core affinity: cores 112-127 → /dev/nvidia2;
- CPU core affinity: cores 80-95 → /dev/nvidia3.
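The single SXM node is reserved for jobs that can drive all four GPUs at once (see the epyc-gpu-sxm partition below); a hedged sketch along the same lines, with the program as a placeholder:

```bash
#!/bin/bash
#SBATCH --partition=epyc-gpu-sxm    # single SXM node; code must use multiple GPUs
#SBATCH --gres=gpu:4                # all four A100 SXM GPUs
#SBATCH --cpus-per-task=64
#SBATCH --mem-per-cpu=15G
#SBATCH --time=1-00:00:00           # epyc-gpu-sxm allows up to 3 days

echo "CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}"   # devices assigned by SLURM
srun ./my_multi_gpu_program         # placeholder executable
```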
Interconnect
Mellanox 100 GBit/s Ethernet, ...
I/O Subsystem
- 1 PByte of shared disk space
Queue Partitions
HPC Resource Name | Partition | Time limit | Number of Nodes | Nodes | CPU per Node | Cores per Node | RAM per Node | RAM per Core | GPUs per Node | Purpose
---|---|---|---|---|---|---|---|---|---|---
LiCCA-test | test | 2 hours | 1 | licca001 | 2×AMD Epyc-7713 | 2×64 | 1 TiB | <8 GiB | - | short queue for testing on the login node
LiCCA-epyc | epyc | 7 days | 41 | licca[002-042] | 2×AMD Epyc-7713 | 2×64 | 1 TiB | <8 GiB | - | general purpose CPU nodes
LiCCA-epyc-mem | epyc-mem | 7 days | 4 | licca[043-046] | 2×AMD Epyc-7713 | 2×64 | 4 TiB | <32 GiB | - | nodes for jobs with high memory requirements
LiCCA-epyc-gpu | epyc-gpu | 3 days | 8 | licca[047-054] | 2×AMD Epyc-7713 | 2×64 | 1 TiB | <8 GiB | 3×Nvidia A100 80GB | general purpose GPU nodes (code must use GPUs)
LiCCA-epyc-gpu-sxm | epyc-gpu-sxm | 3 days | 1 | licca055 | 2×AMD Epyc-7713 | 2×64 | 2 TiB | <16 GiB | 4×Nvidia A100-SXM-80GB | special purpose GPU nodes (code must use multiple GPUs)
∑ | - | - | 55 | - | - | 7040 | 68 TiB | - | 24×Nvidia A100 80GB + 4×Nvidia A100-SXM-80GB | -
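The authoritative partition limits are always available on the cluster itself via standard SLURM commands:

```bash
sinfo -o "%P %l %D %N"          # partition, time limit, node count, node list
scontrol show partition epyc    # full settings of a single partition
squeue -u "$USER"               # your own pending and running jobs
```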
Special Systems
The Augsburg Linux Compute Cluster (ALCC) includes:
[Photos: LiCCA/ALCC front and rear views; GPFS & ALCC nodes front view]
Nodes
ALL
- A head/login node alcc129 with 2x18 core Intel XEON Skylake-6140 2.3 GHz CPUs, 384 GByte of memory, 300GB local storage, and 1PB global storage;
- 12 compute nodes, each with 2x14 core Intel XEON E5-2680v4 2.4 GHz CPUs, 256 GByte of memory, 300GB local scratch storage;
- 3 compute nodes, each with 2x18 core Intel XEON Skylake-6140 2.3 GHz CPUs, 384 GByte of memory, 300GB local scratch storage;
- 5 compute nodes, each with 2x32 core AMD EPYC-7452 2.35 GHz CPUs, 512 GByte of memory, 480GB local scratch storage;
- 7 compute nodes, each with 2x64 core AMD EPYC-7742 2.25 GHz CPUs, 1 TByte of memory, 480GB local scratch storage;
- All nodes are connected by a 25 GBit/s Nvidia Mellanox Ethernet network;
- SLURM is used for resource allocation and job scheduling (see the example batch script below);
- All nodes run Ubuntu Linux 22.04 LTS.
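As on LiCCA, SLURM schedules all jobs; a minimal batch script for the general-purpose xeon partition (requests and program are placeholders):

```bash
#!/bin/bash
#SBATCH --job-name=example          # placeholder job name
#SBATCH --partition=xeon            # general purpose Intel XEON nodes
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=14          # one socket on the E5-2680v4 nodes
#SBATCH --mem-per-cpu=8G            # below the 9 GiB per core available on these nodes
#SBATCH --time=1-00:00:00           # the xeon partition allows up to 7 days

srun ./my_program                   # placeholder executable
```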
Login, Interactive, and Data Transfer Nodes
A head/login node alcc129.physik.uni-augsburg.de:
- Processor type Intel XEON Skylake-6140;
- Processor base frequency 2.3 GHz;
- Cores per node: 2x18;
- Main memory (RAM) per node 384 GByte;
- 2 NUMA domains with 18 physical cores each;
- 300GB local scratch storage.
Intel XEON Nodes
- 12 compute nodes;
- Processor type Intel XEON E5-2680v4;
- Processor base frequency 2.4 GHz;
- Cores per node: 2x14;
- Main memory (RAM) per node 256 GByte;
- 2 NUMA domains with 14 physical cores each;
- 300GB local scratch storage.
- 3 compute nodes;
- Processor type Intel XEON Skylake-6140;
- Processor base frequency 2.3 GHz;
- Cores per node: 2x18;
- Main memory (RAM) per node 384 GByte;
- 2 NUMA domains with 18 physical cores each;
- 300GB local scratch storage.
AMD EPYC Nodes
- 5 compute nodes;
- Processor type AMD EPYC-7452;
- Processor base frequency 2.35 GHz;
- Cores per node: 2x32;
- Main memory (RAM) per node 512 GByte;
- 2 NUMA domains with 32 physical cores each;
- 480GB local scratch storage.
- 6 compute nodes;
- Processor type AMD EPYC-7742;
- Processor base frequency 2.25 GHz;
- Cores per node: 2x64;
- Main memory (RAM) per node 1 TByte;
- 8 NUMA domains with 16 physical cores each;
- 480GB local scratch storage.
- 1 Graphics Processing Unit (GPU) node;
- Processor type AMD EPYC-7742;
- Processor base frequency 2.25 GHz;
- Cores per node: 2x64;
- Main memory (RAM) per node 1 TByte;
- 8 NUMA domains with 16 physical cores each;
- 480GB local scratch storage;
- GPU type Nvidia Tesla-V100S-PCIE-32GB;
- GPUs per node: 1.
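The single GPU node is reached through the epyc partition (see the table below). The GRES string `tesla_v100s-pcie-32gb` shown there suggests a typed request is possible, but since GRES naming is site configuration, the generic form is used in this sketch:

```bash
#!/bin/bash
#SBATCH --partition=epyc            # the GPU node (alcc136) is part of the epyc partition
#SBATCH --gres=gpu:1                # the single Tesla V100S; typed GRES names are site-specific
#SBATCH --cpus-per-task=16
#SBATCH --mem-per-cpu=7G
#SBATCH --time=1-00:00:00           # the epyc partition allows up to 7 days

nvidia-smi                          # confirm the assigned GPU
srun ./my_gpu_program               # placeholder executable
```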
I/O Subsystem
- 1 PByte of shared disk space
Queue Partitions
HPC Resource Name | Partition | Time limit | Number of Nodes | Nodes | CPU per Node | Cores per Node | RAM per Node | RAM per Core | GPUs per Node | Purpose
---|---|---|---|---|---|---|---|---|---|---
ALCC-test | test | 6 hours | 1 | alcc129 | 2×Intel XEON Skylake-6140 | 2×18 | 384 GiB | <11 GiB | - | short queue for testing on the login node
ALCC-xeon | xeon | 7 days | 12 | alcc[114-125] | 2×Intel XEON E5-2680v4 | 2×14 | 256 GiB | 9 GiB | - | general purpose CPU nodes
ALCC-xeon | xeon | 7 days | 3 | alcc128, alcc[130-131] | 2×Intel XEON Skylake-6140 | 2×18 | 384 GiB | <11 GiB | - | general purpose CPU nodes
ALCC-epyc | epyc | 7 days | 3 | alcc[133-135] | 2×AMD Epyc-7452 | 2×32 | 512 GiB | 8 GiB | - | general purpose CPU nodes
ALCC-epyc | epyc | 7 days | 1 | alcc136 | 2×AMD Epyc-7742 | 2×64 | 1 TiB | 8 GiB | 1×Nvidia tesla_v100s-pcie-32gb | general purpose CPU nodes
ALCC-epyc | epyc | 7 days | 3 | alcc[137-139] | 2×AMD Epyc-7742 | 2×64 | 1 TiB | 8 GiB | - | general purpose CPU nodes
ALCC-epyc | epyc | 7 days | 2 | alcc[140-141] | 2×AMD Epyc-7452 | 2×32 | 512 GiB | 8 GiB | - | general purpose CPU nodes
ALCC-epyc | epyc | 7 days | 3 | alcc[142-144] | 2×AMD Epyc-7713 | 2×64 | 1 TiB | 8 GiB | - | general purpose CPU nodes
∑ | - | - | 28 | - | - | 1696 | 14 TiB | - | 1×Nvidia tesla_v100s-pcie-32gb | -
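Because the epyc partition mixes several node types, it can be worth checking a node's exact resources before constraining a job to it; `alcc136` is the GPU node from the table above:

```bash
scontrol show node alcc136              # CPUs, memory and GRES of the GPU node
sinfo -p xeon,epyc -o "%N %c %m %G"     # node list, cores, memory (MB) and GRES per group
```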
Special Systems