The current Linux Compute Cluster Augsburg (LiCCA) includes:

[Figures: LiCCA front view; front and rear views of one of the racks]

Nodes

ALL

  • A head/login node licca-li-01 with 2×64-core AMD EPYC-7713 2.0 GHz CPUs, 1 TByte of memory, 800 GB of local storage, and 1 PB of global storage;
  • 42 compute nodes, each with 2×64-core AMD EPYC-7713 2.0 GHz CPUs, 1 TByte of memory, and 800 GB of local scratch storage;
  • 4 high-memory compute nodes, each with 2×64-core AMD EPYC-7713 2.0 GHz CPUs, 4 TByte of memory, and 800 GB of local scratch storage;
  • 8 Graphics Processing Unit (GPU) nodes, each with 3×Nvidia A100 80GB PCIe GPUs, 2×64-core AMD EPYC-7713 2.0 GHz CPUs, 1 TByte of memory, and 800 GB of local scratch storage;
  • 1 Graphics Processing Unit (GPU) node with 4×Nvidia A100-SXM-80GB GPUs, 2×64-core AMD EPYC-7713 2.0 GHz CPUs, 2 TByte of memory, and 800 GB of local scratch storage;
  • All nodes are connected by a 100 Gbit/s Nvidia Mellanox Ethernet network;
  • SLURM is used for resource allocation and job scheduling (see the sketch after this list);
  • All nodes run Ubuntu Linux 22.04 LTS.
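
Since SLURM handles all resource allocation, a quick way to check which partitions, time limits, and node counts are currently configured is to query the scheduler from the login node. The following is a minimal sketch, assuming the SLURM client tools (sinfo) are on the PATH of licca-li-01; the chosen output columns are illustrative only.

```python
#!/usr/bin/env python3
"""List the SLURM partitions visible from the login node."""
import subprocess

def list_partitions() -> None:
    # %P = partition, %l = time limit, %D = node count,
    # %c = CPUs per node, %m = memory per node (MB)
    out = subprocess.run(
        ["sinfo", "--format=%P %l %D %c %m"],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        print(line)

if __name__ == "__main__":
    list_partitions()
```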

Login, Interactive, and Data Transfer Nodes

A head/login node licca-li-01.rz.uni-augsburg.de:

  • Processor type: AMD EPYC-7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 1 TByte;
  • 8 NUMA domains with 16 physical cores each (see the sketch after this list);
  • 800 GB local scratch storage.
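
NUMA placement matters on these dual-socket EPYC nodes: a process that strays across domains pays for remote memory accesses. The sketch below assumes nothing beyond the Python standard library and the lscpu tool shipped with Ubuntu 22.04; it prints the cores granted to the current job and the node's NUMA layout so the two can be compared.

```python
#!/usr/bin/env python3
"""Compare the cores granted by SLURM with the node's NUMA layout."""
import os
import subprocess

# Cores this process may run on (restricted by SLURM's affinity/cgroup settings).
print("cores available to this process:", sorted(os.sched_getaffinity(0)))

# Full topology: lscpu prints one "NUMA nodeX CPU(s): ..." line per domain.
lscpu = subprocess.run(["lscpu"], capture_output=True, text=True, check=True).stdout
for line in lscpu.splitlines():
    if line.startswith("NUMA"):
        print(line.strip())
```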

CPU Nodes

  • 41 compute nodes;
  • Processor type: AMD EPYC-7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 1 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 800 GB local scratch storage.

Large RAM Nodes

  • 4 high-memory compute nodes;
  • Processor type: AMD EPYC-7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 4 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 800 GB local scratch storage.

GPU Nodes

  • 8 Graphics Processing Unit (GPU) nodes;
  • Processor type: AMD EPYC-7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 1 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 800 GB local scratch storage;
  • GPU type: Nvidia A100 80GB PCIe;
  • GPUs per node: 3;
  • GPU-to-CPU-core affinity (see the sketch after this list):
    /dev/nvidia0: cores 32-47;
    /dev/nvidia1: cores 112-127;
    /dev/nvidia2: cores 64-79.
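
The affinity mapping above can be verified from inside a job, which is useful when pinning ranks or threads next to the GPU they drive. This is a minimal sketch, assuming the job runs on a GPU node where nvidia-smi is on the PATH; it only prints information and changes nothing.

```python
#!/usr/bin/env python3
"""Show the GPUs and CPU cores bound to the current SLURM job."""
import os
import subprocess

# GPUs handed to this job by SLURM (e.g. "0" or "0,1,2").
print("CUDA_VISIBLE_DEVICES =", os.environ.get("CUDA_VISIBLE_DEVICES", "<unset>"))

# Cores this process may run on; compare with the affinity list above.
print("bound cores:", sorted(os.sched_getaffinity(0)))

# Topology matrix: its "CPU Affinity" column repeats the per-GPU core ranges.
print(subprocess.run(["nvidia-smi", "topo", "-m"],
                     capture_output=True, text=True, check=True).stdout)
```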

GPU Nodes with SXM

  • 1 Graphics Processing Unit (GPU) node;
  • Processor type: AMD EPYC-7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 2 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 800 GB local scratch storage;
  • GPU type: Nvidia A100-SXM-80GB;
  • GPUs per node: 4;
  • GPU-to-CPU-core affinity:
    /dev/nvidia0: cores 48-63;
    /dev/nvidia1: cores 16-31;
    /dev/nvidia2: cores 112-127;
    /dev/nvidia3: cores 80-95.

GPU Nodes with H100-NVL

  • From the Chair of Theoretical Physics III;
  • 1 Graphics Processing Unit (GPU) node;
  • Processor type: Intel XEON Gold 6526Y;
  • Processor base frequency: 2.8 GHz;
  • Cores per node: 2×16;
  • Main memory (RAM) per node: 512 GByte;
  • 4 NUMA domains with 8 physical cores each;
  • 800 GB local scratch storage;
  • GPU type: Nvidia H100-NVL-94GB;
  • GPUs per node: 4 (2 pairs);
  • GPU-to-CPU-core affinity (see the sketch after this list):
    Pair 1: /dev/nvidia0: cores 8-15; /dev/nvidia1: cores 0-7;
    Pair 2: /dev/nvidia2: cores 24-31; /dev/nvidia3: cores 16-23.
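
Because these cards are intended to be used in pairs, it can help to confirm from inside a job which devices see each other directly before splitting work across them. The sketch below assumes a Python environment with CUDA-enabled PyTorch is available to the job (not something the cluster description above guarantees); it merely reports which device pairs allow peer access.

```python
#!/usr/bin/env python3
"""Report which GPU pairs in the current job allow direct peer access."""
import torch

n = torch.cuda.device_count()
for i in range(n):
    for j in range(i + 1, n):
        if torch.cuda.can_device_access_peer(i, j):
            # On this node the bridged pairs should show up here; keep such a
            # pair together in one job step.
            print(f"GPU {i} <-> GPU {j}: peer access available")
```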

Interconnect

Mellanox 100 GBit/s Ethernet, ...

I/O Subsystem

  • 1 PByte of shared disk space

Queue Partitions

HPC Resource        | Partition     | Timelimit | # Nodes | Nodes          | CPU (per node)          | Cores | RAM      | RAM/Core | GPU                      | Purpose
LiCCA-test          | test          | 2 hours   | 1       | licca001       | 2×AMD Epyc-7713         | 2×64  | 1 TiB    | <8 GiB   | -                        | short queue for testing on the login node
LiCCA-epyc          | epyc          | 7 days    | 41      | licca[002-042] | 2×AMD Epyc-7713         | 2×64  | 1 TiB    | <8 GiB   | -                        | general purpose CPU nodes
LiCCA-epyc-mem      | epyc-mem      | 7 days    | 4       | licca[043-046] | 2×AMD Epyc-7713         | 2×64  | 4 TiB    | <32 GiB  | -                        | nodes for jobs with high memory requirements
LiCCA-epyc-gpu-test | epyc-gpu-test | 6 hours   | 1       | licca047       | 2×AMD Epyc-7713         | 2×64  | 1 TiB    | <8 GiB   | 3×Nvidia A100 80GB       | GPU node for development, testing, and short calculations (code must use GPUs)
LiCCA-epyc-gpu      | epyc-gpu      | 3 days    | 7       | licca[048-054] | 2×AMD Epyc-7713         | 2×64  | 1 TiB    | <8 GiB   | 3×Nvidia A100 80GB       | general purpose GPU nodes (code must use GPUs)
LiCCA-epyc-gpu-sxm  | epyc-gpu-sxm  | 3 days    | 1       | licca055       | 2×AMD Epyc-7713         | 2×64  | 2 TiB    | <16 GiB  | 4×Nvidia A100-SXM-80GB   | special purpose GPU node (code must use multiple GPUs)
LiCCA-xeon-gpu      | xeon-gpu      | 1 day     | 1       | licca056       | 2×Intel Xeon Gold 6526Y | 2×16  | 512 GiB  | <16 GiB  | 2×2×Nvidia H100-NVL-94GB | special purpose GPU node (code must use GPUs in pairs), from Theoretical Physics III
Total               | -             | -         | 56      | -              | -                       | 7072  | 68.5 TiB | -        | 24×Nvidia A100, 4×Nvidia A100-SXM-80GB, 2×2×Nvidia H100-NVL-94GB | -
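
A job targets one of these partitions through its batch-script header. The script below is a minimal sketch of putting the table's values to use, assuming it is run on the login node with sbatch on the PATH; the executable name and the exact resource amounts are placeholders to adapt, not recommended defaults.

```python
#!/usr/bin/env python3
"""Compose a batch script for the epyc-gpu partition and hand it to sbatch."""
import subprocess
import textwrap

job = textwrap.dedent("""\
    #!/bin/bash
    # Partition, walltime and GPU count follow the table above; the walltime
    # must stay below the partition's 3-day limit.
    #SBATCH --job-name=example
    #SBATCH --partition=epyc-gpu
    #SBATCH --time=1-00:00:00
    #SBATCH --nodes=1
    #SBATCH --gres=gpu:1
    #SBATCH --cpus-per-task=16
    srun ./my_gpu_program   # placeholder executable
    """)

# sbatch reads the script from stdin and prints the new job ID.
result = subprocess.run(["sbatch"], input=job, text=True,
                        capture_output=True, check=True)
print(result.stdout.strip())
```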

Special Systems


The Augsburg Linux Compute Cluster (ALCC) includes:

[Figures: LiCCA/ALCC front view; GPFS and ALCC nodes, front view; LiCCA/ALCC rear view]

Nodes

ALL

  • A head/login node alcc129 with 2×18-core Intel XEON Skylake-6140 2.3 GHz CPUs, 384 GByte of memory, 300 GB of local storage, and 1 PB of global storage;
  • 12 compute nodes, each with 2×14-core Intel XEON E5-2680v4 2.4 GHz CPUs, 256 GByte of memory, and 300 GB of local scratch storage;
  • 3 compute nodes, each with 2×18-core Intel XEON Skylake-6140 2.3 GHz CPUs, 384 GByte of memory, and 300 GB of local scratch storage;
  • 5 compute nodes, each with 2×32-core AMD EPYC-7452 2.35 GHz CPUs, 512 GByte of memory, and 480 GB of local scratch storage;
  • 7 compute nodes, each with 2×64-core AMD EPYC-7742 2.25 GHz CPUs, 1 TByte of memory, and 480 GB of local scratch storage;
  • All nodes are connected by a 25 Gbit/s Nvidia Mellanox network;
  • SLURM is used for resource allocation and job scheduling;
  • All nodes run Ubuntu Linux 22.04 LTS.

Login, Interactive, and Data Transfer Nodes

A head/login node alcc129.physik.uni-augsburg.de:

  • Processor type: Intel XEON Skylake-6140;
  • Processor base frequency: 2.3 GHz;
  • Cores per node: 2×18;
  • Main memory (RAM) per node: 384 GByte;
  • 2 NUMA domains with 18 physical cores each;
  • 300 GB local scratch storage.

Intel XEON Nodes

  • 12 compute nodes;
  • Processor type: Intel XEON E5-2680v4;
  • Processor base frequency: 2.4 GHz;
  • Cores per node: 2×14;
  • Main memory (RAM) per node: 256 GByte;
  • 2 NUMA domains with 14 physical cores each;
  • 300 GB local scratch storage.


  • 3 compute nodes;
  • Processor type: Intel XEON Skylake-6140;
  • Processor base frequency: 2.3 GHz;
  • Cores per node: 2×18;
  • Main memory (RAM) per node: 384 GByte;
  • 2 NUMA domains with 18 physical cores each;
  • 300 GB local scratch storage.

AMD EPYC Nodes

  • 5 compute nodes;
  • Processor type: AMD EPYC-7452;
  • Processor base frequency: 2.35 GHz;
  • Cores per node: 2×32;
  • Main memory (RAM) per node: 512 GByte;
  • 2 NUMA domains with 32 physical cores each;
  • 480 GB local scratch storage.


  • 6 compute nodes;
  • Processor type: AMD EPYC-7742;
  • Processor base frequency: 2.25 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 1 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 480 GB local scratch storage.


  • 1 Graphics Processing Unit (GPU) node;
  • Processor type: AMD EPYC-7742;
  • Processor base frequency: 2.25 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 1 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 480 GB local scratch storage;
  • GPU type: Nvidia Tesla-V100S-PCIE-32GB;
  • GPUs per node: 1.

I/O Subsystem

  • 1 PByte of shared disk space

Queue Partitions

HPC Resource | Partition | Timelimit | # Nodes | Nodes                  | CPU (per node)            | Cores | RAM     | RAM/Core | GPU                            | Purpose
ALCC-test    | test      | 6 hours   | 1       | alcc129                | 2×Intel XEON Skylake-6140 | 2×18  | 384 GiB | <11 GiB  | -                              | short queue for testing on the login node
ALCC-xeon    | xeon      | 7 days    | 12      | alcc[114-125]          | 2×Intel XEON E5-2680v4    | 2×14  | 256 GiB | 9 GiB    | -                              | general purpose CPU nodes
             |           |           | 3       | alcc128, alcc[130-131] | 2×Intel XEON Skylake-6140 | 2×18  | 384 GiB | <11 GiB  | -                              |
ALCC-epyc    | epyc      | 7 days    | 3       | alcc[133-135]          | 2×AMD Epyc-7452           | 2×32  | 512 GiB | 8 GiB    | -                              | general purpose CPU nodes
             |           |           | 1       | alcc136                | 2×AMD Epyc-7742           | 2×64  | 1 TiB   | 8 GiB    | 1×Nvidia Tesla-V100S-PCIE-32GB |
             |           |           | 3       | alcc[137-139]          | 2×AMD Epyc-7742           | 2×64  | 1 TiB   | 8 GiB    | -                              |
             |           |           | 2       | alcc[140-141]          | 2×AMD Epyc-7452           | 2×32  | 512 GiB | 8 GiB    | -                              |
             |           |           | 3       | alcc[142-144]          | 2×AMD Epyc-7713           | 2×64  | 1 TiB   | 8 GiB    | -                              |
Total        | -         | -         | 28      | -                      | -                         | 1696  | 14 TiB  | -        | 1×Nvidia Tesla-V100S-PCIE-32GB | -
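
Before picking a partition it can be worth inspecting a single node's SLURM record, for example the one node in the epyc partition that carries the V100S. This is a minimal sketch, assuming the scontrol client is available on alcc129; it only reads scheduler state.

```python
#!/usr/bin/env python3
"""Print the SLURM record of the ALCC GPU node alcc136."""
import subprocess

info = subprocess.run(["scontrol", "show", "node", "alcc136"],
                      capture_output=True, text=True, check=True).stdout

# Lines of interest include CPUTot, RealMemory, Gres (lists the V100S),
# Partitions and State.
print(info)
```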

Special Systems