[Photos: LiCCA front view; front and rear views of one of the racks]

Nodes

All Nodes

  • A head/login node licca-li-01 with 2×64-core AMD EPYC 7713 2.0 GHz CPUs, 1 TByte of memory, 800 GB of local storage, and 1 PByte of global storage;
  • 41 compute nodes, each with 2×64-core AMD EPYC 7713 2.0 GHz CPUs, 1 TByte of memory, and 800 GB of local scratch storage;
  • 4 high-memory compute nodes, each with 2×64-core AMD EPYC 7713 2.0 GHz CPUs, 4 TByte of memory, and 800 GB of local scratch storage;
  • 8 Graphics Processing Unit (GPU) nodes, each with 3×Nvidia A100 80GB PCIe, 2×64-core AMD EPYC 7713 2.0 GHz CPUs, 1 TByte of memory, and 800 GB of local scratch storage;
  • 1 GPU node with 4×Nvidia A100-SXM-80GB, 2×64-core AMD EPYC 7713 2.0 GHz CPUs, 2 TByte of memory, and 800 GB of local scratch storage;
  • 2 GPU nodes, each with 2×2×Nvidia H100-NVL-94GB, 2×16-core Intel Xeon Gold 6526Y 2.8 GHz CPUs, 512 GByte of memory, and 800 GB of local scratch storage;
  • All nodes are connected by 100 Gbit/s Nvidia Mellanox Ethernet;
  • SLURM is used for resource allocation and job scheduling (see the query sketch after this list);
  • All nodes run Ubuntu Linux 22.04 LTS.
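
Since SLURM does the scheduling, the current partition and node layout can be queried directly on the login node. A minimal sketch using standard Slurm commands (the exact output columns depend on the site configuration):

    # List every partition with its time limit, node count, and node list
    sinfo -o "%P %l %D %N"

    # Show the hardware description Slurm holds for a single node
    scontrol show node licca002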

Login, Interactive, and Data Transfer Nodes

The head/login node licca-li-01.rz.uni-augsburg.de (licca001):

  • Processor type: AMD EPYC 7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 1 TByte;
  • 8 NUMA domains with 16 physical cores each (see the check after this list);
  • 800 GB local scratch storage.
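
The NUMA layout matters when pinning processes to cores; it can be confirmed on the node itself with standard Linux tools (numactl may have to be installed or loaded first):

    # Sockets, cores per socket, and NUMA domain count
    lscpu | grep -E 'Socket|Core|NUMA'

    # Which cores and how much memory belong to each of the 8 domains
    numactl --hardware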

CPU Nodes

  • 41 compute nodes (licca[002-042]);
  • Processor type: AMD EPYC 7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 1 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 800 GB local scratch storage.
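
A minimal batch-script sketch for these nodes, assuming the epyc partition from the table further below; the executable is a placeholder, and tasks, memory, and walltime should be adapted to the job:

    #!/bin/bash
    #SBATCH --job-name=cpu-job
    #SBATCH --partition=epyc         # general purpose CPU nodes licca[002-042]
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=128      # one full 2x64-core node
    #SBATCH --mem=0                  # request all memory of the node
    #SBATCH --time=1-00:00:00        # must stay below the 7-day partition limit

    srun ./my_cpu_application        # placeholder executable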

Large RAM Nodes

  • 4 high-memory compute nodes (licca[043-046]);
  • Processor type: AMD EPYC 7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 4 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 800 GB local scratch storage.
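
For jobs whose memory demand exceeds the 1 TByte of the standard nodes, a sketch targeting the epyc-mem partition (placeholder executable, values to be adapted):

    #!/bin/bash
    #SBATCH --job-name=bigmem-job
    #SBATCH --partition=epyc-mem     # 4 TByte nodes licca[043-046]
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=32
    #SBATCH --mem=2000G              # more than any standard 1 TByte node offers
    #SBATCH --time=2-00:00:00        # 7-day partition limit applies

    srun ./my_bigmem_application     # placeholder executable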

GPU Nodes

  • 8 Graphics Processing Unit (GPU) nodes (licca[047-054]);
  • Processor type: AMD EPYC 7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 1 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 800 GB local scratch storage;
  • GPU type: Nvidia A100 80GB PCIe;
  • GPUs per node: 3 (core affinities below; see the sketch after this list);
  • Cores 32-47: /dev/nvidia0;
  • Cores 112-127: /dev/nvidia1;
  • Cores 64-79: /dev/nvidia2.
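
The core ranges above are the NUMA affinity of each GPU; Slurm is typically configured to hand out matching cores alongside the GPU. A sketch requesting a single A100 (the generic GRES name gpu is assumed; whether a typed name such as gpu:a100 exists is site configuration):

    #!/bin/bash
    #SBATCH --job-name=gpu-job
    #SBATCH --partition=epyc-gpu     # licca[048-054], 3x A100 80GB each
    #SBATCH --gres=gpu:1             # one GPU; generic GRES name assumed
    #SBATCH --cpus-per-task=16       # matches one GPU's 16-core NUMA domain
    #SBATCH --time=1-00:00:00        # 3-day partition limit

    # Print the GPU/CPU affinity matrix actually visible to the job
    nvidia-smi topo -m
    srun ./my_gpu_application        # placeholder executable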

GPU Nodes with SXM

  • 1 Graphics Processing Unit (GPU) node (licca055);
  • Processor type: AMD EPYC 7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 2 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 800 GB local scratch storage;
  • GPU type: Nvidia A100-SXM-80GB;
  • GPUs per node: 4;
  • Cores 48-63: /dev/nvidia0;
  • Cores 16-31: /dev/nvidia1;
  • Cores 112-127: /dev/nvidia2;
  • Cores 80-95: /dev/nvidia3.
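
This node is meant for codes that drive several GPUs at once (the SXM form factor connects the four A100s via NVLink). A sketch requesting all four (placeholder executable):

    #!/bin/bash
    #SBATCH --job-name=sxm-job
    #SBATCH --partition=epyc-gpu-sxm # licca055, 4x A100-SXM-80GB
    #SBATCH --gres=gpu:4             # all four NVLink-connected GPUs
    #SBATCH --cpus-per-task=64       # 4 GPUs x 16-core affinity domains
    #SBATCH --time=1-00:00:00        # 3-day partition limit

    srun ./my_multi_gpu_application  # placeholder executable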

GPU Nodes with H100-NVL

  • 2 Graphics Processing Unit (GPU) nodes (licca[056-057]): 1 from the Chair of Theoretical Physics III, and 1 from Medicine Informatics and the RZ;
  • Processor type: Intel Xeon Gold 6526Y;
  • Processor base frequency: 2.8 GHz;
  • Cores per node: 2×16;
  • Main memory (RAM) per node: 512 GByte;
  • 4 NUMA domains with 8 physical cores each;
  • 800 GB local scratch storage;
  • GPU type: Nvidia H100-NVL-94GB;
  • GPUs per node: 4 (2 pairs; see the sketch after this list);
  • Pair 1: cores 8-15: /dev/nvidia0;
  • Pair 1: cores 0-7: /dev/nvidia1;
  • Pair 2: cores 24-31: /dev/nvidia2;
  • Pair 2: cores 16-23: /dev/nvidia3.
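
Since the H100-NVL cards are installed as pairs, jobs should request GPUs in multiples of two so that each job receives a complete pair. A sketch (generic GRES name assumed, placeholder executable):

    #!/bin/bash
    #SBATCH --job-name=h100-job
    #SBATCH --partition=xeon-gpu     # licca[056-057], 2 pairs of H100-NVL each
    #SBATCH --gres=gpu:2             # one complete pair
    #SBATCH --cpus-per-task=16       # two 8-core affinity domains
    #SBATCH --time=1-00:00:00        # 3-day partition limit

    srun ./my_paired_gpu_application # placeholder executable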

Interconnect

Nvidia Mellanox 100 Gbit/s Ethernet, ...

I/O Subsystem

  • 1 PByte of shared disk space

Queue Partitions

HPC Resource        | Partition     | Timelimit | Nodes | Node names     | CPU per node            | Cores | RAM     | RAM/Core | GPUs per node            | Purpose
--------------------+---------------+-----------+-------+----------------+-------------------------+-------+---------+----------+--------------------------+--------------------------------------------
LiCCA-test          | test          | 2 hours   | 1     | licca001       | 2×AMD Epyc-7713         | 2×64  | 1 TiB   | <8 GiB   | -                        | short queue for testing on the login node
LiCCA-epyc          | epyc          | 7 days    | 41    | licca[002-042] | 2×AMD Epyc-7713         | 2×64  | 1 TiB   | <8 GiB   | -                        | general purpose CPU nodes
LiCCA-epyc-mem      | epyc-mem      | 7 days    | 4     | licca[043-046] | 2×AMD Epyc-7713         | 2×64  | 4 TiB   | <32 GiB  | -                        | nodes for jobs with high memory requirements
LiCCA-epyc-gpu-test | epyc-gpu-test | 6 hours   | 1     | licca047       | 2×AMD Epyc-7713         | 2×64  | 1 TiB   | <8 GiB   | 3×Nvidia A100 80GB       | GPU node for development, testing, and short calculations (code must use GPUs)
LiCCA-epyc-gpu      | epyc-gpu      | 3 days    | 7     | licca[048-054] | 2×AMD Epyc-7713         | 2×64  | 1 TiB   | <8 GiB   | 3×Nvidia A100 80GB       | general purpose GPU nodes (code must use GPUs)
LiCCA-epyc-gpu-sxm  | epyc-gpu-sxm  | 3 days    | 1     | licca055       | 2×AMD Epyc-7713         | 2×64  | 2 TiB   | <16 GiB  | 4×Nvidia A100-SXM-80GB   | special purpose GPU node (code must use multiple GPUs)
LiCCA-xeon-gpu      | xeon-gpu      | 3 days    | 2     | licca[056-057] | 2×Intel Xeon Gold 6526Y | 2×16  | 512 GiB | <16 GiB  | 2×2×Nvidia H100-NVL-94GB | special purpose GPU nodes (code must use GPUs in pairs)*
Total               | -             | -         | 57    | -              | -                       | 7104  | 69 TiB  | -        | 24×Nvidia A100, 4×Nvidia A100-SXM-80GB, 4×2×Nvidia H100-NVL-94GB | -

* licca[056-057]: 1 node from the Chair of Theoretical Physics III, and 1 from Medicine Informatics and the RZ.
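
The limits above are site configuration and may change; they can be confirmed on the login node with standard Slurm queries, for example:

    # Full definition of one partition, including MaxTime and default resources
    scontrol show partition epyc-gpu

    # All jobs of the current user, with partition and state
    squeue -u $USER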

Special Systems