[Figure: LiCCA, front view]
[Figures: front and rear views of one of the racks]

Nodes

All Nodes

  • A head/login node licca-li-01 with 2×64-core AMD EPYC 7713 2.0 GHz CPUs, 1 TByte of memory, 800 GB of local storage, and 1 PByte of global storage;
  • 42 compute nodes, each with 2×64-core AMD EPYC 7713 2.0 GHz CPUs, 1 TByte of memory, and 800 GB of local scratch storage;
  • 4 high-memory compute nodes, each with 2×64-core AMD EPYC 7713 2.0 GHz CPUs, 4 TByte of memory, and 800 GB of local scratch storage;
  • 8 Graphics Processing Unit (GPU) nodes, each with 3×Nvidia A100 80GB PCIe GPUs, 2×64-core AMD EPYC 7713 2.0 GHz CPUs, 1 TByte of memory, and 800 GB of local scratch storage;
  • 1 Graphics Processing Unit (GPU) node with 4×Nvidia A100-SXM-80GB GPUs, 2×64-core AMD EPYC 7713 2.0 GHz CPUs, 2 TByte of memory, and 800 GB of local scratch storage;
  • All nodes are connected by a 100 Gbit/s Nvidia/Mellanox Ethernet network;
  • Slurm is used for resource allocation and job scheduling (see the batch-script sketch below);
  • All nodes run Ubuntu Linux 22.04 LTS.
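
Since Slurm handles scheduling, work is submitted as batch scripts. A minimal sketch, assuming the epyc partition from the Queue Partitions table below (the job name, resource values, and application binary are placeholders):

```
#!/bin/bash
#SBATCH --job-name=example        # placeholder job name
#SBATCH --partition=epyc          # general purpose CPU partition (see table below)
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=2G          # stays below the <8 GiB/core hardware ratio
#SBATCH --time=01:00:00           # well below the 7-day partition limit

srun ./my_app                     # placeholder binary
```

Submit with `sbatch job.sh` and check the queue with `squeue -u $USER`.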

Login, Interactive, and Data Transfer Nodes

A head/login node licca-li-01.rz.uni-augsburg.de:

  • Processor type: AMD EPYC 7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 1 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 800 GB local scratch storage.
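
Interactive access and data transfer go through this node. A sketch of logging in (the username is a placeholder):

```
# Log in to the head/login node; replace <username> with your account name.
ssh <username>@licca-li-01.rz.uni-augsburg.de
```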

CPU Nodes

  • 41 compute nodes;
  • Processor type: AMD EPYC 7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 1 TByte;
  • 8 NUMA domains with 16 physical cores each (see the pinning sketch below);
  • 800 GB local scratch storage.
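
With 8 NUMA domains of 16 cores per node, pinning tasks to cores generally improves memory locality. A hedged sketch of a two-node job with one task per NUMA domain (the application binary is a placeholder):

```
#!/bin/bash
#SBATCH --partition=epyc
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8       # one task per NUMA domain
#SBATCH --cpus-per-task=16        # 16 physical cores per NUMA domain
#SBATCH --time=1-00:00:00

# Bind each task to the physical cores of its NUMA domain.
srun --cpu-bind=cores ./my_app    # placeholder binary
```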

Large RAM Nodes

  • 4 high-memory compute nodes;
  • Processor type: AMD EPYC 7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 4 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 800 GB local scratch storage.
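
Jobs needing more than the roughly 8 GiB/core of the standard nodes can target these machines via the epyc-mem partition (partition name from the table below; the memory figure is illustrative):

```
#!/bin/bash
#SBATCH --partition=epyc-mem
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --mem=2000G               # only satisfiable on the 4 TByte nodes
#SBATCH --time=2-00:00:00

srun ./my_memory_hungry_app       # placeholder binary
```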

GPU Nodes

  • 8 Graphics Processing Unit (GPU) nodes;
  • Processor type: AMD EPYC 7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 1 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 800 GB local scratch storage;
  • GPU type: Nvidia A100 80GB PCIe;
  • GPUs per node: 3;
  • GPU/CPU affinity (see the job sketch below): cores 32-47 are local to /dev/nvidia0, cores 112-127 to /dev/nvidia1, and cores 64-79 to /dev/nvidia2.
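
A sketch of a single-GPU job on the epyc-gpu partition; `--gpu-bind=closest` asks Slurm to place each task on the CPU cores local to its allocated GPU, matching the affinity listed above (the binary is a placeholder):

```
#!/bin/bash
#SBATCH --partition=epyc-gpu
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16        # one 16-core affinity group per GPU
#SBATCH --gres=gpu:1              # one of the three A100s in the node
#SBATCH --time=12:00:00

srun --gpu-bind=closest ./my_gpu_app   # placeholder binary
```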

GPU Nodes with SXM

  • 1 Graphics Processing Unit (GPU) node;
  • Processor type: AMD EPYC 7713;
  • Processor base frequency: 2.0 GHz;
  • Cores per node: 2×64;
  • Main memory (RAM) per node: 2 TByte;
  • 8 NUMA domains with 16 physical cores each;
  • 800 GB local scratch storage;
  • GPU type: Nvidia A100-SXM-80GB;
  • GPUs per node: 4;
  • GPU/CPU affinity (see the job sketch below): cores 48-63 are local to /dev/nvidia0, cores 16-31 to /dev/nvidia1, cores 112-127 to /dev/nvidia2, and cores 80-95 to /dev/nvidia3.
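
This node is intended for jobs that drive several GPUs at once (SXM parts are typically NVLink-coupled, though the intra-node interconnect is not documented here). A sketch with one task per GPU:

```
#!/bin/bash
#SBATCH --partition=epyc-gpu-sxm
#SBATCH --nodes=1
#SBATCH --ntasks=4                # one task per GPU
#SBATCH --cpus-per-task=16        # the 16-core affinity group of each GPU
#SBATCH --gres=gpu:4              # all four A100-SXMs
#SBATCH --time=1-00:00:00

srun --gpu-bind=closest ./my_multi_gpu_app   # placeholder binary
```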

Interconnect

Nvidia/Mellanox 100 Gbit/s Ethernet, ...

I/O Subsystem

  • 1 PByte of shared disk space

Queue Partitions

| Name | Partition | Timelimit | Number of Nodes | Nodes | CPU per node | Cores per node | RAM per node | RAM/Core | GPUs per node | Purpose |
|---|---|---|---|---|---|---|---|---|---|---|
| LiCCA-test | test | 2 hours | 1 | licca001 | 2×AMD EPYC 7713 | 2×64 | 1 TiB | <8 GiB | - | short queue for testing on the login node |
| LiCCA-epyc | epyc | 7 days | 41 | licca[002-042] | 2×AMD EPYC 7713 | 2×64 | 1 TiB | <8 GiB | - | general purpose CPU nodes |
| LiCCA-epyc-mem | epyc-mem | 7 days | 4 | licca[043-046] | 2×AMD EPYC 7713 | 2×64 | 4 TiB | <32 GiB | - | nodes for jobs with high memory requirements |
| LiCCA-epyc-gpu-test | epyc-gpu-test | 6 hours | 1 | licca047 | 2×AMD EPYC 7713 | 2×64 | 1 TiB | <8 GiB | 3×Nvidia A100 80GB | GPU node for development, testing, and short calculations (code must use GPUs) |
| LiCCA-epyc-gpu | epyc-gpu | 3 days | 7 | licca[048-054] | 2×AMD EPYC 7713 | 2×64 | 1 TiB | <8 GiB | 3×Nvidia A100 80GB | general purpose GPU nodes (code must use GPUs) |
| LiCCA-epyc-gpu-sxm | epyc-gpu-sxm | 3 days | 1 | licca055 | 2×AMD EPYC 7713 | 2×64 | 2 TiB | <16 GiB | 4×Nvidia A100-SXM-80GB | special purpose GPU node (code must use multiple GPUs) |
| Total | - | - | 55 | - | - | 7040 | 68 TiB | - | 24×Nvidia A100 80GB + 4×Nvidia A100-SXM-80GB | - |
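
The current limits and node states of these partitions can be queried on the login node with standard Slurm commands, for example:

```
sinfo -o "%P %l %D %N"             # partition, timelimit, node count, node list
scontrol show partition epyc-gpu   # full settings of a single partition
```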

Special Systems