Main compute nodes
PENDING
Ad-interim solutions
Partition lcg_c2pap
- Partition with increased priority for jobs in C2PAP projects (an example job script is shown after this list)
- 5 nodes
- 2 × 192 cores AMD EPYC, ~2 × 387 GB RAM
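A minimal SLURM batch script for this partition could look like the following sketch; the partition name is taken from the list above, while the job name, resource requests, module and executable are placeholders that have to be adapted to your project:

    #!/bin/bash
    #SBATCH --job-name=my_c2pap_job      # placeholder job name
    #SBATCH --partition=lcg_c2pap        # partition with increased priority for C2PAP projects
    #SBATCH --nodes=1                    # number of nodes, adapt to your needs
    #SBATCH --ntasks-per-node=8          # tasks per node, placeholder value
    #SBATCH --time=02:00:00              # requested walltime, placeholder value

    module load my_application           # hypothetical module name
    srun ./my_application                # placeholder executable

Submit the script with sbatch job.sh and monitor it with squeue.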
LRZ CoolMUC-4
- Shared use with a large pool of users (e.g. TUM, LMU)
- 100 nodes in the cm4_tiny and cm4_std partitions, 6 nodes in cm4_inter
- 2 × 112 Intel Xeon Platinum 8480+, 523 GB RAM
- A detailed description is available at https://doku.lrz.de/job-processing-on-the-linux-cluster-10745970.html
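The state of the partitions and of your own jobs can be inspected with the usual SLURM commands; whether CoolMUC-4 has to be addressed as a separate cluster (--clusters flag) should be checked against the LRZ documentation linked above:

    sinfo --partition=lcg_c2pap    # node states of the lcg_c2pap partition
    sinfo --clusters=cm4           # CoolMUC-4 partitions, assuming the cluster is named cm4
    squeue --user=$USER            # your pending and running jobs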
Storage
Home
$HOME: /dss/dsshome1/<number>/<user_id>/
The home directories have a maximum size of 100 GB. They are located on the LRZ NAS servers and mounted via NFS; the bandwidth to the home directories is only of the order of 10 GB/s, so access is much slower than to GPFS. Home directories are automatically backed up to LRZ's tape system once a week.
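To stay below the 100 GB limit, the usage of the home directory can be checked with standard tools (LRZ may in addition provide a dedicated quota command; see the DSS documentation):

    du -sh $HOME               # total size of your home directory
    du -sh $HOME/* | sort -h   # per-directory breakdown, sorted by size
    df -h $HOME                # usage of the underlying file system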
SCRATCH
$SCRATCH: /dss/lxclscratch/<number>/<user_id>/
Data in scratch is only kept for a limited time and may be removed when space is needed. Always transfer critical data somewhere else! When $SCRATCH fills up, data is removed without warning, following a FIFO (first in, first out) principle, i.e. the oldest files are deleted first.
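For example, finished results can be copied from scratch to the backed-up home directory with rsync before they are purged; the directory names below are placeholders:

    # copy a results directory from scratch to home (placeholder paths)
    rsync -av --progress $SCRATCH/myproject/results/ $HOME/myproject/results/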
Data on GPFS is protected by RAID 6. For every batch of 10 disks there are two parity disks, so if one or two disks fail at the same time, the data can still be recovered and replacement disks can be inserted without any data loss. If you need to store large data sets for an extended period of time, you can apply for a tape backup at LRZ; please contact the staff for details.
Software
The operating system is SUSE Linux Enterprise Server (SLES) 15 SP6. The software stack is provided via the module system and only needs to be activated by loading the corresponding modules.
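Typical module commands are shown below; gcc is only an example, the actually available packages can be listed with module avail:

    module avail          # list all available software modules
    module load gcc       # load a module (package name is an example)
    module list           # show the currently loaded modules
    module unload gcc     # unload the module again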
OS Containers and CVMFS
An alternative environment can, in principle, be set up using Singularity to load prepared containers (Singularity images). This allows running binary software packages that were built for other environments, e.g. Scientific Linux or CentOS. This is standard procedure for the ATLAS and Belle2 projects.
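A sketch of how such a container could be used, assuming Singularity is available on the nodes and pulling from Docker Hub is permitted; the image and binary names are placeholders:

    # run a command inside a prepared container image (placeholder file name)
    singularity exec my_environment.sif ./my_binary

    # or pull a base image, e.g. CentOS 7, directly from Docker Hub
    singularity exec docker://centos:7 cat /etc/os-release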
Related to this is access to software packages via CVMFS. The C2PAP cluster provides the /cvmfs directory on all nodes.
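Which CVMFS repositories are available can be checked directly on a node; note that repositories are typically mounted on demand, so they may only show up after the first access:

    ls /cvmfs/                 # list the mounted CVMFS repositories
    ls /cvmfs/atlas.cern.ch/   # example: ATLAS software repository, if configured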
Please contact C2PAP support if you want to use containers or CVMFS for your project.