The December module updates and deprecations have been rolled out today. Please look out for deprecation warnings in your Slurm output.
Please be aware of the following module changes (if "(default)" appears at the end of an entry, that version is the new default). A short example of listing versions and loading a specific, non-default version follows the new/updated module lists below.
New/updated scientific Modules:
====================
elk/10.7.8-impi2021.10-intel2023.2 (default)
gromacs/2025.4-ompi5.0-gcc13.2-mkl2023.2-cuda12.9
nwchem/7.3.1-ompi5.0-cf (default)
octave/10.3.0-cf (default)
orca/6.1.1 (default)
qchem/6.3.1
qchem/6.4.0 (default)
siesta/5.4.1-ompi5.0-cf (default)
New/updated common Modules:
====================
cmake/3.31.10
cmake/4.2.1 (default)
ffmpeg/8.0.1 (default)
meson/1.10.0 (default)
meson/1.9.2
ninja/1.13.2 (default)
anaconda/2025.12 (default)
apptainer/1.4.5 (default)
cudnn/cu11x/9.10.2.21 (default)
cudnn/cu12x/9.17.0.29 (default)
cuquantum/cu11x/25.06.0.10 (default)
cuquantum/cu12x/25.11.1.11 (default)
cutensor/cu11x/2.2.0.0 (default)
cutensor/cu12x/2.4.1.4 (default)
gdrcopy/2.5.1
go/1.24.11
go/1.25.5 (default)
julia/1.10.10
julia/1.12.2 (default)
micromamba/2.4.0 (default)
miniforge/25.11.0 (default)
nccl/cu12.9/2.27.7
nccl/cu12.9/2.28.9 (default)
openjdk/11.0.29+7
openjdk/17.0.17+10
openjdk/21.0.9+10
openjdk/25.0.1+8 (default)
openjdk/8.u472-b08
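
For reference, here is a minimal sketch of how to check which version is the default and how to pin a specific version in a job script. These are standard Lmod commands; the qchem versions are just taken from the list above.

# List all installed versions of a package; the default is marked with (D)
module avail qchem

# Loading without a version gives the (new) default, here qchem/6.4.0
module load qchem

# Pin an explicit version in job scripts where reproducibility matters
module load qchem/6.3.1

# Show what is currently loaded
module list
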
Deprecated Modules (to be hidden on 15th of January 2026 and removed on 30th of January 2026):
=================================================
cmake/4.0.3: Please use cmake/4.2.1 or higher!
anaconda/2024.06: Please use anaconda/2024.10 or higher!
apptainer/1.3.5: Please use apptainer/1.3.6 or higher!
julia/1.10.8: Please use julia/1.10.10 or higher!
julia/1.11.3: Please use julia/1.12.2 or higher!
elk/10.5.16-impi2021.10-intel2023.2: Please use elk/10.7.8-impi2021.10-intel2023.2 or higher!
elk/10.6.2-impi2021.10-intel2023.2: Please use elk/10.7.8-impi2021.10-intel2023.2 or higher!
ffmpeg/6.1: Please use ffmpeg/7.0.1 or higher!
meson/1.4.2: Please use meson/1.8.4 or higher!
meson/1.5.2: Please use meson/1.8.4 or higher!
meson/1.7.2: Please use meson/1.8.4 or higher!
micromamba/2.0.5: Please use micromamba/2.4.0 or higher!
micromamba/2.2.0: Please use micromamba/2.4.0 or higher!
micromamba/2.3.0: Please use micromamba/2.4.0 or higher!
qchem/6.3.0: Please use qchem/6.3.1 or higher!
octave/8.4.0-cf: Please use octave/9.1.0-cf or higher!
siesta/5.2.0-ompi4.1-cf: Please use siesta/5.4.1-ompi5.0-cf or higher!
siesta/5.4.0-ompi5.0-cf: Please use siesta/5.4.1-ompi5.0-cf or higher!
comsol/6.1: Please use comsol/6.2 or higher!
cuda/11.6.2: Please use cuda/11.8.0 or higher!
cuda/12.1.1: Please use cuda/12.5.1 or higher!
cuda/12.2.2: Please use cuda/12.5.1 or higher!
cuda/12.3.2: Please use cuda/12.5.1 or higher!
cuda/12.4.1: Please use cuda/12.5.1 or higher!
cuda/12.6.2: Please use cuda/12.6.3 or higher!
cuda/12.8.0: Please use cuda/12.8.1 or higher!
cuda-compat/12.9.1: The current cuda driver is already newer!
nccl/cu12.2/2.21.5: Please use nccl/cu12.5/2.21.5 or higher!
nccl/cu12.4/2.21.5: Please use nccl/cu12.5/2.21.5 or higher!
cudnn/cu11x/8.9.7.29: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu11x/9.0.0.312: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu11x/9.1.1.17: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu11x/9.2.1.18: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu11x/9.3.0.75: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu11x/9.4.0.58: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu11x/9.5.1.17: Please use cudnn/cu11x/9.10.2.21 or higher!
cudnn/cu12x/8.9.7.29: Please use cudnn/cu12x/9.17.0.29 or higher!
cudnn/cu12x/9.0.0.312: Please use cudnn/cu12x/9.17.0.29 or higher!
cudnn/cu12x/9.1.1.17: Please use cudnn/cu12x/9.17.0.29 or higher!
cudnn/cu12x/9.2.1.18: Please use cudnn/cu12x/9.17.0.29 or higher!
cudnn/cu12x/9.3.0.75: Please use cudnn/cu12x/9.17.0.29 or higher!
cudnn/cu12x/9.4.0.58: Please use cudnn/cu12x/9.17.0.29 or higher!
cudnn/cu12x/9.5.1.17: Please use cudnn/cu12x/9.17.0.29 or higher!
cutensor/cu11x/2.0.1.2: Please use cutensor/cu11x/2.2.0.0 or higher!
cutensor/cu11x/2.0.2.5: Please use cutensor/cu11x/2.2.0.0 or higher!
cutensor/cu12x/2.0.1.2: Please use cutensor/cu12x/2.4.1.4 or higher!
cutensor/cu12x/2.0.2.5: Please use cutensor/cu12x/2.4.1.4 or higher!
cuquantum/cu11x/24.03.0.4: Please use cuquantum/cu11x/25.06.0.10 or higher!
cuquantum/cu11x/24.08.0.5: Please use cuquantum/cu11x/25.06.0.10 or higher!
cuquantum/cu11x/24.11.0.21: Please use cuquantum/cu11x/25.06.0.10 or higher!
cuquantum/cu12x/24.03.0.4: Please use cuquantum/cu12x/25.11.1.11 or higher!
cuquantum/cu12x/24.08.0.5: Please use cuquantum/cu12x/25.11.1.11 or higher!
cuquantum/cu12x/24.11.0.21: Please use cuquantum/cu12x/25.11.1.11 or higher!
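
If a job script still loads one of the deprecated modules listed above, the following Lmod commands help to locate the replacement. Note that once a module is hidden it no longer shows up in a plain "module avail"; the --show_hidden option (cuda is used here only as an example) should still list it until its removal.

# Show all available versions of a package, including other compiler toolchains
module spider cuda

# After 15th of January, deprecated versions are hidden but not yet removed
module --show_hidden avail cuda

# Switch the job script to the suggested replacement, e.g.
module load cuda/12.8.1
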
If you experience problems with any module, please let us know!
Both clusters ALCC and LiCCA are back online.
We announced a maintenance window for both clusters ALCC and LiCCA to update Slurm to version 25.11. One of the main reasons is a set of improvements to GPU allocation for Slurm jobs, which is broken in the current version 25.05.
We might still have to adjust the Slurm configuration for GPU job handling in the days following the update, which may require draining the partitions and restarting the Slurm daemons again.
We will, at least temporarily, lower the TimeLimit in the GPU partitions from 3 to 2 days. This might cause some inconvenience for users running long jobs, but it is a better alternative to cancelling/killing jobs because of required restarts of the system.
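
As a reminder, the requested walltime has to fit into the partition's TimeLimit, otherwise the job will not start. Below is a minimal sketch of a GPU job header that stays within the temporarily lowered 2-day limit; the partition name "epyc-gpu", the GRES count and the resource values are placeholders and have to be adapted to your cluster and workload.

#!/usr/bin/env bash
#SBATCH --job-name=gpu-job
#SBATCH --partition=epyc-gpu      # placeholder: use the GPU partition of your cluster
#SBATCH --gres=gpu:1              # request one GPU
#SBATCH --time=2-00:00:00         # 2 days, i.e. the temporarily lowered TimeLimit
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G

module load cuda                  # loads the current default CUDA module
srun ./my_gpu_program             # placeholder executable
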
Since the last major upgrade of both clusters ALCC and LiCCA in July, we have observed problems with Slurm jobs allocating GPUs and with our Slurm accounting database. The recent Slurm update (version 25.11) should fix these problems.
Maintenance schedule:
- Friday, 28 November, 9:00: set all partitions to drain
- Monday, 1 December, 9:00: start of the Slurm update
-- GPU partitions drained
-- CPU partitions draining; running jobs continue, but job survival is not guaranteed
- Monday, 1 December: we plan to resume all partitions by 18:00
- The login nodes will not be available for users until the maintenance is finished.
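
During and after the maintenance, the state of the partitions and of your own jobs can be checked with the usual Slurm commands, for example:

# Partition availability, time limit, node count and node states
sinfo -o "%P %a %l %D %t"

# Your own jobs, including the reason why a pending job does not start yet
squeue --me -o "%i %j %P %T %r"
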
The July module updates and deprecations have been rolled out today. Please look out for deprecation warnings in your Slurm output.
After the maintenance and the upgrade to Ubuntu 24.04 there are two major changes:
- The default CUDA version is now v12.8, since this is what the NVIDIA driver natively supports.
- The intel/2023 compilers need a compatible GNU compiler. Unfortunately, the intel/2023 compilers are not compatible with gcc v13, which is the new Ubuntu default.
=> When loading intel/2023, the gcc v11.5 compilers will now be loaded as well (but without overriding the CC, CXX or FC environment variables). A short sketch of what these changes look like in practice follows below.
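
Here is a short sketch of what these two changes look like in an interactive shell. The module names are taken from the lists below; the compiler executables icx/icpx/ifx are the usual oneAPI names and are only mentioned as an example.

# The plain "cuda" module now resolves to the 12.8 line
module load cuda
nvcc --version            # should report CUDA 12.8.x

# Loading the Intel 2023 compilers also pulls in the gcc v11.5 toolchain,
# but CC, CXX and FC are left untouched
module load intel/2023.2
gcc --version             # should now report 11.5.x
echo "$CC $CXX $FC"       # still whatever you set yourself (or empty)

# If your build system relies on these variables, set them explicitly, e.g.
export CC=icx CXX=icpx FC=ifx
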
Also, please be aware of the following module changes, which have just been deployed (if "(default)" appears at the end of an entry, that version is the new default):
New/updated scientific Modules:
cp2k/2025.2-ompi5.0-cuda12.8-gcc13.2
cp2k/2025.2-ompi5.0-gcc13.2 (default)
comsol/6.3.0.335 (default)
elk/10.5.16-impi2021.10-intel2023.2 (default)
gromacs/2025.2-ompi5.0-gcc13.2-mkl2023.2-cuda12.9
lammps/20240829.4-ompi5.0-cuda12.9-gcc13.2
lammps/20240829.4-ompi5.0-gcc13.2 (default)
lammps/20250722.0-ompi5.0-cuda12.9-gcc13.2
lammps/20250722.0-ompi5.0-gcc13.2
mathematica/14.2.1 (default)
orca/6.1.0 (default)
qchem/6.3.0 (default)
qe/7.4.1-impi2021.10-intel2023.2 (default)
qe/7.4.1-ompi4.1-nvhpc24.1
siesta/5.4.0-ompi5.0-cf (default)
vasp6/6.5.1-impi2021.10-intel2023.2 (default)
vasp6/6.5.1-cuda12.3-ompi4.1-nvhpc24.1
vasp6/python3.12/6.5.1-impi2021.10-intel2023.2
New/updated common Modules:
cmake/3.31.8 (default)
cmake/4.0.3
cuda-compat/12.9.1
cuda/12.8.1 (default, in line with the CUDA level of the Nvidia driver)
cuda/12.9.1
emacs/30.1 (default)
gdrcopy/2.5
meson/1.8.3 (default)
micromamba/2.3.0 (default)
nccl/cu12.8/2.26.2
ninja/1.13.2 (default)
parallel/20250622 (default)
pmix/5.0.7 (default)
R/4.4.3-cf (default)
ucc/cu11x/1.4.4 (default)
ucc/cu12x/1.4.4 (default)
ucx/cu11x/1.19.0 (default)
ucx/cu12x/1.19.0 (default)
New/updated library Modules:
hdf5/1.14.6 (for compilers gcc/9.5, gcc/11.5, gcc/13.2, intel/2021.4, intel/2023.2, intel/2024.2, nvhpc/24.1) (default)
libxc/7.0.0 (for compilers gcc/13.2, intel/2023.2, intel/2024.2) (default)
openblas/lp64/0.3.30 (for compilers gcc/9.5, gcc/11.5, gcc/13.2) (default)
openblas/ilp64/0.3.30 (for compilers gcc/9.5, gcc/11.5, gcc/13.2)
gmp/6.3.0 (for compilers gcc/13.2) (default)
sqlite3/3.50.4 (for compilers gcc/9.5, gcc/11.5, gcc/13.2) (default)
tblite/0.4.0 (for compilers gcc/13.2) (default)
New/updated MPI Modules:
openmpi/4.1.8 (for compilers gcc/9.5, gcc/11.5, gcc/13.2, intel/2021.4, intel/2023.2, intel/2024.2) (default)
openmpi/5.0.8 (for compilers gcc/9.5, gcc/11.5, gcc/13.2, intel/2021.4, intel/2023.2, intel/2024.2) (default)
hdf5/1.14.6 (for compilers gcc/9.5, gcc/11.5, gcc/13.2, intel/2021.4, intel/2023.2, intel/2024.2, nvhpc/24.1) (default)
netcdf/c/4.9.3 (for compilers gcc/13, intel/2023.2, intel/2024.2) (default)
netcdf/fortran/4.6.2 (for compilers gcc/13, intel/2023.2, intel/2024.2) (default)
pnetcdf/1.14.0 (for compilers gcc/13, intel/2023.2, intel/2024.2) (default)
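
The library and MPI modules above are built per compiler (and partly per MPI), so the matching compiler module usually has to be loaded before the library module becomes loadable. A minimal sketch, assuming the hierarchical Lmod layout implied by the "(for compilers ...)" annotations above:

# Compiler first, then MPI, then libraries built against that combination
module load gcc/13.2
module load openmpi/5.0.8
module load hdf5/1.14.6 netcdf/c/4.9.3 netcdf/fortran/4.6.2

# Show which compiler/MPI combinations provide a given library version
module spider hdf5/1.14.6
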
If you experience problems with any module, please let us know!













