HPC Resources Available at the Computing Center (RZ) of the University of Augsburg



HPC Ecosystem

Usage Policies

Access HPC Resources

LiCCA

ALCC

FAQ & Troubleshooting

Mailinglists



Docker containers cannot be run natively in HPC environments, but they can easily be converted and executed using Apptainer (Documentation).

Currently, an outdated Apptainer installation is available without loading any module.

This installation will be removed on November 30!

Please change your workflows to use the more recent apptainer modules!
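A sketch of such a workflow on a login or compute node (the module version is taken from the list below; the image name is only an example, so check `ml avail apptainer` for the versions actually installed):

```shell
# Load a recent Apptainer module instead of relying on the outdated system installation
ml load apptainer/1.3.5

# Convert a Docker image from a registry into a local SIF file, then run a command in it
apptainer pull ubuntu_24.04.sif docker://ubuntu:24.04
apptainer exec ubuntu_24.04.sif cat /etc/os-release
```

These commands require the cluster module environment and Apptainer, so they will not run on an ordinary workstation unchanged.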

The November module updates and deprecations have been rolled out today. Please look out for deprecation warnings in your Slurm output.

Most notable new modules:

Common

aocc/5.0.0
aocl/ilp64/5.0.0
aocl/lp64/5.0.0
anaconda/2024.10
apptainer/1.3.5
cudnn/cu11x/9.5.1.17
cudnn/cu12x/9.5.1.17
micromamba/2.0.3
openjdk/8.u432-b06
openjdk/11.0.25+9
openjdk/17.0.13+11
openjdk/21.0.5+11

Scientific

gromacs/2024.4-ompi5.0-gcc13.2-mkl2023.2-cuda12.6
orca/6.0.1
siesta/5.2.0-ompi4.1-cf
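To switch a workflow to one of the new versions (module names are taken from the list above; `ml` is the Lmod shorthand used on the clusters):

```shell
# Inspect available versions and any deprecation notices
ml avail openjdk

# Load a specific new version explicitly rather than an unversioned default
ml load openjdk/21.0.5+11
java -version
```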

Software/Module Updates

The September module updates and deprecations were rolled out yesterday. Please look out for deprecation warnings in your Slurm output.

Most notable new modules:

Quota notification

Since Friday, August 30, a quota notification system has been active. Users will receive an e-mail message when their quota is exceeded. More information can be found in our Knowledge Base: Quota regulations

GPU test partition on LiCCA

On LiCCA, a separate partition epyc-gpu-test has been created, and node licca047 with its 3 A100 GPUs has been moved to this partition. The time limit in this partition is 6 hours, giving users the possibility to test with short job runs while the larger epyc-gpu partition is occupied with longer-running jobs. If this partition sees little use, we will move the GPUs (partially) back. All projects and users with GPU resources are automatically granted access to this partition.
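A minimal test job for this partition might look like the following sketch (the partition name and 6-hour limit are from the announcement; the resource sizes and program name are placeholders to adapt):

```shell
#!/bin/bash
#SBATCH --job-name=gpu-smoke-test
#SBATCH --partition=epyc-gpu-test   # test partition on node licca047
#SBATCH --gres=gpu:1                # up to 3 A100 GPUs are available on the node
#SBATCH --time=00:30:00             # must stay below the 6-hour partition limit
#SBATCH --mem=16G

# Quick sanity check that a GPU is visible before launching longer runs
nvidia-smi

srun ./my_gpu_program               # placeholder for your own executable
```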

COMSOL 6.2

COMSOL 6.2 has been installed in the HPC cluster filesystem.

login node shell> ml load comsol/6.2
Loading Comsol 6.2

6.1 will remain the default for a few days; please give feedback if the 6.2 installation still has problems. 6.2 will become the default in one or two weeks.
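COMSOL models are typically run non-interactively on the compute nodes. A sketch of a batch job (the file names are placeholders, and the exact COMSOL batch options may differ between versions):

```shell
#!/bin/bash
#SBATCH --job-name=comsol-run
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --time=12:00:00
#SBATCH --mem=64G

ml load comsol/6.2

# Run a model in batch mode; -np matches the CPUs allocated by Slurm
comsol batch -np ${SLURM_CPUS_PER_TASK} \
       -inputfile model.mph -outputfile model_solved.mph \
       -batchlog model.log
```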


Downtime on May 14

The HPC clusters LiCCA and ALCC are being used with increasing intensity, and the load on the power distribution is rising. Stronger power cables have therefore been laid for the power infrastructure supporting the HPC clusters.

A shutdown of all compute nodes is scheduled for:

   Tuesday, May 14, 7:00

We will start draining the queues on Saturday, May 11. To the best of our knowledge, the work on the electrical system will be finished the same day, so the clusters should be back on Wednesday, May 15.

Slurm database migration


We are migrating the Slurm database instance (serving both the ALCC and LiCCA clusters) to a different system, starting today.

Slurm operation is planned to stay up during this time.

The migration should speed up Slurm operations afterwards, and is also needed in preparation for the ALCC upgrade from Ubuntu 20.04 to 22.04, which will happen in the coming weeks.

Welcome to LiCCA


We are proud to announce the availability of LiCCA, a compute resource focused on research and open to members of the University of Augsburg.

Access to LiCCA is possible after registering your chair or working group for an HPC project. The complete application workflow is described in the HPC Knowledge Base, as are the cluster hardware and setup.

Questions and problems that are not solved by the HPC Knowledge Base can be addressed to the Service Desk via the Service- & Supportportal or by e-mail.


Happy computing, the RZ HPC team



Space Index

Total number of pages: 133

0-9 ... 0   A ... 13   B ... 2    C ... 12   D ... 5   E ... 9
F ... 5     G ... 4    H ... 11   I ... 4    J ... 1   K ... 1
L ... 8     M ... 3    N ... 5    O ... 2    P ... 6   Q ... 4
R ... 4     S ... 25   T ... 2    U ... 1    V ... 2   W ... 3
X ... 1     Y ... 0    Z ... 0    !@#$ ... 0

0-9

A

Page: Access
Page: Access - ALCC
Page: Access HPC Resources
Page: Accessing Webinterfaces (e.g. Jupyterlab, Ray) via SSH Tunnels
Page: Acknowledgement
Page: Acknowledgement - ALCC
Page: ALCC Mailinglists
Page: AMD AOCC incl. AOCL
Page: AOCC Clang/Clang++
Page: AOCC Flang
Page: AOCL Tuning Guidelines
Page: AOCL-Sparse
Page: Augsburg Linux Compute Cluster

B

Page: Benutzergruppen-Vereinbarung
Page: BLIS

C

Page: Clang and Flang Options
Page: Collected Data
Page: Collected Data - ALCC
Page: Collected Information
Page: Compilers and Libraries
Page: COMSOL
Page: Connect to ALCC
Page: Connect to Cluster
Page: Container with Apptainer
Page: Controlling the environment of a Job
Page: CP2K
Page: CRYSTAL

D

Page: Data Transfer
Page: Data Transfer - ALCC
Page: Datenschutzerklärung
Page: DFTB+
Page: Different MPI flavors

E

Page: ELK
Page: Endbenutzer-Vereinbarung
Page: Environment Modules (Lmod)
Page: Environment Modules - ALCC
Page: Erklärung zur Einhaltung der deutschen Exportkontroll-Verordnungen (Endnutzer/-in)
Page: Erklärung zur Einhaltung der deutschen Exportkontroll-Verordnungen (Kontaktperson)
Page: Erklärung zur Einhaltung der deutschen Exportkontroll-Verordnungen (Leitung von HPC-Projekten)
Page: Exclusive jobs for benchmarking
Page: Exportkontroll-Verordnungen

F

Page: FAQ and Troubleshooting
Page: FAQ and Troubleshooting - ALCC
Page: FAQ and Troubleshooting - LiCCA
Page: FFTW
Page: File Systems

G

Page: GAUSSIAN
Page: General Note on Parallel Jobs (MPI/OpenMP)
Page: GNU Compiler Collection
Page: GROMACS

H

Page: Handling Jobs running into TIMEOUT
Page: High Performance Math Libraries
Page: History of ALCC
Page: How to write good (HPC) Service Requests
Page: HPC Project
Page: HPC Project Membership (HPC-Zugriff)
Page: HPC Software and Libraries
Page: HPC Software and Libraries - ALCC
Page: HPC Tuning Guide
Page: HPC Tuning Guide - ALCC
Page: HPC-Chat

I

Page: Intel Compilers via OneAPI incl. MKL
Page: Interactive (Debug) Runs (not Slurm)
Page: Interactive (Debug) Runs (not Slurm) - ALCC
Page: Interpreters

J

Page: Julia and Julia package management

K

Home page: Knowledge Base für wissenschaftliches Rechnen (HPC) Startseite

L

Page: LAMMPS
Page: libFLAME
Page: LibM
Page: LiCCA Mailinglists
Page: Linux Compute Cluster Augsburg
Page: Linux Perf
Page: Live resource utilization monitoring
Page: Local Node Filesystem

M

Page: Mailinglists
Page: Misc Tools
Page: MPI Libraries

N

Page: Nodes
Page: Nodes - ALCC
Page: Nvidia CUDA Toolkit
Page: Nvidia HPC-SDK
Page: NWChem

O

Page: ORCA
Page: Origin of the name

P

Page: Parallel File System
Page: Parallel File System (GPFS)
Page: Performance
Page: PLUMED
Page: Profiling Tool
Page: Python and conda package management

Q

Page: Q-Chem
Page: Quantum ESPRESSO
Page: Queues
Page: Queues - ALCC

R

Page: R and CRAN package management
Page: RAM disk (tmpfs)
Page: Resources
Page: Resources - ALCC

S

Page: Scientific Software Packages
Page: Service and Support
Page: Service and Support - ALCC
Page: Service Desk
Page: SIMPSON
Page: Slurm
Page: Slurm - ALCC
Page: Slurm 101
Page: Slurm Queues
Page: Slurm Queues - ALCC
Page: slurm-helper convenience scripts
Page: Start
Page: Start (ALCC)
Page: Status
Page: Status - ALCC
Page: Submitting Array Jobs and Chain Jobs
Page: Submitting GPU Jobs
Page: Submitting Hybrid (Multinode, Multithreaded) Jobs
Page: Submitting Interactive Jobs
Page: Submitting Jobs (Slurm Batch System)
Page: Submitting Jobs - ALCC
Page: Submitting Multithreaded Jobs
Page: Submitting Parallel Jobs (MPI/OpenMP)
Page: Submitting Pure MPI Jobs
Page: Submitting Serial Jobs

T

Page: Trainings
Page: TURBOMOLE

U

Page: Usage Policies / Nutzungsregelungen

V

Page: VASP
Page: Vereinbarung zum Datenmanagement

W

Page: What is the HPC hardware ecosystem at University of Augsburg
Page: Workflow for HPC Project Application
Page: Workflow for HPC Project Membership Application

X

Page: XTB

Y

Z

!@#$

