- How do I register myself to use the HPC resources?
- How do I get access to the LiCCA or ALCC resources?
- What kind of resources are available on LiCCA?
- What kind of resources are available on ALCC?
- How do I acknowledge the usage of HPC resources on LiCCA in publications?
- How do I acknowledge the usage of HPC resources on ALCC in publications?
- What Slurm Partitions (Queues) are available on LiCCA?
- What Slurm Partitions (Queues) are available on ALCC?
- What is Slurm?
- How do I use the Slurm batch system?
- How do I submit serial calculations?
- How do I run multithreaded calculations?
- How do I run parallel calculations on several nodes?
- How do I run GPU-based calculations?
- How do I check the current Slurm schedule and queue?
- Is there some kind of Remote Desktop for the cluster?
- What if I have a question that is not listed here?
- What if I want to report a problem?
- Which version of Python can be used?
- Which should I use: Anaconda, Miniconda, Miniforge, or Micromamba?
- How do I monitor live CPU/GPU/memory/disk utilization?
- How do I check my GPFS filesystem usage and quota situation?
- Popular Labels:
How do I register myself to use the HPC resources?
Please consult HPC Project Membership (HPC-Zugriff).
How do I get access to the LiCCA or ALCC resources?
Please consult HPC Project Membership (HPC-Zugriff).
What kind of resources are available on LiCCA?
Please consult Cluster overview page.
What kind of resources are available on ALCC?
Please consult Cluster overview page.
How do I acknowledge the usage of HPC resources on LiCCA in publications?
Please consult Acknowledgement.
How do I acknowledge the usage of HPC resources on ALCC in publications?
Please consult Acknowledgement - ALCC.
What Slurm Partitions (Queues) are available on LiCCA?
See Slurm Queues.
What Slurm Partitions (Queues) are available on ALCC?
See Slurm Queues.
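Beyond the linked overview pages, the partitions that are actually configured can be listed directly from a login node with sinfo. A short sketch; the partition name "epyc" below is only an illustrative placeholder, the real names are documented in Slurm Queues:

```bash
# Summarize all partitions and their state on the cluster you are logged in to
sinfo -s

# Show more detail (time limits, node counts) for a single partition,
# e.g. a hypothetical partition named "epyc"
sinfo -p epyc -l
```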
What is Slurm?
Slurm stands for Simple Linux Utility for Resource Management; it is the open-source workload manager and job scheduler used on the University of Augsburg HPC clusters.
For instructions on using the Slurm batch system at the University of Augsburg HPC facility, please consult Submitting Jobs (Slurm Batch System).
The official documentation can be found at https://slurm.schedmd.com/documentation.html.
How do I use the Slurm batch system?
Please consult the simplified user manual Slurm 101 as well as Submitting Jobs (Slurm Batch System).
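The typical command-line workflow, sketched below with placeholder names (myjob.sh, job ID 12345), is to submit a batch script with sbatch and then watch it with squeue:

```bash
# Submit a batch script; Slurm prints the job ID it assigned
sbatch myjob.sh

# Show only your own pending and running jobs
squeue -u $USER

# Cancel a job you no longer need (replace 12345 with the real job ID)
scancel 12345
```

By default, the job's standard output is written to a file named slurm-<jobid>.out in the submission directory.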
How do I submit serial calculations?
See Submitting Jobs (Slurm Batch System).
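As a hedged illustration (the resource values and the program name my_serial_program are placeholders; cluster-specific defaults and partition names are documented on the linked page), a serial calculation is simply a batch job that requests a single task on a single core:

```bash
#!/bin/bash
#SBATCH --job-name=serial-test    # name shown in the queue
#SBATCH --ntasks=1                # a serial job is one task ...
#SBATCH --cpus-per-task=1         # ... on a single CPU core
#SBATCH --mem=2G                  # memory for the job (adjust as needed)
#SBATCH --time=01:00:00           # walltime limit (HH:MM:SS)

./my_serial_program               # placeholder for your executable
```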
How do I run multithreaded calculations?
See Submitting Multithreaded Jobs.
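A minimal sketch of a multithreaded (e.g. OpenMP) job; the resource values and the program name are placeholders, and the cluster-specific recommendations are on the linked page:

```bash
#!/bin/bash
#SBATCH --job-name=omp-test
#SBATCH --ntasks=1                # one process ...
#SBATCH --cpus-per-task=16        # ... with 16 threads (adjust to your code)
#SBATCH --mem=8G
#SBATCH --time=01:00:00

# Tell OpenMP to use exactly the cores Slurm allocated to this task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

./my_openmp_program               # placeholder for your executable
```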
How do I run parallel calculations on several nodes?
See Submitting Parallel Jobs (MPI/OpenMP).
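A minimal sketch of an MPI job spanning several nodes; the node and task counts, the MPI module name, and the program name are placeholders, so take the real values from the linked page and from `module avail` on the cluster:

```bash
#!/bin/bash
#SBATCH --job-name=mpi-test
#SBATCH --nodes=2                 # number of nodes
#SBATCH --ntasks-per-node=32      # MPI ranks per node (adjust to the hardware)
#SBATCH --cpus-per-task=1
#SBATCH --time=02:00:00

# Load an MPI module first; the name below is only a placeholder
# module load openmpi

# srun starts the MPI ranks across all allocated nodes
srun ./my_mpi_program             # placeholder for your executable
```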
How do I run GPU-based calculations?
See Submitting GPU Jobs.
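A minimal sketch of a GPU job; the partition name epyc-gpu is only an assumed example here, so take the actual GPU partition names and GPU counts from Slurm Queues and the linked page:

```bash
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=epyc-gpu      # placeholder: use a GPU partition from Slurm Queues
#SBATCH --gres=gpu:1              # request one GPU
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16G
#SBATCH --time=04:00:00

# nvidia-smi shows which GPU(s) were assigned to the job
nvidia-smi

./my_gpu_program                  # placeholder for your executable
```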
How do I check the current Slurm schedule and queue?
See Slurm 101.
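Common read-only Slurm commands for inspecting the schedule and queue (12345 is a placeholder job ID):

```bash
# Whole queue of the cluster
squeue

# Only your own jobs, in long format with reason codes for pending jobs
squeue -u $USER -l

# Partition and node status at a glance
sinfo

# Detailed information about a single job
scontrol show job 12345
```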
Is there some kind of Remote Desktop for the cluster?
Please consult Connect to the Cluster.
What if I have a question that is not listed here?
Please consult Service desk.
What if I want to report a problem?
Please consult Service desk.
Which version of Python can be used?
Please consult Python and conda package management.
Which should I use: Anaconda, Miniconda, Miniforge, or Micromamba?
Please consult Python and conda package management.
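Whichever variant the linked page recommends, the day-to-day environment handling looks much the same; a hedged sketch with a placeholder environment name, assuming a conda-compatible tool is available (for example via a module):

```bash
# Create an isolated environment with a specific Python version
conda create -n myproject python=3.11

# Activate it and install packages into it
conda activate myproject
conda install numpy scipy

# With micromamba the commands are largely the same:
# micromamba create -n myproject python=3.11
```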
How do I monitor live CPU/GPU/memory/disk utilization?
Please consult Live resource utilization monitoring.
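In addition to the monitoring described on the linked page, Slurm itself can report what a job is consuming; a sketch with a placeholder job ID:

```bash
# Resource statistics of a running job, aggregated by Slurm
sstat -j 12345 --format=JobID,AveCPU,AveRSS,MaxRSS

# After the job has finished, sacct reports what it actually used
sacct -j 12345 --format=JobID,Elapsed,MaxRSS,State
```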
How do I check my GPFS filesystem usage and quota situation?
Please consult "Quota regulations and management" under Parallel File System (GPFS).
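The linked page is authoritative for quota handling on LiCCA and ALCC. For reference only: GPFS (IBM Spectrum Scale) ships a native quota query command, mmlsquota; whether it is exposed to users here, and the actual filesystem name, are assumptions in this sketch:

```bash
# Show your user quota on a GPFS filesystem in human-readable units
# ("gpfs1" is only a placeholder for the actual filesystem/device name)
mmlsquota --block-size auto gpfs1
```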
Popular Labels:
- access
- alcc
- alcc-epyc
- alcc-xeon
- amd
- answers
- aocl
- benutzergruppen
- cluster
- collected-data
- compiler
- compilers
- connect
- cores
- cpu
- cuda
- data
- datenmanagement
- discussions
- disk-space
- endbenutzer
- environment
- epyc
- epyc-gpu
- epyc-gpu-test
- ethernet
- exportkontroll-verordnungen
- faq
- file-system
- gpfs
- gpu
- guidelines
- homedirectory
- hpc
- hpc-chat
- hpc-project
- hpc-project-membership
- hpc-projekt
- hpc-zugriff
- interactive
- interconnect
- issues
- jobs
- key-fingerprints
- licca
- licca-epyc
- licca-epyc-gpu
- licca-epyc-gpu-sxm
- licca-epyc-gpu-test
- licca-epyc-mem
- lmod
- login-node
- ltmp
- mailinglists
- management
- mellanox
- memory
- modules
- mpi
- multithreaded
- news
- nodes
- numa
- nvidia
- nvidia-nvlink
- openmp
- parallel
- performance
- processor
- project
- questions
- queues
- ram
- ram-disk
- ramdisk
- register
- resources
- sbatch
- scientific-software
- scratch
- service-desk
- slurm
- software
- solutions
- squeue
- start
- status
- storage
- submitting
- threads
- tmp
- tmpfs
- tools
- trainings
- troubleshooting
- unrestored-unknown-attachment
- user
- users
- xeon