Linux Cluster
Compact information in the CoolMUC cheat sheet
The LRZ CoolMUC cluster serves as a general-purpose high-performance computing system and offers a wide range of HPC capabilities to users from Munich and other Bavarian universities.
Subpages
Access and Login to the Linux-Cluster
This document provides information on how to get first-time access to the Linux Cluster, how to log in to the cluster, and addresses topics concerning the login procedure (SSH, passwords, two-factor authentication).
- Acknowledgment of Cluster and DSS use — Find here the formulation for acknowledging the resources and the support provided by LRZ.
- Compact Guide to first-time Linux Cluster Access Process — Read here to learn how to get an account for the Linux Cluster system at LRZ.
- CoolMUC Web Portal — The CoolMUC Web Portal (https://ood.hpc.lrz.de) is a web service based on Open OnDemand (https://openondemand.org/) that provides access to various services through a graphical user interface (GUI).
- Linux Cluster General Security Policies
- Two-Factor Authentication on the Linux Cluster — This document explains how to configure 2FA for Linux Cluster access and provides important advice.
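As a sketch of the login step covered by the pages above, an SSH client configuration entry might look like the following. This is an illustrative fragment only: the host alias, login node name, and user ID below are assumptions, not values from this page; the actual login host and your account name are provided during the access process.

```
# ~/.ssh/config — illustrative sketch
Host coolmuc
    HostName cool.hpc.lrz.de   # assumed login node name; check the access documentation
    User xy12abc               # placeholder LRZ user ID
```

With such an entry in place, `ssh coolmuc` opens a session on the login node, where two-factor authentication applies as described in the 2FA document.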
File Systems and IO on Linux-Cluster
This document gives an overview of background storage systems available on the LRZ Linux Cluster. Usage, special tools and policies are discussed.
Job Processing on the Linux-Cluster
- Guidelines for Resource Selection — This page serves as a general guideline for setting up compute jobs. It covers topics such as the selection of compute resources, job run time, and memory and I/O requirements.
- Policies on the Linux Cluster — We kindly ask users to consider our policies on the login nodes and the job processing rules described here.
- Running interactive jobs on the Linux Cluster — This page describes how to set up and start jobs on the interactive partitions cm4_inter and teramem_inter, providing essential commands and basic examples.
- Running large-memory jobs on the Linux Cluster — This document describes how to use the Teramem system to process large-memory jobs, providing a step-by-step recipe and a typical use-case example.
- Running parallel jobs on the Linux Cluster — This document briefly describes how to set up and start parallel batch jobs on the parallel partitions cm4_std and cm4_tiny. A step-by-step recipe, essential Slurm commands, and full examples are provided.
- Running serial jobs on the Linux Cluster — This document briefly describes how to set up and start serial (using 1 CPU core) or very small parallel batch jobs on the serial cluster segment. A step-by-step recipe, essential Slurm commands, and full examples are provided.
- Slurm Command Examples on the Linux Cluster — This document lists common Slurm commands for job submission, job manipulation, and obtaining job and cluster information on CoolMUC-4.
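The batch-job pages listed above can be summarized with a minimal Slurm job script. This is a sketch under assumptions: the partition name cm4_tiny is taken from the page titles on this page, while the resource values, time limit, and program name are illustrative placeholders; consult the linked pages for the actual limits and any site-specific setup steps required in job scripts.

```
#!/bin/bash
#SBATCH --job-name=example       # job name shown in queue listings
#SBATCH --partition=cm4_tiny     # partition named above; verify it fits your job size
#SBATCH --nodes=1                # assumed small-job shape
#SBATCH --ntasks-per-node=4      # illustrative task count
#SBATCH --time=00:30:00          # requested wall time (hh:mm:ss)

srun ./my_program                # placeholder executable
```

A script of this form would be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`; the Slurm Command Examples page above covers these commands in detail.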
Linux Cluster Segments
System overview.
Linux Cluster Status
This document provides status information on the Linux Cluster, e.g. current node usage or workload on the partitions of each cluster segment.
Useful links
- HPC Software and Programming Support
- Servicedesk for the Linux-Cluster
- User Guides for HPC
- FAQ and Troubleshooting
- Common Topics for all HPC Systems