Access and Login to the Linux Cluster
How to get an Account
Please see the article Compact Guide to first-time Linux Cluster Access Process.
How to apply for a Linux Cluster project
- Application for a new LRZ project with Linux Cluster service
- Activation of Linux Cluster access for an existing LRZ project
Login and Security
Only the login nodes can be accessed interactively from the outside world. Two mechanisms are provided for logging in to the system; both incorporate security features that prevent third parties from appropriating sensitive information.
Access via Secure Shell to Login Nodes
Details on how to configure ssh for usage with the LRZ clusters are available in the document ssh - Secure Shell on LRZ HPC Systems.
From the UNIX command line on your own workstation, the login to an LRZ account xxyyyzz is performed via one of the commands given in the following table.
| Command | Login node |
| ssh -Y lxlogin1.lrz.de -l xxyyyzz | Haswell (CoolMUC-2) login node |
| ssh -Y lxlogin2.lrz.de -l xxyyyzz | Haswell (CoolMUC-2) login node |
| ssh -Y lxlogin3.lrz.de -l xxyyyzz | Haswell (CoolMUC-2) login node |
| ssh -Y lxlogin4.lrz.de -l xxyyyzz | Haswell (CoolMUC-2) login node |
| ssh -Y lxlogin5.lrz.de -l xxyyyzz | Ice Lake (CoolMUC-4) login node |
| ssh -Y lxlogin8.lrz.de -l xxyyyzz | KNL Segment (CoolMUC-3) login node |
Notes:
- The -Y option of ssh enables trusted X11 forwarding; it may be omitted if no X11 clients are required, or if you have already configured X11 tunnelling in your ssh client otherwise (see the configuration sketch after these notes).
- The HOME directory on the Linux Cluster is an NFS-mounted volume which is uniformly mounted on all cluster nodes.
- The login node lxlogin8.lrz.de for the KNL cluster is not itself a KNL system; you can develop and compile your software there, but a program optimized for KNL may not run on the login node itself and must instead be executed in an interactive or scripted SLURM job.
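If you log in frequently, the command line can be shortened with an entry in the ssh client configuration. A minimal sketch of ~/.ssh/config, where the alias cm2 is freely chosen and xxyyyzz stands for your own account name:

# freely chosen alias for the CoolMUC-2 login node
Host cm2
    HostName lxlogin1.lrz.de
    # replace with your own account name
    User xxyyyzz
    # the two options together correspond to ssh -Y
    ForwardX11 yes
    ForwardX11Trusted yes

With this entry in place, ssh cm2 is equivalent to the first command in the table above.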
Two-Factor Authentication
Please refer to the article Two-Factor Authentication on the Linux Cluster.
Secure Shell Public Keys
The Secure Shell ECDSA public keys for the interactive nodes are supplied here:
# Hosts lxlogin1,2,3,4 (CoolMUC-2)
lxlogin1.lrz.de ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA+NMRJcKKJ0tlj8BnAvPg7f5ThcPhLNEfjbVJm+tjR6RXwtSHOl2lIeJxU4bmoMEyki1QfCuzxVtzMzYGb5rH0=
lxlogin2.lrz.de ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA+NMRJcKKJ0tlj8BnAvPg7f5ThcPhLNEfjbVJm+tjR6RXwtSHOl2lIeJxU4bmoMEyki1QfCuzxVtzMzYGb5rH0=
lxlogin3.lrz.de ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA+NMRJcKKJ0tlj8BnAvPg7f5ThcPhLNEfjbVJm+tjR6RXwtSHOl2lIeJxU4bmoMEyki1QfCuzxVtzMzYGb5rH0=
lxlogin4.lrz.de ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA+NMRJcKKJ0tlj8BnAvPg7f5ThcPhLNEfjbVJm+tjR6RXwtSHOl2lIeJxU4bmoMEyki1QfCuzxVtzMzYGb5rH0=
# Host lxlogin8 (CoolMUC-3)
lxlogin8.lrz.de ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG/fJkhqIgA/KmxO3oZNcB7+8+/o2rFkIVtsUYhZdfhpiy7ceXzlqDR0EAo8ahuloL6MiPGBWeRxKhl9cBiAzJ4=
Please add these to ~/.ssh/known_hosts on your own workstation before logging in for the first time.
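One way to do this, assuming you have saved the lines above to a file lrz_hostkeys.txt (the file name is illustrative):

cat lrz_hostkeys.txt >> ~/.ssh/known_hosts    # append the published host keys
ssh-keygen -lf ~/.ssh/known_hosts             # list the key fingerprints for a visual check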
Usage Policy on Login Nodes
The login nodes are meant for management tasks, such as:
- preparing your jobs,
- developing your programs, e.g. compiling code,
- serving as a gateway for copying data from your own computer to the cluster and back again.
Since this resource is shared among many users, LRZ demands that you do not start any
- long-running,
- memory-hogging or
- parallelized programs
on these nodes! Production runs should use batch jobs (serial or parallel) that are submitted to the SLURM scheduler. Our SLURM configuration also supports semi-interactive testing.
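As an illustration, a minimal serial batch script might look as follows. This is a sketch only: the partition name serial_std, the script name myjob.slurm, and the executable my_program are placeholders, so please consult the LRZ SLURM documentation for the clusters and partitions valid for your account.

#!/bin/bash
# minimal serial job sketch; all option values are examples only
#SBATCH -J myjob
#SBATCH -o %x.%j.out
#SBATCH -D ./
#SBATCH --partition=serial_std
#SBATCH --time=00:30:00
./my_program

Here -J sets the job name, -o the output file (%x expands to the job name, %j to the job id), -D the working directory, and --time the wall-clock limit. Submit the script with sbatch myjob.slurm; for semi-interactive testing, resources can be requested with salloc and commands started under srun.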
Violation of the usage restrictions on the login nodes may lead to your account being blocked from further access to the cluster, apart from your processes being forcibly removed by LRZ administrative staff!
Changing Password and Shell
Please always use the web interface on the LRZ server to change your login password or your login shell for the cluster systems. Cluster-local commands cannot be used for this purpose.
Please note the LRZ policy for the selection and use of passwords:
- Complete German text of the authentication regulations (PDF)
- Complete English text of the authentication regulations (PDF)
Changing the password is necessary after it has been newly issued or reset to a starting value by a master user or LRZ staff; this ensures that authentication is performed with a password known only to the account owner.
Support via Service Desk
Questions concerning the usage of the Linux Cluster should always be directed to the LRZ Service Desk. A member of the LRZ HPC support team will then attend to your needs.
Documentation for Application Software and Packages
Please start from the HPC Software and Programming Support entries on the LRZ web server.
LRZ-specific configuration and policies on the clusters
Moving data from/to the cluster
The preferred method to move data to and from the LRZ Linux Cluster is the Globus Research Data Management Portal. Details on the usage of Globus can be found here. Alternatively, you can also use scp (Secure Copy) or GridFTP.
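As a sketch, a single file can be copied with scp via one of the login nodes; the account name and paths are illustrative:

# from your own workstation to your cluster HOME directory
scp input.dat xxyyyzz@lxlogin1.lrz.de:~/
# and back again
scp xxyyyzz@lxlogin1.lrz.de:~/results.tar.gz .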
FTP access to the cluster from outside (and also within the clusters) is disabled for security reasons.
User accounts are personalized
User accounts are always assigned to a particular person. For a number of reasons, sharing of user accounts between different persons is not permitted; if noticed, it will lead to the account being deactivated by LRZ. All involved parties (including the Master User of the account's project) will be notified with information on the measures needed to rectify the situation.
Firewall, networking
The cluster is protected from certain types of external attacks by a firewall, the configuration of which may impact the functionality of certain applications as described in the following.
X11 Protocol
Direct X11 connections (via xhost or xauth) are prohibited; only ssh tunnelling is supported.
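In practice, X clients are simply started in a session that was opened with X11 forwarding enabled; for example:

ssh -Y lxlogin1.lrz.de -l xxyyyzz    # DISPLAY is set up through the ssh tunnel
xterm &                              # the window opens on your local screen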
Routing
By default, none of the batch nodes in the cluster are routed to the outside world. Please contact the LRZ Service Desk if you require routing between the batch nodes and a particular external system.
Electronic mail
We recommend against using the Linux Cluster for mail purposes, apart from occasionally having the batch scheduler send notification mails to you.
Environment
Environment settings are controlled via the LRZ module system. Such settings are needed to access specific application program packages, or to properly establish a development environment.
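The basic module commands are standard across such systems; the package name gcc below is only an example of what may be installed:

module avail          # list the software packages provided on the cluster
module load gcc       # add a package to your environment
module list           # show currently loaded modules
module unload gcc     # remove a package from your environment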
Using the cron or at commands
This is not allowed on the LRZ cluster. Please submit SLURM batch jobs for performing computations.
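If work merely needs to start at a later point in time, the standard SLURM option --begin of sbatch offers a scheduler-friendly alternative to cron; the script name below is illustrative.

sbatch myjob.slurm                       # run as soon as resources are available
sbatch --begin=now+2hours myjob.slurm    # defer the earliest start by two hours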
General Linux System Documentation
As is typical for Linux systems, there are (at least) two formats for the system documentation:
- man pages
- info pages
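Both are read from the command line, for example:

man sbatch        # manual page for a single command
man -k compiler   # keyword search across all man pages
info coreutils    # browse an info manual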