How to connect to the cluster

For security reasons, direct login to the HPC systems is allowed only from within the University of Augsburg network. If you want to connect from the “outside” (e.g. from home), you will have to establish a VPN connection to the University's network or use one of the gateway systems first. See VPN Service for details.

Please note:

Working from outside the University of Augsburg may feel slower than working from your office, depending on the quality (bandwidth, latency) of your internet connection. If graphical applications feel sluggish and respond slowly, this is usually caused by the connection and not by a technical problem with the cluster system. With a good internet connection you should be able to work quite comfortably from home, except perhaps from countries that are far away from Germany.

The following addresses should be used to connect to the cluster system:

Important!

The login machines are not meant to be used for large computations such as simulation runs. To keep these nodes accessible, resource limits are enforced on the login nodes, and non-Slurm processes started there are automatically terminated if their resource usage (including CPU time, memory and run time) exceeds those limits. Please use interactive jobs for tasks like pre- or post-processing and even larger compilations.
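For example, an interactive shell on a compute node can usually be requested from Slurm like this (the time, CPU and memory values are only placeholders, adapt them to your needs and project):
srun --time=01:00:00 --ntasks=1 --cpus-per-task=4 --mem=8G --pty bash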

Note on Remote Desktop

There is no such thing as a Remote Desktop for the cluster. The cluster is not a workstation, and the use of the Slurm job scheduler is mandatory for any serious calculation (see above). The cluster can be accessed via SSH exclusively; running GUI applications is only supported using ssh -X or by exposing web interfaces via SSH tunneling.
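For example, a web interface listening on port 8888 on the login node could be made reachable on your local machine through an SSH tunnel (the port number is only a placeholder and depends on the application you run):
ssh -L 8888:localhost:8888 username@alcc129.rz.uni-augsburg.de
Afterwards the interface is available in your local browser at http://localhost:8888.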

From the University of Augsburg campus network

Use ssh to connect to one of the ALCC login nodes, e.g.:
ssh username@alcc129.rz.uni-augsburg.de

Replace username with your university user name (RZ-account).
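
If you connect frequently, an entry in your local ~/.ssh/config saves typing (the alias alcc is just an example name):
Host alcc
    HostName alcc129.rz.uni-augsburg.de
    User username
With this entry in place, ssh alcc is enough to connect.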

Host key fingerprints:
256 SHA256:if0w0OjnsdEiBK7d7ok8xPfVAEM2Mx2HXgqtOC4+p4I root@alcc129.physik.uni-augsburg.de (ED25519)
256 SHA256:0VyfIRuxCeHICd+pDKUvWSJ/nXRzAtOxPP11Mgzce1g root@alcc129.physik.uni-augsburg.de (ECDSA)
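
On the very first connection your SSH client shows the fingerprint of the server's host key and asks for confirmation; compare it against the fingerprints listed above before answering yes. The exact wording depends on your client version, but it looks roughly like this:
The authenticity of host 'alcc129.rz.uni-augsburg.de' can't be established.
ED25519 key fingerprint is SHA256:if0w0OjnsdEiBK7d7ok8xPfVAEM2Mx2HXgqtOC4+p4I.
Are you sure you want to continue connecting (yes/no/[fingerprint])?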

Please note:

Tools like PuTTY have their own settings dialogue and execute the ssh command automatically when connecting, so PuTTY only needs to know the hostname you want to connect to. It will first ask for your username, unless you entered username@ in front of the host name, which is also fine. It is sufficient to enter "alcc129.rz.uni-augsburg.de" into the "Host name" field; make sure NOT to add "ssh" in front of it.

If you want to use graphical programs on the cluster system, add the option -X, which enables X11 forwarding (see below).

Use ssh to connect to one of the ALCC login nodes, e.g.:
ssh -X username@alcc129.rz.uni-augsburg.de

Again replace username with your university user name.
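
To check whether X11 forwarding is active, you can look at the DISPLAY variable after logging in and start a small test program (assuming a simple X11 tool such as xclock is installed on the login node):
echo $DISPLAY
xclock
If DISPLAY is empty or no window appears on your local screen, the forwarding is not set up correctly on the client side.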

Please note:

Again, tools like PuTTY have their own setup. To get X11 forwarding with these tools, use the corresponding option. In the PuTTY configuration settings, this would be Connection::SSH::X11::Enable X11 forwarding.

Outside of the University of Augsburg campus network (external access)

Using VPN (see VPN Service for how to set it up)

Or alternatively use one of the gateway servers

First log in to the Uni-Augsburg login node:
ssh username@xlogin.uni-augsburg.de

Replace username with your university user name (RZ-account).

Then use ssh to connect to one of the ALCC login nodes, e.g.:
ssh alcc129.rz.uni-augsburg.de
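
With a reasonably recent OpenSSH client, both hops can also be combined into a single command using the gateway as jump host (an optional shortcut, not a requirement):
ssh -J username@xlogin.uni-augsburg.de username@alcc129.rz.uni-augsburg.de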

After login

After logging in you are located in your home directory on the ALCC cluster.

Execute pwd to see the full path.
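
For example:
pwd
echo $HOME
Both print the absolute path of your home directory (pwd only as long as you have not changed into another directory).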

On the login nodes one can also access the home directories in the campus file system (CFS), for example for user mustermann:

ls /cfs/home/m/u/mustermann/

CFS access is only possible when logging in with a password or from a CFS-enabled Linux machine (Kerberos ticket forwarding via GSSAPI). When logging in via SSH keys, the CFS is usually not available, unless you additionally log in once via password, which generates a Kerberos ticket on the cluster that remains valid for up to 7 days until it expires (check with klist).
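If you connect from a CFS-enabled Linux machine, Kerberos ticket forwarding can be requested on the client side, for example with an entry like this in your local ~/.ssh/config (the option names are standard OpenSSH; whether forwarding actually works depends on your local Kerberos setup):
Host alcc129.rz.uni-augsburg.de
    GSSAPIAuthentication yes
    GSSAPIDelegateCredentials yes
After logging in, klist on the cluster shows whether a valid ticket is present and when it expires.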

Acceptable uses of the login nodes include:

  • lightweight file transfers (see the example after this list),
  • script and configuration file editing,
  • job submission and monitoring.
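
For lightweight file transfers, standard tools such as scp or rsync work through the login node, for example (the file and directory names are placeholders):
scp results.txt username@alcc129.rz.uni-augsburg.de:~/
rsync -av myproject/ username@alcc129.rz.uni-augsburg.de:~/myproject/
Keep in mind that only lightweight transfers belong on the login nodes (see the resource limits above).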