GROMACS

Description of the LRZ-specific usage of GROMACS on the Linux Cluster and SuperMUC-NG HPC systems.

Introductory Remarks

What is GROMACS?

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (which usually dominate simulations), many groups also use it for research on non-biological systems, e.g. polymers.

GROMACS is free software, distributed under the GNU Lesser General Public License (LGPL).

Please consult the GROMACS web site for further information on this package.

The xdrfile library facility for I/O to xtc, edr and trr files is also available.

Authors

GROMACS was first developed in Herman Berendsen's group, Department of Biophysical Chemistry at the University of Groningen. It is a team effort, with contributions from many current and former developers all over the world.

Available Versions at LRZ

Use module avail gromacs to list the GROMACS versions installed at LRZ, including the default version.

Please consult the example batch scripts below for how to use the MPI-parallel versions. The single-precision builds typically show larger numerical instabilities than the double-precision builds. Note that, regardless of the version loaded, the GROMACS executables have the same names.

Please note:

Starting with version 5.0, all GROMACS tools are collected in the 'gmx' utility (see http://manual.gromacs.org/programs/byname.html).
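For example, tools that were previously separate executables are now invoked as subcommands of gmx (the file names below are placeholders):

gmx help commands                                                      # list all gmx subcommands
gmx grompp -f full.mdp -c after_pr.gro -p speptide.top -o full.tpr     # preprocessor (formerly grompp)
gmx mdrun -deffnm full                                                 # MD engine (formerly mdrun)
gmx trjconv -f full.xtc -s full.tpr -o out.xtc                         # trajectory tool (formerly trjconv)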

Usage

(This documentation applies to the spack-provided software stacks spack/release/19.2 and later; it is not applicable to spack/release/19.1 and earlier.)

Access to the binaries, libraries, and data files is provided through the gromacs module. This module sets up environment variables which point to these locations and updates the required paths.

  • The simplest start is to

    > module load gromacs

    which will give you the default version. On the login nodes, it points to a serial version for running the utilities grompp, trjconv, etc.; on the compute nodes, it will give you the MPI-parallel version. Note that the parallel version will not work on the login nodes (see the example below).
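    For example, a typical preparation session on a login node (using the example input files from the batch-job section below) could look like this:

    > module load gromacs
    > gmx --version                                                        # check which build is active
    > gmx grompp -f full.mdp -c after_pr.gro -p speptide.top -o full.tpr   # prepare the run input (.tpr)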

GROMACS versions available on SuperMUC-NG Phase 1


  • The latest versions available on SuperMUC-NG Phase 1 are provided in the latest software stack, stack/24.5.0:
    module sw stack/24.5.0
  • The list of available GROMACS modules on SuperMUC-NG Phase 1 can be obtained, e.g., with the command

    module av -t gromacs
    gromacs/2024.3-intel-impi-openmp-r32-parallel
    gromacs/2024.3-intel-impi-openmp-r64-parallel
    gromacs/2024.3-intel-r32-serial
    gromacs/2024.5-intel-impi-openmp-r32-parallel
    gromacs/2024.5-intel-impi-openmp-r64-parallel
    gromacs/2024.5-intel-r32-serial
    
    
  • This list contains modules with full version information. The suffixes indicate
    • the compiler (-intel)
    • parallel or serial builds (with and without MPI and OpenMP support, respectively)
    • single (-r32) or double (-r64) precision

Thus, gromacs/2024.5-intel-r32-serial denotes GROMACS 2024.5 in single precision, built without MPI or OpenMP support. This is the default version of the stack.
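To use one of the parallel builds instead, switch to the stack and load the module by its full name, e.g. for the double-precision MPI/OpenMP build:

module sw stack/24.5.0
module load gromacs/2024.5-intel-impi-openmp-r64-parallel
module list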

  • For compatibility, older versions of GROMACS are available in the default stack spack/22.2.1, following the same naming convention:
    gromacs/2020.4-plumed
    gromacs/2020.6-plumed
    gromacs/2021.4-plumed
    gromacs/2021.5
    gromacs/2021.5-gcc
    gromacs/2021.5-r64
    gromacs/2021.6
    gromacs/2021.6-gcc
    gromacs/2021.6-r64
    gromacs/2022.3
    gromacs/2022.3-gcc
    gromacs/2022.3-r64

GROMACS versions available on Linux-Cluster


  • On CoolMUC-4, GROMACS is available via stack/24.5.0. To list the available versions:
    module sw stack/24.5.0
    module av gromacs
  • The versions shown by module av gromacs are:
    gromacs/2024.3-intel-impi-openmp-r32-parallel
    gromacs/2024.3-intel-impi-openmp-r64-parallel
    gromacs/2024.3-intel-r32-serial
    gromacs/2024.5-intel-impi-openmp-r32-parallel
    gromacs/2024.5-intel-impi-openmp-r64-parallel
    gromacs/2024.5-intel-r32-serial
  • Both serial (without MPI and OpenMP support) and parallel (with MPI and OpenMP support) versions of GROMACS are available on CoolMUC-4. However, on the login nodes only the serial versions may be loaded, so that system preparation (pdb2gmx, grompp, etc.) can be carried out there. Production simulations must be run on the compute nodes, where the parallel versions can be loaded in a SLURM script. The parallel versions have OpenMP and MPI support and are available in single (r32) or double (r64) precision. On the compute nodes, the gmx utility is called gmx_mpi for single precision and gmx_mpi_d for double precision. On the login nodes, the gmx utility is called without a suffix (see the sketch below).
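  • As a minimal sketch (module names taken from the list above; file names are placeholders):

    # on a CoolMUC-4 login node: prepare the run input with the serial build
    module sw stack/24.5.0
    module load gromacs/2024.5-intel-r32-serial
    gmx grompp -f full.mdp -c after_pr.gro -p speptide.top -o full.tpr

    # in a SLURM job script on the compute nodes: run the production simulation with a parallel build
    module sw stack/24.5.0
    module load gromacs/2024.5-intel-impi-openmp-r32-parallel
    mpiexec gmx_mpi mdrun -deffnm full        # use gmx_mpi_d with the r64 (double-precision) module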

GROMACS versions available on SuperMUC-NG Phase 2

  • SuperMUC-NG Phase 2 is a GPU-accelerated cluster; the available GROMACS versions can be listed via:
module sw stack/24.5.0
module av gromacs
gromacs/2024.3-intel-impi-openmp-r32-parallel
gromacs/2024.3-intel-impi-openmp-r32-parallel-pvc
gromacs/2024.3-intel-impi-openmp-r32-parallel-pvc-heffte
gromacs/2024.3-intel-impi-openmp-r64-parallel
gromacs/2024.3-intel-r32-serial
gromacs/2024.5-intel-impi-openmp-r32-parallel
gromacs/2024.5-intel-impi-openmp-r32-parallel-pvc
gromacs/2024.5-intel-impi-openmp-r32-parallel-pvc-heffte
gromacs/2024.5-intel-impi-openmp-r64-parallel
gromacs/2024.5-intel-r32-serial
  • Parallel (with OpenMP and MPI support) single-precision (r32) versions are available with or without GPU support (suffix "pvc"). As double precision is not supported by the SYCL build of GROMACS, the parallel double-precision (r64) versions are provided without GPU support. Serial versions, marked with the suffix "serial", are provided for preparing simulations (pdb2gmx, grompp, etc.). For large-scale simulations spanning multiple nodes, the PME calculation needs to be split over several GPUs. On the PVC GPUs, where MKL is used as the PME library, this split is done via the heFFTe library (suffix "heffte" in the module names). For efficient PME splitting, it is recommended to place the PME tiles on the same node(s) using -ddorder pp_pme.
  • Two example scripts are provided in the next section for submitting simulations: the first starts 8 MPI tasks (--ntasks-per-node=8), each running on a separate GPU tile. Each task uses 8 CPU cores (OMP_NUM_THREADS=8, -ntomp 8); one tile is dedicated to the PME calculation (-npme 1), while the particle-particle calculations are carried out on the remaining tiles. For efficient scaling to larger numbers of GPUs, the PME calculation needs to be split over several GPUs, as shown in the second script.
  • Node sharing is not implemented on Phase 2, so to use its resources most efficiently, all GPU tiles should be used in a job script. Therefore, for smaller systems, where using multiple GPUs would not be efficient and only a single tile is used per simulation, we advise running ensemble simulations on all 8 tiles (a separate simulation on each) using the -multidir option; a sketch is given below (please refer to the GROMACS manual for more information).
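  • A minimal sketch of such an ensemble run, assuming eight independent simulations prepared in the (placeholder) directories run1 to run8, each containing a topol.tpr file:

    export OMP_NUM_THREADS=8
    module sw stack/24.5.0
    module load gromacs/2024.5-intel-impi-openmp-r32-parallel-pvc

    # one independent simulation per GPU tile (8 tiles per Phase 2 node)
    mpirun -n 8 gmx_mpi mdrun -multidir run1 run2 run3 run4 run5 run6 run7 run8 -nb gpu -pme gpu -update gpu -ntomp 8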

Setting up batch jobs

For long production runs, a SLURM batch job should be used to run the program. The example batch scripts provided in this section require the input files speptide.top, after_pr.gro and full.mdp, all contained in the example archive,  to be placed in ~/mydir before the run.

Further notes:

  • to run in batch mode, submit the script using the sbatch command (a submission example is given at the end of these notes). To run small test cases interactively, first log in to the appropriate SLURM cluster and reserve the needed resources.

  • for batch jobs, the nice switch of mdrun is set to 0. Please omit this switch when running interactively; otherwise your job will be forcibly removed from the system after some time.

  • please do not forget to replace the dummy e-mail address and the input folder 'mydir' in the example scripts with your own.
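  • for example, if one of the example scripts is saved as gromacs_job.sh (a placeholder name), it can be submitted and monitored with:

    sbatch gromacs_job.sh            # submit the batch job
    squeue -M cm4 -u $USER           # check its status on CoolMUC-4 (omit "-M cm4" on SuperMUC-NG)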


Linux-Cluster with SLURM (gromacs/2024.3):

#!/bin/bash
#SBATCH -o /home/cluster/<group>/<user>/mydir/gromacs.%j.out
#SBATCH -D /home/cluster/<group>/<user>/mydir
#SBATCH -J <job_name>
#SBATCH --get-user-env
#SBATCH --clusters=cm4
#SBATCH --partition=cm4_std
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=112
#SBATCH --mail-type=end
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --export=NONE
#SBATCH --qos=cm4_std
#SBATCH --time=24:00:00

module sw stack/24.5.0

# load the gromacs version you would like to use

module load gromacs/2024.3-intel-impi-openmp-r32-parallel
module list

mpiexec gmx_mpi mdrun -s full -e full -o full -c after_full -g flog

SuperMUC-NG Phase 1 with SLURM:

#!/bin/bash
#SBATCH -o ./%x.%j.out
#SBATCH -e ./%x.%j.err
#SBATCH -D ./
#SBATCH --mail-type=END
#SBATCH --time=00:15:00
#SBATCH --partition=test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --mail-user=<email_address>@<domain>
#SBATCH --account=<project id>
#SBATCH -J <job name>

module load slurm_setup
module sw stack/24.5.0 # for Gromacs 2024.5
module load gromacs
module list
mpiexec gmx mdrun -v -deffnm <input filenames>

SuperMUC-NG Phase 2 with SLURM (gromacs/2024.5), single-node run with one PME tile:

#!/bin/bash
#SBATCH -J MolecularSuperscaling
#SBATCH --account=........
#SBATCH --time=02:00:00
#SBATCH --export=NONE
#SBATCH --partition=general
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8 

export OMP_NUM_THREADS=8
export GMX_ENABLE_DIRECT_GPU_COMM=1

module sw stack/24.5.0
module load slurm_setup
module load gromacs/2024.5-intel-impi-openmp-r32-parallel-pvc

mpirun -n 8 gmx_mpi mdrun -s relax_ethanol_water_novsite_995334.tpr -nb gpu -pme gpu -update gpu -ntomp 8 -tunepme -npme 1

SuperMUC-NG Phase 2 with SLURM (gromacs/2024.5), multi-node run with PME splitting over several GPU tiles:

#!/bin/bash
#SBATCH -J MolecularSuperscaling
#SBATCH --account=........
#SBATCH --time=02:00:00
#SBATCH --export=NONE
#SBATCH --partition=general
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8 

export OMP_NUM_THREADS=8
export GMX_ENABLE_DIRECT_GPU_COMM=1
export GMX_GPU_PME_DECOMPOSITION=1
module sw stack/24.5.0
module load slurm_setup
module load gromacs/2024.5-intel-impi-openmp-r32-parallel-pvc-heffte

# simulation running on two nodes with two PME tiles
mpirun -n 16 gmx_mpi mdrun -s relax_ethanol_water_novsite_995334.tpr -nb gpu -pme gpu -update gpu -ntomp 8 -tunepme -npme 2 -ddorder pp_pme

Scaling on LRZ Systems

Figure: GROMACS scaling on SuperMUC-NG (SNG).

Documentation

After loading the environment module, the $GROMACS_DOC variable points to a directory containing documentation and tutorials.
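For example:

module load gromacs
ls $GROMACS_DOC        # list the installed documentation and tutorials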

For further information (including the man pages for all GROMACS subcommands), please refer to the GROMACS web site.