General

Message Passing Interface (MPI) is one of the most popular standardized APIs for parallel processing, both within a single node and across many nodes. With MPI, an independent process is spawned for each task in a Slurm job, and all processes communicate with each other via MPI using one of the many libraries implementing the MPI API.

OpenMPI vs OpenMP

Do not mix up OpenMPI (an implementation of MPI) and OpenMP (Open Multi-Processing, a standardized API for shared-memory multi-processing/threading). They just happen to have names that are very similar.

In MPI parallelization, variables, threads, file handles, and various other pieces of state are NOT shared between the processes, not even on the same node. Nothing is shared except information and data that are explicitly sent and received via the MPI API.

In contrast, OpenMP runs many threads within a single process, and all memory is shared between them.

You may even combine the two (OpenMP on top of MPI, never the other way around), for example to reduce memory usage or to further speed up individual MPI processes.
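
As a rough illustration of the hybrid model, a job script might request several MPI processes with several OpenMP threads each. This is only a sketch: ./hybrid_app and the resource numbers are placeholders, and complete cluster-specific examples are given in Submitting Parallel Jobs (MPI/OpenMP).

#!/bin/bash
#SBATCH --nodes=2                # 2 nodes
#SBATCH --ntasks-per-node=4      # 4 MPI processes per node
#SBATCH --cpus-per-task=8        # 8 CPUs (OpenMP threads) per MPI process

# let OpenMP use exactly the CPUs Slurm allocated to each task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

srun ./hybrid_app                # srun starts the 8 MPI processes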


Implementations

| MPI Implementation | Compiler modules supported | MPI modules | MPI type used by Slurm | Comment |
| --- | --- | --- | --- | --- |
| OpenMPI (vanilla) | gcc, aocc, intel (some) | openmpi | pmix_v4 or pmix_v5 | manual loading required; will also load an appropriate pmix module |
| OpenMPI (Nvidia HPC-X) | nvhpc | hpcx | pmix_v4 | automatically loaded when loading nvhpc; other hpcx flavors available; will also load an appropriate pmix module |
| Intel MPI | intel | impi | pmi2 | manual loading required |
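
Loading these toolchains typically looks like the sketch below (exact module names and versions may differ; check ml avail on the cluster):

# OpenMPI (vanilla), here together with the GNU compilers
ml gcc openmpi

# Nvidia HPC-X: the hpcx module is loaded automatically together with nvhpc
ml nvhpc

# Intel MPI together with the Intel compilers
ml intel impi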


Running MPI programs with srun

Always run MPI programs using srun

For all MPI implementations on the cluster, MPI process management can be handled by Slurm itself; hence it is recommended to launch MPI applications using the srun command. For examples, see Submitting Pure MPI Jobs or Submitting Parallel Jobs (MPI/OpenMP).
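
A minimal pure-MPI job script might look like the sketch below; ./mpi_app and the task count are placeholders, and the linked pages contain complete, cluster-specific examples:

#!/bin/bash
#SBATCH --ntasks=64     # total number of MPI processes

ml gcc openmpi          # load the same modules used to build the application

srun ./mpi_app          # srun launches all 64 MPI processes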

Compiling MPI Code

All modules providing MPI implementations (or dependencies thereof) will set the following environment variables:

| Environment Variable | Description |
| --- | --- |
| MPICC | C MPI wrapper compiler |
| MPICXX | C++ MPI wrapper compiler |
| MPIFORT | Fortran MPI wrapper compiler |
| MPIF90 | Fortran MPI wrapper compiler |
| MPIF77 | Fortran MPI wrapper compiler |
| SLURM_MPI_TYPE | Defines the MPI type used by srun. See also Implementations |
plus some other implementation-specific variables (use ml show to see all variables a module sets).
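
For example, you can inspect what a given module sets and verify the variables in your environment (openmpi here is just one of the MPI modules from the table in Implementations):

ml show openmpi                  # list everything the module sets
echo $MPICC $SLURM_MPI_TYPE      # check the wrapper compiler and MPI type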

All MPI implementations work the same way when compiling code (see the example after the list):

  1. Load a compiler
  2. Load an MPI implementation
  3. Pass the wrapper compilers to the build system
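
Put together, a minimal compile might look like this sketch (hello_mpi.c is a placeholder source file; the module names follow the table in Implementations):

ml gcc openmpi                        # 1. compiler and 2. MPI implementation
$MPICC -O2 -o hello_mpi hello_mpi.c   # 3. compile with the MPI wrapper compiler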

Every application may behave differently

Unfortunately, every build system, or even different code using the same build system, might behave slightly differently in which compiler variables (CC, MPICC, etc.) it expects.

In the ideal case, both the CC and MPICC environment variables (and the corresponding variables for C++ and Fortran, see the table above) are picked up correctly by the build system (GNU build system, cmake, or meson).

Sometimes, however, the compiler variables (CC, etc.) are expected to refer to the MPI compiler wrappers instead. In these cases you may want to try overriding them as shown below.

It is usually not harmful to use an MPI compiler wrapper for code that does not use MPI at all.


Passing MPI wrappers to the GNU build system (autotools, automake)
CC=$MPICC CXX=$MPICXX FC=$MPIFORT F90=$MPIF90 F77=$MPIF77 ./configure ...
Passing MPI wrappers to the cmake build system
CC=$MPICC CXX=$MPICXX FC=$MPIFORT F90=$MPIF90 F77=$MPIF77 cmake ...
Passing MPI wrappers to the meson build system
CC=$MPICC CXX=$MPICXX FC=$MPIFORT F90=$MPIF90 F77=$MPIF77 meson ...