A lot of scientific software, codes and libraries can be parallelized via MPI or OpenMP/multiprocessing. Avoid submitting inefficient jobs! If your code can be parallelized only partially (serial parts remaining), familiarize yourself with Amdahl's law and make sure your job efficiency is still well above 50%.

Default Values: Slurm parameters like --ntasks and --cpus-per-task default to 1 if omitted.
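To illustrate the 50% guideline, here is a small sketch of Amdahl's law (the 90% parallel fraction is just an example value, not a property of any particular code): with parallel fraction P on N CPUs, the speedup is S(N) = 1 / ((1 - P) + P/N) and the job efficiency is S(N)/N.

```python
# Illustrative sketch of Amdahl's law: speedup and efficiency
# for a code whose parallelizable fraction is P, run on n_cpus CPUs.

def speedup(P: float, n_cpus: int) -> float:
    """Amdahl's law: S = 1 / ((1 - P) + P / N)."""
    return 1.0 / ((1.0 - P) + P / n_cpus)

def efficiency(P: float, n_cpus: int) -> float:
    """Parallel efficiency: speedup divided by the number of CPUs used."""
    return speedup(P, n_cpus) / n_cpus

# Example: a code that is 90% parallelizable.
P = 0.90
for n in (1, 8, 32):
    print(f"{n:3d} CPUs: speedup {speedup(P, n):5.2f}, "
          f"efficiency {efficiency(P, n):5.1%}")
```

With P = 0.90, efficiency is still about 59% on 8 CPUs but drops below 25% on 32 CPUs, so requesting 32 CPUs for such a code would fall well short of the 50% guideline.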
Hybrid MPI+OpenMP Jobs (n×m×p CPUs over m×p Tasks on p Nodes)

Many codes combine multithreading with multinode parallelism using a hybrid OpenMP/MPI approach. Below is a Slurm script appropriate for such a code. Make sure your code actually supports this combined MPI + OpenMP mode of operation.
#!/usr/bin/env bash
#SBATCH --job-name=test
#SBATCH --partition=epyc
#SBATCH --mail-type=END,INVALID_DEPEND
#SBATCH --mail-user=<e-mail address>
#SBATCH --time=1-0
# Request memory per CPU
#SBATCH --mem-per-cpu=1G
# Request n CPUs per task
#SBATCH --cpus-per-task=n
# Request m tasks per node
#SBATCH --ntasks-per-node=m
# Run on p nodes
#SBATCH --nodes=p
# Load application module here if necessary
# set number of OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# No need to pass number of tasks to srun
srun my_program
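For concreteness, here is a hypothetical filled-in version of the header above (the values 4, 8 and 2 are arbitrary examples): with --cpus-per-task=4, --ntasks-per-node=8 and --nodes=2, Slurm allocates 4×8×2 = 64 CPUs, and srun launches 8×2 = 16 MPI ranks, each spawning 4 OpenMP threads.

```shell
#!/usr/bin/env bash
#SBATCH --job-name=hybrid-example
#SBATCH --partition=epyc
#SBATCH --time=1-0
#SBATCH --mem-per-cpu=1G
#SBATCH --cpus-per-task=4       # n = 4 OpenMP threads per MPI rank
#SBATCH --ntasks-per-node=8     # m = 8 MPI ranks per node
#SBATCH --nodes=2               # p = 2 nodes -> 16 ranks, 64 CPUs total

# Each rank runs as many OpenMP threads as CPUs it was allocated.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun my_program                 # srun starts all 16 ranks; no -n needed
```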
Discouraged use of mpirun

The use of mpirun is heavily discouraged when queuing your job via Slurm; launch MPI programs with srun instead, as in the script above.
Environment variables for different MPI flavors

For modules provided by the HPC-Team these variables are most likely already set in the corresponding module definition.
export I_MPI_PMI_LIBRARY=/hpc/gpfs2/sw/pmi2/current/lib/libpmi2.so
export I_MPI_FABRICS=shm:ofi
export FI_PROVIDER=mlx
export SLURM_MPI_TYPE=pmi2
# or more simply:
module load impi-envvars
export SLURM_MPI_TYPE=pmix_v4 # or pmix_v3 or pmix_v2 depending on what your self-compiled OpenMPI version supports
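Before setting SLURM_MPI_TYPE for a self-compiled MPI, you can ask srun which PMI plugin types the cluster's Slurm installation actually supports (the output is cluster-specific, so the list below is only an assumed example):

```shell
# List the MPI/PMI plugin types this Slurm installation supports;
# SLURM_MPI_TYPE must name one of them (e.g. pmi2 or pmix_v4).
srun --mpi=list
```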