General Information

COMSOL Multiphysics is a finite element analysis, solver, and simulation software package for various physics and engineering applications, especially coupled phenomena (multiphysics). The software provides conventional physics-based user interfaces and supports coupled systems of partial differential equations (PDEs). COMSOL Multiphysics offers an IDE and a unified workflow for electrical, mechanical, fluid, acoustics, and chemical applications.

Licensing

COMSOL Multiphysics is a commercial product that requires a network license to run on a cluster. Some groups of the MNTF faculty operate a network license server, which is used when running COMSOL.

Obtaining Access

In order to use COMSOL, you need to prove that you are eligible and open a ticket at the HPC-Servicedesk requesting access.

You may be able to load the comsol module, but without access you will not be able to run any comsol command.

Running COMSOL

Discover available COMSOL versions

ml spider comsol
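
To see how to load a specific version (6.2 is the version used in the template below), query it directly:

ml spider comsol/6.2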

Slurm Job Template

Example Sbatch Template
#!/usr/bin/env bash

# Job name
#SBATCH --job-name=comsol

# Select a partition
#SBATCH --partition=epyc

# Select number of nodes
#SBATCH --nodes=1

# Select number of (MPI) tasks per node
#SBATCH --ntasks-per-node=4

# Select number of threads per task (OpenMP)
#SBATCH --cpus-per-task=8

# Request memory per CPU core (total memory scales with the number of requested cores)
#SBATCH --mem-per-cpu=4G

# Time limit (7-0 means 7 days)
#SBATCH --time=7-0

# Safeguard: limit the number of OpenMP threads to the number of
# requested CPU cores, in case any part of the job is multithreaded.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Load required modules if necessary
ml purge
ml comsol/6.2

# Load Intel MPI
ml intel impi

# COMSOL uses the Hydra process manager; unset the following
# environment variable to avoid warnings
unset I_MPI_PMI_LIBRARY

comsol batch -mpibootstrap slurm \
    -nnhost $SLURM_NTASKS_PER_NODE \
    -nn $SLURM_NTASKS \
    -np $SLURM_CPUS_PER_TASK \
    -mpiroot "$I_MPI_ROOT" \
    -inputfile input.mph \
    -outputfile output.mph
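
Save the template to a file and submit it with sbatch; the script name below is just an example:

sbatch comsol_job.sh

# Check the status of your jobs
squeue -u $USER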

Need for Intel MPI

COMSOL (v6.1 and v6.2) will start properly, but the processes will just hang when using more than one node. Load the impi module and set -mpiroot to make it work (see the template above).
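
The relevant workaround, isolated from the template; the echo line is just a sanity check and assumes the impi module exports I_MPI_ROOT (typical for Intel MPI modules):

ml intel impi
echo "$I_MPI_ROOT"   # should print the Intel MPI installation path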

Performance considerations

Choosing the total number of cores and nodes

Not all problems scale well when you increase the number of cores and/or nodes. Small problems often run faster when fewer resources are used. In any case, this is highly dependent on the specific input and warrants benchmarking if many simulations need to be executed over days or weeks.
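
One simple way to benchmark, assuming the template above is saved as comsol_job.sh (a hypothetical file name): submit the same input with different node counts and compare run times. Options passed to sbatch on the command line override the #SBATCH directives in the script:

# Submit the same job with 1, 2, and 4 nodes to compare run times
for n in 1 2 4; do
    sbatch --nodes=$n --job-name=comsol-scale-$n comsol_job.sh
done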

Choosing the number of cores per process

Our cluster nodes have 128 CPU cores per node; however, choosing one process (--ntasks-per-node=1) with 128 OpenMP threads (--cpus-per-task=128) is not very performant (it can be 2-5x slower). We have had good experience with 4, 8, or 16 CPU cores per process. We recommend not using more than 16 CPU cores per process due to NUMA latency penalties.
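
For example, the following directives fill a whole 128-core node with 16 processes of 8 cores each; this is only a sketch, and the best split depends on your model:

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=8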

Temporary directories

By default, all temporary files are stored on a node-local SSD (/tmp). If you need a different location, add -tmpdir $LOCATION to the comsol command in the Slurm template above.
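
For example, the template's comsol command with a job-specific temporary directory on a shared scratch filesystem ($MY_SCRATCH is a hypothetical placeholder; substitute a path that exists on your system):

# Hypothetical scratch location; $MY_SCRATCH is a placeholder
TMPLOC="$MY_SCRATCH/comsol-tmp-$SLURM_JOB_ID"
mkdir -p "$TMPLOC"

comsol batch -mpibootstrap slurm \
    -nnhost $SLURM_NTASKS_PER_NODE \
    -nn $SLURM_NTASKS \
    -np $SLURM_CPUS_PER_TASK \
    -mpiroot "$I_MPI_ROOT" \
    -tmpdir "$TMPLOC" \
    -inputfile input.mph \
    -outputfile output.mph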

Support

If you have any problems with COMSOL please contact the HPC-Servicedesk.

Also, if you have improvements to this documentation that other users can benefit from, please reach out!