OpenFOAM on HPC Systems



What is OpenFOAM®?

In its simplest installed form, OpenFOAM® is a set of libraries and solvers for a wide range of problems in fluid dynamics and continuum mechanics, from simple laminar regimes to DNS or LES, including reactive turbulent flows. Many solvers have multiphysics features, for example coupling structural mechanics domains with CFD domains, and additional capabilities such as Molecular Dynamics (MD) are available as modules.

OpenFOAM® is also a C++ framework for manipulating fields and solving general partial differential equations by means of finite volume methods (FVM) on unstructured grids. It is therefore easy to adapt to complex geometries and a wide spectrum of configurations and applications. Furthermore, the software is MPI-parallel and ships with a large set of useful utilities for importing and exporting meshes, configurations, etc., with interfaces to, among others, Fluent, CFX, ParaView, and EnSight.

License Terms and Usage Conditions

OpenFOAM® is published under the GNU General Public License (GPLv3), and the source code is freely available. Users are encouraged to download and compile their preferred version/flavor of OpenFOAM on the LRZ HPC clusters.

ESI OpenFOAM, The OpenFOAM Foundation, Foam Extend

The installation procedure can be a bit clumsy, although one can in general follow the installation instructions of the respective OpenFOAM distributor. You have to decide on the MPI flavor (Intel MPI, OpenMPI, etc.) and on the use of the various third-party libraries. Our recommendation is usually to compile any dependency you need yourself; this way you remain (to a large extent) independent of the LRZ system.

In case you need help, please contact the LRZ ServiceDesk.

We offer some support for maintained OpenFOAM installations (see next paragraph). The decision on version and flavor is guided by the size of the requesting user groups. We can only support officially released OpenFOAM versions (no development versions).

Getting Started

Check which versions are available (i.e. installed); different systems may offer different versions/flavors:

~> module avail openfoam
------- /lrz/sys/spack/..........  -------
openfoam/2006-gcc11-impi-i32  openfoam/2006-gcc11-impi-i64

If a suitable module is available, it can be loaded as follows.

~> module load openfoam/2006-gcc11-impi-i64

(i32 and i64 indicate whether the mesh indices (labels) are 32-bit or 64-bit integers. For very large meshes, i64 is the safer choice.)

COOLMUC-4

For the newer systems (CoolMUC-4), nothing is specifically prepared yet, but some OpenFOAM installations are available here as well. Either use

~> module switch spack/22.2.1
~> module av openfoam
~> module load openfoam      # whatever suits you

In that case, please also load gcc/11 and intel-mpi, as these are not loaded by default.

Alternatively, use the extra installations available via our extfiles module path:

~> module use /lrz/sys/share/modules/extfiles
~> module av openfoam

Here, the dependent modules (compiler and MPI) should be loaded automatically when you load OpenFOAM.

Please note: these installations are experimental. They should be feature-complete, but differences from the Spack-installed versions are likely. In case of problems, please contact us via our Service Desk.

 As a first step, you might consider copying the large suite of OpenFOAM tutorials into your FOAM_RUN directory by invoking:

~> mkdir -p $FOAM_RUN 
~> cp -r $FOAM_TUTORIALS $FOAM_RUN/

Smaller serial tutorial cases can be run on the login node. The larger tutorial cases, especially the MPI parallel cases, must be submitted to the HPC clusters (see below).
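
As a quick check of the environment, a small serial tutorial such as the lid-driven cavity can be run directly on the login node. A minimal sketch (the tutorial path is an example and may differ between OpenFOAM versions):

~> cd $FOAM_RUN/tutorials/incompressible/icoFoam/cavity/cavity   # path may differ between versions
~> blockMesh                                                      # generate the mesh
~> icoFoam                                                        # run the solver in serial
~> ls                                                             # time directories (0.1, 0.2, ...) should now appear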

Pre- and Post-Processing

For pre- and post-processing, i.e. meshing or visualizing results, the LRZ Remote Visualization System is a possible option. ParaView is the visualization tool of choice for OpenFOAM.

For post-processing using ParaView, you only need to create an empty file with the ending .foam (e.g. touch bla.foam) in the case directory, and open that file from ParaView.
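
For example (case and file names are arbitrary):

~> cd $FOAM_RUN/myCase              # hypothetical case directory
~> touch myCase.foam                # empty marker file; the name does not matter
~> paraview myCase.foam &           # requires ParaView to be available (module or local installation)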

You can either download your data to your PC/laptop and analyze it there, or, for larger cases, use one of the options ParaView offers to analyze the data in place (i.e. remotely on the LRZ systems). For this to work in parallel, the case must remain decomposed, and paraview or pvserver must be started with exactly as many MPI tasks as there are processor folders. Alternatively, you can use reconstructPar and decomposePar to change the decomposition. In the GUI, you need to select the option to open the decomposed case. If the number of MPI tasks does not match the number of processor folders, you will get error messages.
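
A minimal sketch of the parallel, decomposed workflow, assuming the case was decomposed into four subdomains (adjust the numbers to your case):

~> ls -d processor*                              # count the processor folders
processor0  processor1  processor2  processor3
~> mpiexec -n 4 pvserver --server-port=11111 &   # exactly one MPI task per processor folder
# connect from the ParaView GUI (File -> Connect), open the .foam file,
# and select "Decomposed Case" as the case type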

Batch Jobs on LRZ Clusters

Production runs and longer simulations must be performed on the HPC clusters. A Slurm job script for the Linux Cluster looks, for example, like this:

myfoam.sh
#!/bin/bash
#SBATCH -o ./jobOF_%j_%N.out 
#SBATCH -D .
#SBATCH -J my_job_name
#SBATCH --clusters=...                        # which cluster, if not default
#SBATCH --partition=...                       # which partition
#SBATCH --nodes=...                           # how many nodes
#SBATCH --ntasks-per-node=..                  # number of MPI ranks per node (check the documentation for the architecture specs, i.e. how many CPUs the nodes have)
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --time=02:00:00
module load slurm_setup
module load intel-mpi
module load openfoam
mpiexec interFoam -parallel

For other LRZ systems, please consult the documentation for further or different Slurm and batch job settings! Submission is done via sbatch myfoam.sh.
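
For example (the job ID is illustrative):

~> sbatch myfoam.sh
Submitted batch job 1234567
~> squeue --clusters=... -u $USER             # check the job status on the chosen cluster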

For this to work correctly, the total number of MPI tasks (nodes times tasks per node) must equal numberOfSubdomains in system/decomposeParDict!
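
A quick consistency check before submitting, assuming a job with 2 nodes and 56 tasks per node (the numbers are illustrative):

~> grep numberOfSubdomains system/decomposeParDict
numberOfSubdomains 112;                # must equal nodes x tasks per node (2 x 56 here)
~> decomposePar                        # creates processor0 ... processor111 before submission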

Own Installations

From Source

Please have a look at this guide. It essentially covers the installation-from-source procedure documented on the OpenFOAM documentation pages.

Using user_spack

Using Spack is probably the simplest approach on our systems.

user_spack install procedure example
~> module load user_spack                                                      # activate spack tools

~> spack list openfoam                                                         # find openfoam spack packages
openfoam  openfoam-org
==> 2 packages

~> spack info openfoam                                                         # get OpenFOAM's install options and variants
Package:   openfoam
[...]

~> spack spec -lINt openfoam%gcc@7.5.0                                         # see spack concretization of your recipe (example; yours may be different)
Input spec
--------------------------------
 -   [    ]  .openfoam%gcc@7.5.0

Concretized
--------------------------------
 -   jdcruds  [    ]  builtin.openfoam@2306%gcc@7.5.0~int64~kahip~knl~metis~mgridgen~paraview+scotch+source~vtk~zoltan build_system=generic precision=dp arch=linux-sles15-x86_64
 -   6prl5bb  [bl  ]      ^builtin.adios2@2.9.2%gcc@7.5.0~aws+blosc2+bzip2~cuda~dataspaces~fortran~hdf5~ipo~kokkos+libcatalyst~libpressio+mgard+mpi~pic+png~python~rocm+sst+sz+zfp build_system=cmake build_type=Release generator=make patches=48766ac arch=linux-sles15-x86_64
[^]  j6esrtn  [blr ]          ^builtin.bzip2@1.0.8%gcc@7.5.0~debug~pic+shared build_system=generic arch=linux-sles15-x86_64
 -   kbvmstu  [bl  ]          ^builtin.c-blosc2@2.11.1%gcc@7.5.0+avx2~ipo+lizard+lz4+snappy+zlib+zstd build_system=cmake build_type=Release generator=make arch=linux-sles15-x86_64
[...]

~> spack install -j 40 openfoam%gcc@7.5.0+int64+kahip+metis+mgridgen           # start installation
[...]                                                                          # will take some time


# Usage via module system:
~> module use ~/user_spack/23.1.0/modules/x86_avx512                           # path is system/target dependent!!

~> module av openfoam
------------------- /dss/dsshome1/00/******/user_spack/23.1.0/modules/x86_avx512 -------------------
openfoam/2306-gcc7-impi-i64  

~> module load openfoam                                                        # also intel-mpi for runtime environment; and gcc/7.5.0 for own solver compilation

~> blockMesh -help
[...]


# Usage via spack:
~> module load user_spack

~> spack find -pl openfoam
-- linux-sles15-x86_64 / gcc@7.5.0 ------------------------------
fhzksz5 openfoam@2306  /dss/dsshome1/00/*******/user_spack/23.1.0/opt/linux-sles15-x86_64/openfoam/2306-gcc-7.5.0-fhzksz5
==> 1 installed package

~> spack load openfoam                                                         # if several OF installations are there, using a hash is probably easier; here /fhzksz5

~> spack find --loaded
-- linux-sles15-x86_64 / gcc@7.5.0 ------------------------------
openfoam@2306
==> 1 loaded package

If problems occur, there is not much we (LRZ) can do: we are neither OpenFOAM developers nor maintainers of the Spack packages. We kindly ask you to report issues directly to them. It is often worth trying different compilers or compiler versions, as OpenFOAM reacts very sensitively to them. Also, please check what you really need: the simpler the dependency tree, the larger the chances of success (e.g. there is rarely a need to build VTK or ParaView).

Building User-defined Solvers/Libraries against the Spack OpenFOAM Installation

An example on CoolMUC-X might look as follows:

~> module rm intel-mpi intel-mkl intel
~> module load gcc intel-mpi openfoam/2006-gcc8-i64-impi

(You can preserve this environment using the module collection feature. This simplifies development work with frameworks like OpenFOAM.)
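
A minimal sketch of the collection feature (the collection name is arbitrary):

~> module save openfoam_dev            # store the currently loaded modules as a collection
~> module savelist                     # list the stored collections
~> module restore openfoam_dev         # restore the environment in a later session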

The name of the OpenFOAM module contains the compiler, gcc/8, so this compiler module needs to be loaded.

The MPI module is usually the default Intel MPI module (preferably the variant built for GCC, as shown). Since Intel MPI is rather well behaved, and the actual MPI library is wrapped by OpenFOAM's libPstream.so, you should rarely need to link directly against the MPI library.

Example continued ...

~> cp -r $FOAM_APP/solvers/incompressible/pimpleFoam .
~> cd pimpleFoam
~/pimpleFoam> find . -name files -exec sed -i 's/FOAM_APPBIN/FOAM_USER_APPBIN/g; s/FOAM_LIBBIN/FOAM_USER_LIBBIN/g' {} +
~/pimpleFoam> WM_NCOMPPROCS=20 wmake
[...]
~/pimpleFoam> which pimpleFoam 
<your-HOME>/OpenFOAM/<your USER ID>-v2006/platforms/linux64GccDPInt64-spack/bin/pimpleFoam

That's it. Please note that using FOAM_USER_APPBIN and FOAM_USER_LIBBIN instead of FOAM_APPBIN and FOAM_LIBBIN is essential, because you do not have permission to install anything into our system folders.

Your user bin and lib paths usually precede the system paths in the search order, so they are searched first. Therefore, in the example above, pimpleFoam is found in the user path, not the system path.

For testing, prepare a small case for one or two nodes, and use the interactive queue of the cluster to run a few time steps.
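
A sketch of such an interactive test; cluster and partition names are left as placeholders, and the case name is hypothetical (consult the cluster documentation for the actual interactive partition and core counts):

~> salloc --clusters=... --partition=... --nodes=1 --ntasks-per-node=28 --time=00:30:00
~> module load gcc intel-mpi openfoam/2006-gcc8-i64-impi
~> cd $FOAM_RUN/myTestCase             # small test case with numberOfSubdomains = 28
~> decomposePar
~> mpiexec interFoam -parallel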

GPFS parallel Filesystem on the LRZ HPC clusters!

By default, OpenFOAM produces lots of small files: one per processor, per write-out step, and per field. GPFS (i.e. WORK and SCRATCH at LRZ) is not made for such a fine-grained file/folder structure. OpenFOAM currently does not appear to offer HDF5 or NetCDF output that would address this and similar issues.

For more recent versions of OpenFOAM, however, you can use collated I/O. Using the FOAM_IORANKS environment variable, you can even determine which ranks perform the I/O. Our recommendation is to have one rank per node perform the I/O.
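
A sketch of how this can be enabled at run time, e.g. inside the job script above (the rank list assumes 2 nodes with 56 ranks each; adjust it to your job):

export FOAM_IORANKS='(0 56)'                     # first rank of each node performs the I/O
mpiexec interFoam -parallel -fileHandler collated
# alternatively, the fileHandler can be set in the OptimisationSwitches of the controlDict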

As I/O might be a bottleneck anyway, it is advisable to think about the problem before running it brute-force on the HPC clusters with possibly hundreds or even thousands of parallel processes. What do you want to get out of your simulation? What are the relevant questions to be answered? OpenFOAM also offers features for in-situ post-processing (Catalyst might be an option here). This largely mitigates the I/O problems, because it reduces the need to store large amounts of data and/or hundreds of thousands of small files for offline post-processing with e.g. ParaView. Please consult the OpenFOAM User Guide and look for function objects!
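
As an illustration, a probes function object that samples a few fields at given locations during the run; this is an excerpt of a hypothetical system/controlDict (see the User Guide for the many other function objects):

~> cat system/controlDict              # excerpt of a hypothetical case (only the functions sub-dictionary shown)
functions
{
    probes1
    {
        type            probes;
        libs            ("libsampling.so");
        fields          (p U);
        probeLocations  ( (0.1 0.1 0.01) );
        writeControl    timeStep;
        writeInterval   10;
    }
}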

Post-Processing via Paraview

Please use the dedicated LRZ paraview modules, NOT paraFoam!

  1. Small cases (<200 MByte) can be copied (scp, rsync, FileZilla, WinSCP, sftp, ...) and analyzed locally using ParaView.
  2. Medium-size cases (<1 GByte) can be analyzed using ParaView on the login nodes; we recommend starting a VNC server-client connection through an SSH tunnel. pvservers can also be started in parallel (use the -launcher fork option of mpiexec) and connected to locally on that node through the ParaView GUI (or, also possible, through a properly set-up SSH tunnel, which we do not recommend, however). Please be considerate of the other users on the login node and do not monopolize all resources!
  3. Large cases: start a Slurm job and distribute parallel pvservers over as many nodes as necessary, as described in the ParaView Server-Client mode section (see also the sketch after this list).
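
A sketch of such a pvserver job script; cluster, partition, node counts, and the exact ParaView module name are placeholders that need to be adapted (see the ParaView documentation pages for the recommended setup):

pvserver.sh
#!/bin/bash
#SBATCH -o ./pvserver_%j.out
#SBATCH -D .
#SBATCH -J pvserver
#SBATCH --clusters=...
#SBATCH --partition=...
#SBATCH --nodes=...
#SBATCH --ntasks-per-node=...
#SBATCH --export=NONE
#SBATCH --get-user-env
#SBATCH --time=02:00:00
module load slurm_setup
module load paraview                   # LRZ ParaView module; must match the client version
mpiexec pvserver --server-port=11111
# then connect the ParaView GUI to the first compute node, e.g. through an SSH tunnel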

Remarks, Topics, Troubleshooting

swak4Foam Installation

swak4Foam is an OpenFOAM-external package and can be installed like any other user-provided solver. We cannot guarantee that the following procedure will always work out of the box; specifically for the newest OpenFOAM versions, compatibility may be lacking. However, the following procedure (shown here for the Linux Cluster) has worked for users.

Short Installation Guide
> module load openfoam/2006-gcc8-i64-impi
# the module name shows that gcc/8 and the Intel MPI module built for GCC are needed (always the default Intel MPI version)
> module rm intel-mpi intel-mkl intel
> module load gcc/8 intel-mpi/2019-gcc
> run                                   # OpenFOAM shell alias: changes to $FOAM_RUN
> wget "https://github.com/Unofficial-Extend-Project-Mirror/openfoam-extend-swak4Foam-dev/archive/branches/develop.tar.gz" -O swak4Foam.tar.gz
> tar xf swak4Foam.tar.gz && mv openfoam-extend-swak4Foam-dev-branches-develop swak4Foam && cd swak4Foam
swak4Foam> ./maintainanceScripts/compileRequirements.sh
swak4Foam> export PATH=$PWD/privateRequirements/bin:$PATH
swak4Foam> ln -s swakConfiguration.debian swakConfiguration
swak4Foam> WM_NCOMPPROCS=20 ./Allwmake -j 20
# possibly repeat this step until you see no errors
[...]
wmake libso simpleCloudFunctionObjects
wmake libso swakCloudFunctionObjects
wmake funkySetFields
wmake funkySetBoundaryField
wmake replayTransientBC
wmake funkySetAreaFields
wmake funkyDoCalc
wmake calcNonUniformOffsetsForMapped
wmake fieldReport
wmake funkyPythonPostproc
wmake funkySetLagrangianField
wmake funkyWarpMesh
wmake makeAxialMesh
[...]
swak4Foam> ls $FOAM_USER_APPBIN
calcNonUniformOffsetsForMapped  funkySetAreaFields       funkyWarpMesh
fieldReport                     funkySetBoundaryField    makeAxialMesh
funkyDoCalc                     funkySetFields           replayTransientBC
funkyPythonPostproc             funkySetLagrangianField

The source is the development branch of swak4Foam (see http://openfoamwiki.net/index.php/Installation/swak4Foam/Downloading#swak4Foam_development_version).

For usage, it is currently necessary to execute the following manually beforehand (put it into your ~/.profile if you want):

> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$FOAM_USER_LIBBIN

Otherwise, no change of the modules is necessary for usage. You can verify the setup as shown below (e.g. with ldd on $FOAM_USER_APPBIN/fieldReport); no "not found" entries should appear.
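
A quick check that the runtime environment is complete (every OpenFOAM and swak4Foam utility accepts -help):

> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$FOAM_USER_LIBBIN
> ldd $FOAM_USER_APPBIN/fieldReport | grep "not found"       # should print nothing
> funkySetFields -help                                       # smoke test of a swak4Foam utility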

Legacy Versions

Old versions of OpenFOAM (more than three releases behind the current version) cannot be supported on LRZ systems.
You need to port your setup to one of the supported recent versions of OpenFOAM.
In particular, the older versions do not exploit the vector-register hardware of modern CPUs!
Generally, we recommend using the latest stable release available.