Pflotran on HPC Systems

What is Pflotran?

Pflotran is an open source, state-of-the-art massively parallel subsurface flow and reactive transport code based on PETSc.

LRZ does not officially support Pflotran, but users can easily build it on their own.

Getting Started

Installation

The following recipes worked on CoolMUC-4 as of Oct. 2025. They target PETSc 3.21.5 and Pflotran 7.0 and are adapted from Pflotran's installation documentation.

Even on other systems or with other versions, please try these recipes first. Only if you run into difficulties that you cannot resolve, or have questions concerning LRZ-specific cluster settings and environments, please contact our Service Desk.

Install Recipe using GCC Compiler, Intel MPI and Intel MKL
###### ADAPT AS NEEDED ###########################
export INSTALL_DIR=$SCRATCH/pflotran_inst
export PFLOTRAN_BUILD_DIR=$SCRATCH/tmp_build_pflotran
export PETSC_VERSION=v3.21.5
export PFLOTRAN_VERSION=v7.0

export NPROCS=80
###### ADAPT AS NEEDED ###########################

rm -rf $PFLOTRAN_BUILD_DIR
mkdir -p $PFLOTRAN_BUILD_DIR
pushd $PFLOTRAN_BUILD_DIR

git clone https://gitlab.com/petsc/petsc.git petsc-src
pushd petsc-src/
git checkout $PETSC_VERSION
module load gcc intel-mpi intel-mkl

# necessary to let petsc test MPI on login node
export I_MPI_HYDRA_BOOTSTRAP=fork I_MPI_FABRICS=shm 
unset I_MPI_PMI_LIBRARY I_MPI_HYDRA_IFACE 

./configure --PETSC_ARCH=arch-linux-c-opt \
            --with-blaslapack-dir=$MKLROOT/lib \
            --with-cc=$(which mpigcc) \
            --with-cxx=$(which mpigxx) \
            --with-fc=$(which mpif90) \
            --with-mpi-f90=$(which mpif90) \
            --with-mpiexec=$(which mpiexec) \
            --with-mpi-include=$I_MPI_ROOT/include \
            --with-mpi-lib=$I_MPI_ROOT/lib/libmpi.a \
            COPTFLAGS='-g -O3 -march=native' \
            CXXOPTFLAGS='-g -O3 -march=native' \
            FOPTFLAGS='-g -O3 -march=native -Wno-unused-function -fallow-argument-mismatch' \
            --download-hypre=yes \
            --download-mumps=yes \
            --download-superlu_dist=yes \
            --download-scalapack=yes \
            --download-hdf5=yes \
            --download-hdf5-fortran-bindings=yes \
            --download-hdf5-configure-arguments="--with-zlib=yes" \
            --download-fblaslapack=yes \
            --download-metis=yes \
            --download-parmetis=yes \
            --with-debugging=no \
            --prefix=$INSTALL_DIR


make -j $NPROCS PETSC_DIR=$PWD PETSC_ARCH=arch-linux-c-opt all
make PETSC_DIR=$PWD PETSC_ARCH=arch-linux-c-opt install

export PATH=$INSTALL_DIR/bin:$PATH
export LD_LIBRARY_PATH=$INSTALL_DIR/lib:$LD_LIBRARY_PATH
export PETSC_DIR=$INSTALL_DIR

popd

git clone https://bitbucket.org/pflotran/pflotran pflotran-src
pushd pflotran-src
git checkout $PFLOTRAN_VERSION
./configure --prefix=$INSTALL_DIR
make -j $NPROCS
make install
popd

popd

rm -rf $PFLOTRAN_BUILD_DIR

Install Recipe using Intel Compiler, Intel MPI and Intel MKL
###### ADAPT AS NEEDED ###########################
export INSTALL_DIR=$SCRATCH/pflotran_inst
export PFLOTRAN_BUILD_DIR=$SCRATCH/tmp_build_pflotran
export PETSC_VERSION=v3.21.5
export PFLOTRAN_VERSION=v7.0

export NPROCS=80
###### ADAPT AS NEEDED ###########################

rm -rf $PFLOTRAN_BUILD_DIR
mkdir -p $PFLOTRAN_BUILD_DIR && pushd $PFLOTRAN_BUILD_DIR

git clone https://gitlab.com/petsc/petsc.git petsc-src
pushd petsc-src/
git checkout $PETSC_VERSION
module load intel intel-mpi intel-mkl

# necessary to let petsc test MPI on login node
export I_MPI_HYDRA_BOOTSTRAP=fork I_MPI_FABRICS=shm 
unset I_MPI_PMI_LIBRARY I_MPI_HYDRA_IFACE 

./configure --PETSC_ARCH=arch-linux-c-opt \
            --with-blaslapack-dir=$MKLROOT/lib \
            --with-cc=$(which mpiicx) \
            --with-cxx=$(which mpiicpx) \
            --with-fc=$(which mpiifx) \
            --with-mpi-f90=$(which mpiifx) \
            --with-mpiexec=$(which mpiexec) \
            --with-mpi-include=$I_MPI_ROOT/include \
            --with-mpi-lib=$I_MPI_ROOT/lib/libmpi.a \
            COPTFLAGS='-g -O3 -march=native' \
            CXXOPTFLAGS='-g -O3 -march=native' \
            FOPTFLAGS='-g -O3 -march=native' \
            --download-hypre=yes \
            --download-mumps=yes \
            --download-superlu_dist=yes \
            --download-scalapack=yes \
            --download-hdf5=yes \
            --download-hdf5-fortran-bindings=yes \
            --download-hdf5-configure-arguments="--with-zlib=yes" \
            --download-fblaslapack=yes \
            --download-metis=yes \
            --download-parmetis=yes \
            --with-debugging=no \
            --prefix=$INSTALL_DIR


make -j $NPROCS PETSC_DIR=$PWD PETSC_ARCH=arch-linux-c-opt all
make PETSC_DIR=$PWD PETSC_ARCH=arch-linux-c-opt install

export PATH=$INSTALL_DIR/bin:$PATH
export LD_LIBRARY_PATH=$INSTALL_DIR/lib:$LD_LIBRARY_PATH
export PETSC_DIR=$INSTALL_DIR

popd

git clone https://bitbucket.org/pflotran/pflotran pflotran-src
pushd pflotran-src
git checkout $PFLOTRAN_VERSION
./configure --prefix=$INSTALL_DIR
make -j $NPROCS
make install

popd
popd

rm -rf $PFLOTRAN_BUILD_DIR
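
After either recipe finishes, a quick check on the login node confirms that the binary was installed and that its shared libraries resolve. This check is merely a suggestion; it assumes PATH was extended as in the recipes above.

# confirm that pflotran is found via PATH and that all shared libraries resolve
which pflotran
ldd $(which pflotran)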

Usage

For use in Slurm jobs, it is essential to load the same compiler, MPI and MKL modules as those used for the build. Furthermore, add the bin and lib paths of the installation directory to PATH and LD_LIBRARY_PATH, respectively.
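
For example, with the GCC-based recipe above, the corresponding lines in a job script would be (adapt the module list for the Intel recipe, and INSTALL_DIR if you installed elsewhere):

# load the same toolchain that was used for the build (GCC recipe shown here)
module load gcc intel-mpi intel-mkl

# make the Pflotran installation visible to the job
export INSTALL_DIR=$SCRATCH/pflotran_inst
export PATH=$INSTALL_DIR/bin:$PATH
export LD_LIBRARY_PATH=$INSTALL_DIR/lib:$LD_LIBRARY_PATH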

pflotran can then be started via mpiexec <mpi options> pflotran <options>, like any other MPI program. Please consult the cluster's Slurm job and MPI configuration documentation pages for details.
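
For illustration, a minimal sketch of the launch line inside a Slurm batch script; the input file name my_model.in is a placeholder, and the number of MPI ranks is taken from the Slurm allocation:

# start one MPI rank per allocated Slurm task; -pflotranin selects the input deck
mpiexec -n $SLURM_NTASKS pflotran -pflotranin my_model.in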

Please consult Pflotran's user guide for details on its usage. Since Pflotran is built on PETSc, it also accepts the usual PETSc command-line options (in particular -help for help, as well as -log_view and -info for monitoring resource consumption and convergence). Please consult the PETSc user guide for their descriptions.
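
For example, a run with performance and convergence monitoring enabled might look as follows (my_model.in is again a placeholder input deck; note that -info is very verbose):

# -log_view prints a performance summary at the end of the run,
# -info additionally enables verbose diagnostic output from PETSc
mpiexec -n $SLURM_NTASKS pflotran -pflotranin my_model.in -log_view -info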

Documentation