Deploying software simulators through containers using Apptainer

General 

This page describes how to install quantum simulation software packages in Apptainer containers and run them on SuperMUC-NG. Docker containers are not supported on SuperMUC-NG; Apptainer, however, can build containers from Docker images and runs them efficiently with minimal overhead on the compute nodes. 

Apptainer is built for research-oriented HPC environments: container images are handled like regular files, no root or superuser rights are required to run them, and communication between the host filesystem and the container is straightforward because both share the same namespace. See the Apptainer Documentation for more details. 

Supported HPC Systems

SuperMUC-NG, SuperMUC-NG Phase 2 

Setting up SuperMUC-NG Environment and Installing Apptainer

Setting up SuperMUC-NG for remote access:

  1. How to connect to SuperMUC-NG through tunnelling and remote forwarding 
    1. Open two terminal windows
    2. In one of them, run the proxy on a defined port: ./Library/Python/3.12/bin/proxy --port 1234
    3. Alternatively, just issue the command runproxy if you have predefined your localhost port and RemoteForward in your .ssh/config file
    4. Then log in to SuperMUC-NG through the tunnel using the defined port: ssh -R 1234:localhost:1234 sng
  2. Next, export the following HTTP proxy options (or put them into your .bash_profile) so that all traffic is routed through localhost:1234:
    1. export http_proxy=localhost:1234
    2. export https_proxy=localhost:1234
    3. export HTTP_PROXY=localhost:1234
    4. export HTTPS_PROXY=localhost:1234
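
The RemoteForward setup used above can be predefined in your .ssh/config file, which is what makes the runproxy shortcut in step 3 work. A minimal sketch (the host alias sng, the login hostname, the username, and port 1234 are placeholders; adapt them to your own account and check the LRZ access documentation for the current login nodes):

```
# ~/.ssh/config (sketch; alias, hostname, user, and port are placeholders)
Host sng
    HostName skx.supermuc.lrz.de
    User <YOUR_SNG_USERNAME>
    # Forward port 1234 on the remote side back to the local proxy on port 1234,
    # matching: ssh -R 1234:localhost:1234 sng
    RemoteForward 1234 localhost:1234
```

With this in place, a plain ssh sng already establishes the reverse tunnel without the explicit -R option.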

For other ways of providing internet access to SuperMUC-NG, see the SuperMUC-NG access documentation.

Installing Apptainer on SuperMUC-NG:

Apptainer is already preinstalled on SuperMUC-NG; you just need to load it as a module:

module use /lrz/sys/share/modules/extfiles
module load apptainer/1.3.1
module load squashfs/4.4

Apptainer is now ready to use.

Building Apptainer on SuperMUC-NG:

Building Apptainer containers requires root (or fakeroot) privileges, which are not available on SuperMUC-NG. Therefore, first build an x86 Apptainer container on your own machine or virtual machine (VM), push the image to a container registry, and then pull it onto SuperMUC-NG to create your desired container. The steps are:

  1. Step 1: Preparing the container locally
    1. Prerequisites
      • Apptainer ≥ 1.3.1
      • Root or --fakeroot privileges (see the official Apptainer documentation for details)
      • A local x86 Linux machine or VM for building containers. This is only necessary if you want to build IntelQS, which requires the x86 architecture. 
      • DockerHub account (for publishing container images)
    2. Building Containers Locally (Required): SuperMUC-NG does not allow root builds. Therefore, all container images must be created on a local x86 environment first. 
      1. Install Apptainer

        Visit the Apptainer Documentation for more details on installation

        # For Rocky Linux / RHEL / CentOS / Fedora
        sudo dnf install apptainer

         

      2. Verify Installation
        apptainer --version
      3. Create Definition File
         Create `container_name.def` describing:
                - Base OS
                - Required system packages
                - Environment modules
                - Runtime configuration
      4. Build the Container Image
        # With root privileges,
        apptainer build container_name.sif container_name.def
        
        # Or using fakeroot
        apptainer build --fakeroot container_name.sif container_name.def
        The output image is `container_name.sif`.   
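As an illustration of the four parts a definition file typically contains (step 3 above), a minimal skeleton might look like this. The package names and paths are placeholders, not requirements of any particular simulator:

```
Bootstrap: docker
From: ubuntu:22.04                # Base OS

%post
    # Required system packages (placeholders; pick what your simulator needs)
    apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake git
    apt-get clean && rm -rf /var/lib/apt/lists/*

%environment
    # Runtime configuration: variables set when the container runs
    export PATH=/opt/mysim/bin:$PATH

%runscript
    exec /bin/bash "$@"
```

A complete, working example for IntelQS is given further below.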
  2. Step 2: Pushing and Pulling the Container to SuperMUC-NG
    1. Pushing Images to DockerHub
      1. Login
        apptainer registry login --username <USERNAME> docker://registry-1.docker.io
      2. Push Image
        apptainer push container_name.sif oras://registry-1.docker.io/<USERNAME>/container_name:latest


    2. Pulling and Running Containers on SuperMUC-NG
      1. Load Required Modules
        module use /lrz/sys/share/modules/extfiles
        module load apptainer/1.3.1 squashfs/4.4
        module unload intel-mpi/2019-intel || true
        module load openmpi/4.1.2-gcc11
      2. Pull Images
        apptainer pull container_name.sif oras://registry-1.docker.io/<USERNAME>/container_name:latest

Example Container with MPI Support:

 IntelQS Container

  1. Create the IntelQS definition file (intelqs_definition_file.def):
    IntelQS with MPI Support
    Bootstrap: docker
    From: ubuntu:22.04
    
    %labels
        app "Intel-QS (MPI) dual-mode (host or container OpenMPI)"
        version "2.2-fixed"
    
    %environment
        export OMPI_DIR=/opt/ompi
        export IQS_PREFIX=/opt/iqs
        export PATH=$OMPI_DIR/bin:$IQS_PREFIX/bin:$PATH
        export LD_LIBRARY_PATH=$OMPI_DIR/lib:$IQS_PREFIX/lib:$LD_LIBRARY_PATH
        export OMP_NUM_THREADS=1
    
    %post
        set -eux
        export DEBIAN_FRONTEND=noninteractive
    
        # --- Runtime libraries (covering ldd deps + debugging tools) ---
        apt-get update && apt-get install -y --no-install-recommends \
            bash \
            libc6 \
            libc-bin \
            libstdc++6 \
            libgcc-s1 \
            libgomp1 \
            libtinfo6 \
            libncursesw6 \
            libreadline8 \
            zlib1g \
            libnuma1 \
            libpciaccess0 \
            libxml2 \
            liblzma5 \
            libatomic1 \
            coreutils \
            binutils \
            file \
            vim \
            strace \
            gdb \
            python3 \
            python3-pip \
            && apt-get clean && rm -rf /var/lib/apt/lists/*
    
        # --- Build tools + HPC extras (for compiling inside container) ---
        apt-get update && apt-get install -y --no-install-recommends \
            build-essential \
            g++ gcc \
            cmake \
            ninja-build \
            git \
            wget curl tar ca-certificates \
            libomp-dev \
            libboost-all-dev \
            python3-pybind11 \
            hwloc \
            libevent-2.1-7 \
            numactl \
            && apt-get clean && rm -rf /var/lib/apt/lists/*
    
        # --- Optional: build OpenMPI inside container ---
        export OMPI_DIR=/opt/ompi
        export OMPI_VERSION=4.1.2   # match host version if possible
        mkdir -p /tmp/ompi $OMPI_DIR
        cd /tmp/ompi
        wget -q https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-$OMPI_VERSION.tar.bz2
        tar -xjf openmpi-$OMPI_VERSION.tar.bz2
        cd openmpi-$OMPI_VERSION
        ./configure --prefix=$OMPI_DIR
        make -j"$(nproc)"
        make install
        rm -rf /tmp/ompi
    
        # --- Fix: ensure /lib64 loader path exists ---
        mkdir -p /lib64
        if [ ! -e /lib64/ld-linux-x86-64.so.2 ]; then
            ln -sf /lib/x86_64-linux-gnu/ld-2.35.so /lib64/ld-linux-x86-64.so.2
        fi
    
        # --- Self-check: ensure bash runs ---
        echo "=== Checking bash runtime in container ==="
        /bin/bash --version
        ldd /bin/bash || true
    
        # --- Optional: build Intel-QS inside container (container OpenMPI mode) ---
        # export IQS_PREFIX=/opt/iqs
        # git clone --depth=1 https://github.com/intel/intel-qs.git /tmp/intel-qs
        # cmake -S /tmp/intel-qs -B /tmp/intel-qs/build \
        #     -DCMAKE_INSTALL_PREFIX=$IQS_PREFIX \
        #     -DCMAKE_BUILD_TYPE=Release \
        #     -DIqsMPI=ON -DBuildExamples=ON \
        #     -DCMAKE_C_COMPILER=$OMPI_DIR/bin/mpicc \
        #     -DCMAKE_CXX_COMPILER=$OMPI_DIR/bin/mpicxx
        # cmake --build /tmp/intel-qs/build -j"$(nproc)"
        # cmake --install /tmp/intel-qs/build
        # rm -rf /tmp/intel-qs
    
    %runscript
        exec /bin/bash "$@"
    
    
    
  2. Building the Container:
    Building IntelQS Container
    # Building the container using the definition file
    apptainer build intelqs_container.sif intelqs_definition_file.def
  3. Pushing Images to DockerHub:
    Push Image to DockerHub
    # Replace <USERNAME> with your DockerHub username
    apptainer push intelqs_container.sif oras://registry-1.docker.io/<USERNAME>/intelqs_container:latest
    
    
  4. Login to SuperMUC-NG and load the following modules:
    Loading Required Modules
    module use /lrz/sys/share/modules/extfiles
    module load apptainer/1.3.1 squashfs/4.4
    
    # Unload intel-mpi/2019-intel if you wish to use a different MPI flavour, in this case OpenMPI
    # Note that the MPI version installed in the container must match the one available on SuperMUC-NG
    module unload intel-mpi/2019-intel || true
    module load openmpi/4.1.2-gcc11
  5. Pull Image:
    Pulling Image
    apptainer pull intelqs_container.sif oras://registry-1.docker.io/<USERNAME>/intelqs_container:latest
  6. Running IntelQS Inside the Container:
    Running the Container and binding a workspace
    apptainer shell --bind "$PWD:/workspace" intelqs_container.sif
  7. Cloning and building IntelQS with MPI:
    Cloning and Building IntelQS with MPI in the workspace directory for access via the container
    cd /workspace
    git clone https://github.com/intel/intel-qs.git
    cd intel-qs
    mkdir build && cd build
    
    cmake .. -DIqsMPI=ON
    make -j4
    
    
  8. Bash script to build intelqs and its examples:
    build_intelqs.sh
    #!/bin/bash                           
    
    # Script to build intelqs and its examples.
    # Modify to suit your need
    
    set -euo pipefail       # Exit on error, unset vars, or failed pipe
    
    echo ""
    echo "----------------------------------------"
    echo "Loading required modules..."
    echo "----------------------------------------"
    echo ""
    
    module use /lrz/sys/share/modules/extfiles
    module load apptainer/1.3.1 squashfs/4.4
    
    # Use OpenMPI for better portability with Slurm. 
    # Unload default IntelMPI on SNG and load OpenMPI 
    # (host and container versions must match).
    module unload intel-mpi/2019-intel || true
    
    module load cmake/3.21.4 gcc/11.2.0 ninja/1.13.1  # Build toolchain
    module load zlib hwloc                       # Extra runtime dependencies
    
    # === Select MPI mode ===
    # USE_CONTAINER_MPI=0 → use host-provided MPI (bind host libs into container).
    # USE_CONTAINER_MPI=1 → use container-provided MPI (ignore host MPI libs).
    
    # Workspace paths
    WORKDIR=$PWD
    IMAGE="$WORKDIR/intelqs_container.sif"      # Container image (needed in container-MPI mode)
    IQS_SRC_DIR="$WORKDIR/intel-qs"             # Source directory for Intel-QS
    IQS_BUILD_DIR="$IQS_SRC_DIR/build"          # Build directory
    IQS_EXE="$IQS_SRC_DIR/examples/bin/expect_value_test.exe"  # Example binary (CHANGE to whatever binary you want to verify)
    
    USE_CONTAINER_MPI=${USE_CONTAINER_MPI:-0}   # 0 = host MPI, 1 = container MPI
    
    if [[ "$USE_CONTAINER_MPI" -eq 0 ]]; then
        echo ">>> Using HOST OpenMPI"
        module load openmpi/4.1.2-gcc11
    else
        echo ">>> Using CONTAINER OpenMPI"
        module unload openmpi/4.1.2-gcc11 || true
    fi
    
    
    # === Host OpenMPI mode ===
    if [[ "$USE_CONTAINER_MPI" -eq 0 ]]; then
        MPICXX=$(readlink -f "$(command -v mpicxx)")
        MPI_BIN=$(dirname "$MPICXX")
        MPI_ROOT=$(dirname "$MPI_BIN")
        MPI_LIB="$MPI_ROOT/lib"
    
        echo ">>> Using HOST OpenMPI"
        echo "MPICXX = $MPICXX"
        echo "MPI_LIB = $MPI_LIB"
    
        # Extend LD_LIBRARY_PATH with MPI lib + system defaults
        export LD_LIBRARY_PATH="$MPI_LIB:$LD_LIBRARY_PATH:/usr/lib64:/lib64:/usr/lpp/mmfs/lib"
    
        echo "LD_LIBRARY_PATH = $LD_LIBRARY_PATH"
    fi
    
    echo "----------------------------------------"
    echo "Cloning & building Intel-QS"
    echo "----------------------------------------"
    
    # Clone source if not already present
    if [[ ! -d "$IQS_SRC_DIR" ]]; then
      git clone --depth=1 https://github.com/intel/intel-qs.git "$IQS_SRC_DIR"
    fi
    
    # Create fresh build directory
    rm -rf "$IQS_BUILD_DIR"
    mkdir -p "$IQS_BUILD_DIR"
    cd "$IQS_BUILD_DIR"
    
    if [[ "$USE_CONTAINER_MPI" -eq 0 ]]; then
        # ✅ build with host mpicxx
        cmake -G Ninja \
              -DIqsMPI=ON -DIqsUtest=OFF -DIqsPython=OFF -DIqsNoise=OFF -DBuildExamples=ON \
              -DCMAKE_CXX_COMPILER=$(which mpicxx) \
              ..
    else
        # ✅ build inside container with its mpicxx
        apptainer exec --cleanenv --no-home "$IMAGE" \
          cmake -G Ninja -S .. -B . \
              -DIqsMPI=ON -DIqsUtest=OFF -DIqsPython=OFF -DIqsNoise=OFF -DBuildExamples=ON \
              -DCMAKE_CXX_COMPILER=/opt/ompi/bin/mpicxx
    fi
    
    cmake --build . -j$(nproc)
    
    # Verify binary exists
    if [[ ! -x "$IQS_EXE" ]]; then
      echo "ERROR: example not built!"
      exit 1
    fi
    
    
    echo ""
    echo "----------------------------------------"
    echo "Build Completed Successfully"
    echo "----------------------------------------"
    echo ""
    
    
    Run Bash Script
    ./build_intelqs.sh
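Both build_intelqs.sh and the SLURM script in the next step select the MPI mode through the USE_CONTAINER_MPI environment variable, using bash's ${VAR:-default} expansion. A standalone sketch of that selection logic (runnable anywhere; no modules or container required):

```shell
#!/bin/bash
# Same default-selection pattern as in build_intelqs.sh:
# unset or 0 -> host MPI; any other value -> container MPI.
USE_CONTAINER_MPI=${USE_CONTAINER_MPI:-0}

if [[ "$USE_CONTAINER_MPI" -eq 0 ]]; then
    MPI_MODE="host"
else
    MPI_MODE="container"
fi
echo "MPI mode: $MPI_MODE"
```

So USE_CONTAINER_MPI=1 ./build_intelqs.sh builds against the container's OpenMPI, while a plain ./build_intelqs.sh defaults to the host's.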
  9. SLURM script to run the container and choose whether to execute the examples using the host MPI or the container MPI:
    SLURM Script to run and execute programs using Host or Container MPI
    #!/bin/bash                           
    #SBATCH --job-name=iqs-mpi-run              # Job name (appears in job queue)
    #SBATCH -o ./iqs_mpi_run_%j.out             # Standard output file (%j = job ID)
    #SBATCH -e ./iqs_mpi_run_%j.err             # Standard error file (%j = job ID)
    #SBATCH --nodes=1                           # Number of nodes to allocate
    #SBATCH --ntasks=4                          # Total number of MPI tasks (processes)
    #SBATCH --time=00:10:00                     # Maximum walltime (hh:mm:ss)
    #SBATCH --account=<ACCOUNT>                 # Project/account ID to charge resources
    #SBATCH --partition=<PARTITION>             # Partition/queue to run on (use appropriate SNG partition: test, micro, general, large etc)
    #SBATCH --mail-type=NONE                    # When to send email notifications (e.g., BEGIN, END, FAIL)
    #SBATCH --mail-user=NONE                    # Email address for job notifications
    
    # Users can choose a more fitting partition on the SNG
    
    set -euo pipefail
    
    echo ""
    echo "----------------------------------------"
    echo "Loading required modules..."
    echo "----------------------------------------"
    echo ""
    
    module use /lrz/sys/share/modules/extfiles
    module load apptainer/1.3.1 squashfs/4.4
    
    # Use OpenMPI for better portability with Slurm. 
    # Unload default IntelMPI on SNG and load OpenMPI 
    # (host and container versions must match).
    module unload intel-mpi/2019-intel || true
    
    # === Select MPI mode ===
    # USE_CONTAINER_MPI=0 → use host-provided MPI (bind host libs into container).
    # USE_CONTAINER_MPI=1 → use container-provided MPI (ignore host MPI libs).
    
    USE_CONTAINER_MPI=${USE_CONTAINER_MPI:-0}   # 0 = host MPI, 1 = container MPI
    
    if [[ "$USE_CONTAINER_MPI" -eq 0 ]]; then
        echo ">>> Using HOST OpenMPI"
        module load openmpi/4.1.2-gcc11
    else
        echo ">>> Using CONTAINER OpenMPI"
        module unload openmpi/4.1.2-gcc11 || true
    fi
    
    # Workspace
    WORKDIR=$PWD
    IMAGE=$WORKDIR/intelqs_container.sif    # Change to the correct image file
    IQS_SRC_DIR="$WORKDIR/intel-qs"           # Change to the appropriate directory of your program
    IQS_EXE="$IQS_SRC_DIR/examples/bin/expect_value_test.exe" # Change to the appropriate executable you want to run
    
    # === Host OpenMPI mode ===
    if [[ "$USE_CONTAINER_MPI" -eq 0 ]]; then
        MPICXX=$(readlink -f "$(command -v mpicxx)")
        MPI_BIN=$(dirname "$MPICXX")
        MPI_ROOT=$(dirname "$MPI_BIN")
        MPI_LIB="$MPI_ROOT/lib"
    
        echo ">>> Using HOST OpenMPI"
        echo "MPICXX = $MPICXX"
        echo "MPI_LIB = $MPI_LIB"
    
        # Load runtime dependencies via LRZ modules (instead of hardcoding paths)
        module load zlib
        module load hwloc
    
        # Extend LD_LIBRARY_PATH with MPI lib + system defaults
        export LD_LIBRARY_PATH="$MPI_LIB:$LD_LIBRARY_PATH:/usr/lib64:/lib64:/usr/lpp/mmfs/lib"
    
        echo "LD_LIBRARY_PATH = $LD_LIBRARY_PATH"
    fi
    
    echo ""
    echo "----------------------------------------"
    echo "Running Intel-QS"
    echo "----------------------------------------"
    echo ""
    
    if [[ "$USE_CONTAINER_MPI" -eq 0 ]]; then
        echo ">>> Running with HOST OpenMPI + Apptainer for libs"
    
        echo ">>> Dependency check (host loader)"
        apptainer exec --cleanenv --no-home \
          --bind "$WORKDIR:/workspace" \
          --bind /usr/lib64/slurm \
          "$IMAGE" /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 \
          --list /workspace/intel-qs/examples/bin/expect_value_test.exe | grep "not found" || echo "All dependencies OK"
    
        echo ">>> Running with mpirun (host loader)"
        mpirun -np $SLURM_NTASKS -x LD_LIBRARY_PATH \
          apptainer exec --cleanenv --no-home \
          --bind "$WORKDIR:/workspace" \
          --bind /usr/lib64/slurm \
          "$IMAGE" /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 \
          /workspace/intel-qs/examples/bin/expect_value_test.exe
    
        # NOTE: /usr/lib64/slurm binds SLURM PMI libraries inside container
    else
        echo ">>> Running with CONTAINER OpenMPI"
        apptainer exec --cleanenv --no-home "$IMAGE" \
          /opt/ompi/bin/mpirun -np $SLURM_NTASKS \
          /opt/iqs/examples/bin/expect_value_test.exe
    fi
    
    echo "----------------------------------------"
    echo "Job completed"
    echo "----------------------------------------"
    
    
    Submit the SLURM Script (run_intelqs.slurm)
    sbatch run_intelqs.slurm

Final Notes:

You can follow this description to install other simulators. Pay attention to the directory structure used in the example and adapt it to your own directories.