General Information

The Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles.

Licensing

IT-Physik holds an installation and maintenance license for VASP. This license allows us to provide a VASP installation to eligible users. However, before access to the installation is granted, every user has to be approved individually. Your usage of VASP is bound to your group's VASP license agreement; VASP GmbH only offers group licenses.

Access to the VASP directory or module will only be granted after successful verification of your membership in your group's license. Note that each major version of VASP may require a separate license.

Obtaining Access

To obtain access to VASP, the following steps are necessary:

  1. Instruct your group license administrator to add your email address (which needs to be associated with your RZ-account) to the license in the VASP Portal.
  2. Contact IT-Physik and specify the email address from the previous step.
  3. Wait until your request has been processed. You will receive a reply via email.

Running VASP

Lmod Modules

List all available VASP modules:
ml spider vasp

We currently only provide modules for VASP6. 
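
To see the details of a specific version and load it, the pattern looks like this (the version shown is the one used in the job template below; check ml spider vasp for the versions actually installed):

ml spider vasp6/6.4.3
ml vasp6/6.4.3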

The following environment variables will be set:

  • OMP_NUM_THREADS will correspond to the number given by --cpus-per-task
  • OMP_STACKSIZE=1G
  • OMP_PLACES=cores
  • OMP_PROC_BIND=true

and ulimit -s unlimited will be executed when the module is loaded. It is therefore important to load the module inside your Slurm job script, so that these settings are in effect for the calculation.
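
A minimal sketch of what this looks like inside a job script (the echoed values are only illustrative and assume --cpus-per-task=4):

module purge
module load vasp6/6.4.3
echo $OMP_NUM_THREADS   # 4, taken from --cpus-per-task
echo $OMP_PLACES        # cores
ulimit -s               # unlimited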

All modules provide the following functionality:

  • MPI support
  • OpenMP support
  • libXC support
  • wannier90 support
  • DFT-D4 support
  • HDF5 support

Note that the wannier90 executables provided with VASP are not MPI-parallelized (a VASP requirement)! If you need MPI-capable wannier90 executables, load wannier90/3.1.0-impi-intel2023.2 or look up the available versions using ml spider wannier90. Do not load MPI-parallelized wannier90 modules when running VASP.
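
A minimal sketch of a standalone, MPI-parallel wannier90 run in its own job (the seed name wannier90 is only an example; remember not to load this module together with VASP):

module purge
module load wannier90/3.1.0-impi-intel2023.2
srun wannier90.x wannier90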

Slurm Job Template for CPU Jobs

#!/bin/bash
#SBATCH --job-name=runvasp
#SBATCH --partition=epyc
#SBATCH --tasks-per-node=32
#SBATCH --cpus-per-task=1
#SBATCH --nodes=1
#SBATCH --mem-per-cpu=4G
#SBATCH --time=7-0

module purge
module load vasp6/6.4.3

# Write STOPCAR with LSTOP 1800 seconds (30 minutes) before job timeout
(
    sleep $((SLURM_JOB_END_TIME - $(date +%s) - 1800)) || exit  # do not write STOPCAR if the watchdog is killed
    echo "Writing STOPCAR with LSTOP"
    echo "LSTOP = .TRUE." > STOPCAR
) &
STOPCAR1=$!

# Write STOPCAR with LABORT 600 seconds before job timeout
(
    sleep $((SLURM_JOB_END_TIME - $(date +%s) - 600)) || exit  # do not write STOPCAR if the watchdog is killed
    echo "Writing STOPCAR with LABORT"
    echo "LABORT = .TRUE." > STOPCAR
) &
STOPCAR2=$!

srun vasp_std

# Cancel the STOPCAR watchdogs and clean up a leftover STOPCAR
pkill -P $STOPCAR1
pkill -P $STOPCAR2
[[ -f STOPCAR ]] && rm -v STOPCAR

Slurm Job Template for GPU Jobs

We currently do not provide GPU-capable VASP modules. A GPU-capable module is planned once VASP 6.5.0 is released.

Restarting gracefully terminated calculations

If a calculation was stopped gracefully via STOPCAR, ISTART=2 should be added to the INCAR file (see also the VASP documentation) and the resubmitted job will continue after reading the WAVECAR from the previous calculation.
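
A minimal INCAR sketch for such a restart (all other tags stay as in the original calculation; the WAVECAR from the previous run must be present in the job directory):

ISTART = 2    ! continuation job, read orbitals from WAVECAR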

Performance considerations

  • Familiarize yourself with VASP's NCORE parameter, which can greatly affect performance. Do benchmarks before attempting large-scale calculations (see the sketch below)!
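
A minimal INCAR sketch for such a benchmark; the value 8 is only an illustrative starting point, and sensible choices depend on your system and the number of cores per node:

NCORE = 8    ! number of cores working on one orbital; benchmark e.g. 2, 4, 8, 16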

Hybrid MPI/OpenMP

It is almost always optimal to stick to pure MPI parallelization.

There are only a few cases in which hybrid MPI/OpenMP parallelization is beneficial. Also, when OpenMP is used, NCORE is reset to 1.
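
A minimal sketch of the Slurm settings for a hybrid run, assuming the epyc partition from the CPU template above; the split of 8 MPI ranks × 4 OpenMP threads per node is only an example and should be benchmarked:

#SBATCH --partition=epyc
#SBATCH --nodes=1
#SBATCH --tasks-per-node=8
#SBATCH --cpus-per-task=4    # the module sets OMP_NUM_THREADS to this value

module purge
module load vasp6/6.4.3

srun vasp_std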


Support

If you have any problems with VASP, please contact the IT-Physik team (preferred) or the HPC-Servicedesk.

Also, if you have improvements to this documentation that other users could benefit from, please reach out!