General Information
CP2K is a quantum chemistry and solid state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems. CP2K provides a general framework for different modeling methods such as DFT using the mixed Gaussian and plane waves approaches GPW and GAPW. Supported theory levels include DFTB, LDA, GGA, MP2, RPA, semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, etc.), and classical force fields (AMBER, CHARMM, etc.). CP2K can do simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimization, and transition state optimization using NEB or the dimer method. A detailed overview of features can be found in the CP2K documentation.
CP2K is written in Fortran 2008 and can be run efficiently in parallel using a combination of multi-threading, MPI, and CUDA. It is freely available under the GPL license.
Running CP2K
Discover available CP2K versions
```bash
ml spider cp2k
```
Running CP2K on CPU
For CP2K the recommendation is to request resources relative to tasks (the `*-per-*=` options) and to first increase `--tasks-per-node` up to 64 before increasing `--nodes`, since inter-node communication is slower than intra-node communication.
```bash
#!/usr/bin/env bash
#SBATCH --job-name=cp2k
#SBATCH --partition=epyc
#SBATCH --nodes=1
#SBATCH --tasks-per-node=64
#SBATCH --cpus-per-task=2
#SBATCH --mem-per-cpu=4G
#SBATCH --mail-type=END,INVALID_DEPEND,TIME_LIMIT
# replace the email with your personal one in order to receive mail notifications:
#SBATCH --mail-user=noreply@physik.uni-augsburg.de
#SBATCH --time=1-0

ml purge
ml load cp2k/2024.1

srun cp2k.psmp < job.in > job-${SLURM_JOB_ID}.out
```
There is also a pure OpenMP binary `cp2k.ssmp` available, but its use case is quite limited in an HPC context (at most 64 CPU cores, i.e. a single node). All other installed binary names are symlinks to the psmp and ssmp versions.
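For completeness, a single-node, pure-OpenMP job with `cp2k.ssmp` could look like the following sketch. It mirrors the CPU script above; the module version and input file name are assumptions taken from that example:

```shell
#!/usr/bin/env bash
#SBATCH --job-name=cp2k-ssmp
#SBATCH --partition=epyc
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --cpus-per-task=64   # ssmp scales via threads only, so it cannot go beyond one node
#SBATCH --mem-per-cpu=4G
#SBATCH --time=1-0

ml purge
ml load cp2k/2024.1

# the ssmp binary uses OpenMP threads instead of MPI ranks
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
cp2k.ssmp < job.in > job-${SLURM_JOB_ID}.out
```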
Running CP2K on GPU
```bash
#!/usr/bin/env bash
#SBATCH --job-name=cp2k-gpu
#SBATCH --partition=epyc
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --cpus-per-task=32
#SBATCH --mem-per-cpu=4G
#SBATCH --gpus-per-task=1
#SBATCH --mail-type=END,INVALID_DEPEND,TIME_LIMIT
# replace the email with your personal one in order to receive mail notifications:
#SBATCH --mail-user=noreply@physik.uni-augsburg.de
#SBATCH --time=1-0

ml purge
ml load cp2k/2024.1-ompi-gcc11-cuda12.2

srun cp2k.psmp < job.in > job-${SLURM_JOB_ID}.out
```
Benchmarks (CPU-only)
Single-Node (varying OpenMP threads)
H2O-n: a system of n water molecules (3n atoms, 8n electrons) in a cubic cell with an edge length of 31.3 Å; MD is run for 10 steps.
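The system sizes follow directly from the 3n/8n rule above; as a quick check for the three benchmark sizes used below:

```python
# atoms and electrons per H2O-n benchmark: 3 atoms and 8 electrons per water molecule
for n in (64, 256, 1024):
    atoms, electrons = 3 * n, 8 * n
    print(f"H2O-{n}: {atoms} atoms, {electrons} electrons")
# the largest system, H2O-1024, thus has 3072 atoms and 8192 electrons
```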
Benchmark | Tasks-per-Node | Nodes | CPUs-per-Task | Time H2O-64 (s) | Time H2O-256 (s) | Time H2O-1024 (s) |
---|---|---|---|---|---|---|
H2O-{64,256,1024} | 128 | 1 | 1 | 24.358 | 186.062 | 2899.579 |
H2O-{64,256,1024} | 64 | 1 | 2 | 17.791 | 156.640 | 2640.062 |
H2O-{64,256,1024} | 32 | 1 | 4 | 18.676 | 164.161 | 2626.582 |
H2O-{64,256,1024} | 16 | 1 | 8 | 19.662 | 169.290 | 3698.717 |
H2O-{64,256,1024} | 8 | 1 | 16 | 27.585 | 240.232 | 7278.237 |
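As a quick reading of the table, the following sketch compares the pure-MPI configuration (128 tasks × 1 thread) with the hybrid configuration recommended above (64 tasks × 2 threads); the wall times are copied from the rows above:

```python
# wall times in seconds, copied from the single-node table
pure_mpi = {"H2O-64": 24.358, "H2O-256": 186.062, "H2O-1024": 2899.579}  # 128 tasks x 1 thread
hybrid   = {"H2O-64": 17.791, "H2O-256": 156.640, "H2O-1024": 2640.062}  # 64 tasks x 2 threads

for system in pure_mpi:
    speedup = pure_mpi[system] / hybrid[system]
    print(f"{system}: 64x2 is {speedup:.2f}x faster than 128x1")
```

For all three system sizes the hybrid layout is faster than pure MPI on a single node, which motivates the `--cpus-per-task=2` choice in the CPU script above.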
Multi-Node (varying nodes)
Benchmark | Tasks-per-Node | Nodes | CPUs-per-Task | Time (s) | Efficiency |
---|---|---|---|---|---|
H2O-1024 | 64 | 1 | 2 | 2640.062 | 100.00% |
H2O-1024 | 64 | 2 | 2 | 1591.700 | 82.93% |
H2O-1024 | 64 | 3 | 2 | 1478.285 | 59.53% |
H2O-1024 | 64 | 4 | 2 | 1043.409 | 63.26% |
H2O-1024 | 32 | 4 | 4 | 1059.455 | 62.30% |
H2O-1024 | 64 | 8 | 2 | 748.612 | 44.08% |
H2O-1024 | 32 | 8 | 4 | 722.932 | 45.65% |
H2O-1024 | 64 | 16 | 2 | 2070.555 | 7.97% |
H2O-1024 | 32 | 16 | 4 | 582.709 | 28.32% |
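The efficiency column is the usual strong-scaling efficiency relative to the single-node run, T(1) / (N × T(N)). A small sketch reproducing two of the table entries:

```python
# strong-scaling efficiency relative to the 1-node run (times in seconds, from the table)
t1 = 2640.062  # H2O-1024 on 1 node (64 tasks x 2 threads)

def efficiency(nodes, t_n):
    """Parallel efficiency in percent: ideal time (t1/nodes) divided by measured time."""
    return t1 / (nodes * t_n) * 100

print(f"2 nodes:  {efficiency(2, 1591.700):.2f}%")   # matches the table's 82.93%
print(f"16 nodes: {efficiency(16, 582.709):.2f}%")   # matches the table's 28.32%
```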
Support
If you have any problems with CP2K, please contact the IT-Physik team (preferred) or the HPC-Servicedesk.