General Information

GAUSSIAN is a general-purpose computational chemistry software package, initially released in 1970 as Gaussian 70 by John Pople and his research group at Carnegie Mellon University. The name originates from Pople's use of Gaussian-type orbitals instead of Slater-type orbitals, a choice made to speed up molecular electronic structure calculations, in particular Hartree–Fock calculations, on the limited computer hardware of the time.

License Restrictions

  • GAUSSIAN may only be used by users or groups holding a valid license.
  • GAUSSIAN may only be used for academic and teaching purposes. Additionally, using the software to compete, directly or indirectly, with GAUSSIAN is strictly forbidden by the license conditions.
  • All scientific articles based on the usage of GAUSSIAN must contain suitable citations.
  • GAUSSIAN software installed on the HPC systems may only be used on these systems. It is strictly forbidden to copy any files or folders from the software installation folder.

Obtaining Access

The Institute of Physics currently holds a site license for

  • GAUSSIAN09 version C.01
  • GAUSSIAN16 version C.02

According to the license agreement, GAUSSIAN is restricted to users located at the Physics Institute. Contact the IT-Physik team to get access.

Running GAUSSIAN

Lmod modules

Obtaining a list of available GAUSSIAN versions

module spider gaussian

GAUSSIAN modules may only be loaded by users or groups holding a valid license who are also members of the granting IdM group rzhpc-gaussian-user.
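To see how a particular version has to be loaded, you can also ask Lmod about that specific version (the module names match those used in the job scripts further below), e.g.

module spider gaussian/g16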

Setting resource limits

CPU

If you choose to advise GAUSSIAN to use a certain number of CPU cores via a Link-0 line (e.g. %NProcShared=8), make sure that it matches the --cpus-per-task sbatch setting.

Either specify the number of CPUs in the GAUSSIAN input file and make sure it matches the Slurm request, or remove the %NProcShared line entirely to rely on the automatic detection described under Fallback below (note that this is not compatible with the simplified queuing scripts).
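For example (the values are purely illustrative), an 8-core job pairs the following two settings:

In the GAUSSIAN input file (Link-0 section):
%NProcShared=8

In the job script:
#SBATCH --cpus-per-task=8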

Memory

If you choose to advise GAUSSIAN to use a certain amount of memory via a Link-0 line (e.g. %Mem=8GB), make sure that it is somewhat less than what you requested via the --mem-per-cpu (times the number of CPU cores) or --mem sbatch setting, since GAUSSIAN itself also needs a bit of extra memory. The default value is 3GB.
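As a sketch with purely illustrative numbers: requesting 8 cores at 1G each gives 8GB in total, so asking GAUSSIAN for about 7GB leaves roughly 1GB of headroom for its own overhead.

In the job script:
#SBATCH --cpus-per-task=8
#SBATCH --mem-per-cpu=1G

In the GAUSSIAN input file (Link-0 section):
%NProcShared=8
%Mem=7GB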

Disk

It is recommended to define the maximum amount of disk space the program is allowed to use via the keyword MaxDisk (e.g. MaxDisk=32GB as part of the route section), as the default value is only 2GB.
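For example, reusing the method and basis set from the sample input further below, the route section could read:

#P HF/6-31G(d) scf=verytight MaxDisk=32GB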

Møller–Plesset Methods (MP2, MP3, MP4)

For link 906 (MPn methods, n = 2, 3, 4), excessive amounts of disk space will be used for writing the integral data to the read-write (.rwf) file when MaxDisk is not defined explicitly.

We therefore recommend setting the option FullDirect in round brackets after the MPn method keyword in the route section, e.g. MP2(FullDirect), if possible.
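A corresponding route section (the basis set is chosen only for illustration) might then look like this:

#P MP2(FullDirect)/6-31G(d)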

Fallback

To prevent inefficient jobs when the %NProcShared or %Mem Link-0 lines are missing, the module will, provided it is loaded within a Slurm script (which is the recommended way), automatically deduce the number of CPU cores and the available memory from Slurm variables and set the following environment variables accordingly.

Variable     Description
GAUSS_PDEF   Number of CPU cores from the --cpus-per-task sbatch setting
GAUSS_MDEF   95% of the available RAM from the --mem-per-cpu or --mem sbatch setting

Note that you may always override these values using the %NProcShared or %Mem Link-0 lines, which take priority.

The fallback solution is not compatible with the simplified queuing scripts below; therefore, Link-0 lines for %NProcShared and %Mem should always be included in the input file.
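If you want to check which values the fallback has deduced, you can print the two variables in your job script right after loading the module, e.g.

module load gaussian/g16
echo "GAUSS_PDEF=$GAUSS_PDEF  GAUSS_MDEF=$GAUSS_MDEF"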

Input File

GAUSSIAN is rather strict regarding the format of input files. Make sure empty lines are placed at all appropriate places, especially the blank line at the end of the input file (which unfortunately is stripped by Confluence in this example; see the check after the sample input below).

Sample GAUSSIAN input file formaldehyde.inp
%NProcShared=4
%Mem=4GB
%chk=formal.chk
#P HF/6-31G(d) scf=verytight

test1 HF/6-31G(d) sp formaldehyde

0 1
C1
O2  1  r2
H3  1  r3  2  a3
H4  1  r4  2  a4  3  d4

r2=1.20
r3=1.0
r4=1.0
a3=120.
a4=120.
d4=180.
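If you are unsure whether your copy of the input file still ends correctly, the following shell one-liner (a sketch; it only guarantees a final newline, add a further blank line by hand if GAUSSIAN still complains about the end of the file) appends one if it is missing:

[[ -n "$(tail -c 1 formaldehyde.inp)" ]] && echo >> formaldehyde.inp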

Notes on parallelization and performance

At the moment, only SMP parallelization of GAUSSIAN using the --cpus-per-task sbatch option is supported. Up to 128 CPU cores may be requested this way, rendering multi-node parallelization unnecessary. There is no license available for multi-node parallelization (Linda). Therefore keep the values of --ntasks and --nodes at 1 at all times.

Note on performance

We do not recommend using more than 64 cores for a GAUSSIAN calculation. Most of the time, even fewer cores are enough for small molecules! If you still do, then at least have the courtesy to run benchmarks and verify whether more than 64 cores are actually worth it!

Furthermore, we recommend using the --cores-per-socket sbatch option and setting it equal to --cpus-per-task for optimal performance. This ensures that all cores are on the same socket and avoids slowdowns due to larger inter-core latencies.

General purpose sbatch template


Jobscript g16.sl
#!/usr/bin/env bash
#SBATCH --job-name="g16"
#SBATCH --ntasks=1
# MAKE SURE THE NEXT LINE IS IN SYNC WITH INPUTFILE!
#SBATCH --cpus-per-task=4
# make sure all cores are on the same socket!
#SBATCH --cores-per-socket=4
#SBATCH --partition=epyc
#SBATCH --nodes=1
#SBATCH --mem-per-cpu=1G
#SBATCH --time=1-0

# The following two lines are important!
module purge
module load gaussian/g16

if [[ ! -f "$1" ]]; then
    echo "File $1 does not exist!"
    exit 1
fi

JOBFILE="$(basename "$1")"

# Consider using /dev/shm for write-intensive Jobs like MPx
export GAUSS_SCRDIR=/tmp

echo "- Calculation:    Gaussian16 ($OMP_NUM_THREADS CPU)"
echo "- Input File:     $JOBFILE"
echo "- Host machine:   $HOSTNAME"
echo "- I/O directory:  $PWD"
echo "- Scratch dir:    $GAUSS_SCRDIR"

JOBFILE_BASE=${JOBFILE%.*}

# move an existing read-write file from a previous run into the scratch directory
[[ -f "$JOBFILE_BASE.rwf" ]] && mv "$JOBFILE_BASE.rwf" "$GAUSS_SCRDIR"

# stop GAUSSIAN 15 minutes (900 s) before the job's end time so the post-processing below can still run
timeout $((SLURM_JOB_END_TIME - $(date +%s) - 900)) g16 "$JOBFILE" "$JOBFILE_BASE.out"

# move any cube files back from the scratch directory
compgen -G "$GAUSS_SCRDIR/*.cub" > /dev/null && mv -v "$GAUSS_SCRDIR"/*.cub .
# convert the binary checkpoint file into a formatted checkpoint file
[[ -f "$JOBFILE_BASE.chk" ]] && formchk "$JOBFILE_BASE.chk"

The sbatch script needs the input file as an argument:

sbatch g16.sl formaldehyde.inp

Jobscript g09.sl
#!/usr/bin/env bash
#SBATCH --job-name="g09"
#SBATCH --ntasks=1
# MAKE SURE THE NEXT LINE IS IN SYNC WITH INPUTFILE!
#SBATCH --cpus-per-task=4
# make sure all cores are on the same socket!
#SBATCH --cores-per-socket=4
#SBATCH --partition=epyc
#SBATCH --nodes=1
#SBATCH --mem-per-cpu=1G
#SBATCH --time=1-0

# The following two lines are important!
module purge
module load gaussian/g09

if [[ ! -f "$1" ]]; then
    echo "File $1 does not exist!"
    exit 1
fi

JOBFILE="$(basename "$1")"

# Consider using /dev/shm for write-intensive Jobs like MPx
export GAUSS_SCRDIR=/tmp

echo "- Calculation:    Gaussian09 ($OMP_NUM_THREADS CPU)"
echo "- Input File:     $JOBFILE"
echo "- Host machine:   $HOSTNAME"
echo "- I/O directory:  $PWD"
echo "- Scratch dir:    $GAUSS_SCRDIR"

JOBFILE_BASE=${JOBFILE%.*}

# move an existing read-write file from a previous run into the scratch directory
[[ -f "$JOBFILE_BASE.rwf" ]] && mv "$JOBFILE_BASE.rwf" "$GAUSS_SCRDIR"

# stop GAUSSIAN 15 minutes (900 s) before the job's end time so the post-processing below can still run
timeout $((SLURM_JOB_END_TIME - $(date +%s) - 900)) g09 "$JOBFILE" "$JOBFILE_BASE.out"

# move any cube files back from the scratch directory
compgen -G "$GAUSS_SCRDIR/*.cub" > /dev/null && mv -v "$GAUSS_SCRDIR"/*.cub .
# convert the binary checkpoint file into a formatted checkpoint file
[[ -f "$JOBFILE_BASE.chk" ]] && formchk "$JOBFILE_BASE.chk"

The sbatch script needs the input file as an argument:

sbatch g09.sl formaldehyde.inp

Running Gaussview

Gaussview can be started from the login nodes using the bash alias gv defined by the Lmod module. Note that Gaussview5 is available with Gaussian09, and Gaussview6 with Gaussian16.

You need to enable X-Forwarding for your SSH session and load the Lmod module to make gv work.
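A typical session could look like this (the hostname is only a placeholder, use the actual login node):

ssh -X username@login.example.org
module load gaussian/g16
gv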

Simplified queuing-script

For quick execution, simplified queuing scripts g09q and g16q are provided. They may be used for small calculations or teaching purposes, as they make some default assumptions about the requested memory (2G per CPU).

Submit a job using g09q

g09q 4 formaldehyde.inp

Submit a job using g16q

g16q 4 formaldehyde.inp

Support

If you have any problems with GAUSSIAN, please contact the IT-Physik team (preferred) or the HPC-Servicedesk.