ANSYS Mechanical (CSM)

ANSYS computational structural mechanics (CSM) analysis software enables you to solve complex structural engineering problems and make better, faster design decisions. With the finite element analysis (FEA) tools available in the suite, you can customize and automate solutions for your structural mechanics problems. ANSYS structural mechanics software is available in two different software environments: ANSYS Workbench (the newer, GUI-oriented environment) and ANSYS Mechanical APDL (sometimes called ANSYS Classic, the older, script-driven MAPDL environment).

Getting Started

To use ANSYS Mechanical, the corresponding ANSYS environment module has to be loaded (either for interactive work or in the batch script):

> module load ansys/2024.R1
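
If you are unsure which ANSYS versions are installed on the respective system, you can first list the available modules and verify what has been loaded; the version shown here is only an example and may differ from the currently installed default:

> module avail ansys
> module load ansys/2024.R1
> module list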

Simulations that potentially place a high work load on the computer system (i.e. essentially all CSM/FEM engineering simulations) must be submitted as non-interactive (batch) jobs on the LRZ clusters via the respective job queueing system. More information can be found here for SLURM (Linux Cluster, SuperMUC-NG).
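
Once a job script (such as the examples below) has been written, it is handed over to SLURM with sbatch and can be monitored with squeue. The script name and cluster name used here are placeholders and have to be adjusted to your own setup:

> sbatch ansys_job.slurm
> squeue --clusters=cm2_tiny --user=$USER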

ANSYS Mechanical Job Submission on LRZ Linux Clusters using SLURM

For the non-interactive execution of ANSYS Mechanical, there are several options. The general command (according to the ANSYS documentation) is:

> ansys [-j jobname]
        [-d device_type]
        [-m work_space]
        [-db database_space]
        [-dir directory]
        [-b [nolist]] [-s [noread]]
        [-p ansys_product] [-g [off]]
        [-custom]
        [< inputfile] [> outputfile]
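
As a minimal illustration of this syntax, a batch run that reads an input file and redirects the output could look like the following sketch; the job name, working directory, and file names are placeholders only:

> ansys -j mysim -b -dir ./run < ./mysim.dat > ./mysim.out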

However, this does not always work as intended. In such cases, you should try to execute the program modules directly. For instance, in the case of MAPDL (here, a SLURM / CoolMUC-3 example is provided):

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
# --- Provide here your own working directory ---
#SBATCH -D ./
#SBATCH -J ansys_mpp3
#SBATCH --clusters=mpp3
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
# --- multiples of 64 for mpp3 ---
#SBATCH --mail-type=end
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#----------------------------------------------------
module load slurm_setup

# Extract from SLURM the information about cluster machine hostnames and number of tasks per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
# For later check of this information, echo it to stdout so that the information is captured in the job file:
echo $machines

module avail ansys
module load ansys/2023.R2
module list

# cat /proc/cpuinfo
echo ========================================== ANSYS Start ==============================================
# For a later check of the correctness of the supplied ANSYS MAPDL command, it is echoed to stdout
# so that it can be reviewed afterwards in the job output file:
echo mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
# Please do not forget to insert here your own DAT file with its correct name!
echo ========================================== ANSYS Stop ===============================================
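
Assuming the script above has been saved, e.g., as ansys_mpp3.slurm (the file name is arbitrary), it is submitted and monitored as follows:

> sbatch ansys_mpp3.slurm
> squeue --clusters=mpp3 --user=$USER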

For the CoolMUC-2 cluster, cm2_tiny queue, the corresponding example would look as follows:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J ansys_cm2_tiny
#SBATCH --clusters=cm2_tiny
#SBATCH --partition=cm2_tiny
#SBATCH --get-user-env
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
# ---- multiples of 28 for CoolMUC-2 ----
#SBATCH --mail-type=all
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#-----------------------
module load slurm_setup

# Extract from SLURM the information about cluster machine hostnames and number of tasks per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
# For a later check of this information, echo it to stdout so that it is captured in the job output file:
echo $machines

module list
module avail ansys
module load ansys/2024.R1
module list

# cat /proc/cpuinfo
echo ========================================== ANSYS Start ==============================================
# For a later check of the correctness of the supplied ANSYS MAPDL command, it is echoed to stdout
# so that it can be reviewed afterwards in the job output file:
echo mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
# Please do not forget to insert here your own DAT file with its correct name!
echo ========================================== ANSYS Stop ===============================================


The somewhat tedious but transparent extraction of the parallel resource information from SLURM is necessary, since the original MAPDL start script in the ANSYS software wraps the actual call to "mpiexec". Alternatively, you can start "ansysdis241" directly by using "mpiexec", i.e. the execution command line in the above SLURM script would need to be replaced by:

mpiexec ansysdis241 -dis -mpi INTELMPI -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out

Please note that the parallel machine specification (number of nodes, number of tasks per node) is omitted here; it is no longer necessary, because the SLURM queueing system provides this information directly to the "mpiexec" call.
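
As a quick sanity check, you can echo the resource information that SLURM exports to the job environment (standard SLURM output variables, set because --nodes and --ntasks-per-node are specified in the job script) before the "mpiexec" call:

echo "Allocated nodes : $SLURM_JOB_NUM_NODES"
echo "Tasks per node  : $SLURM_NTASKS_PER_NODE"
echo "Node list       : $SLURM_JOB_NODELIST"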

LRZ currently does not support the use of the ANSYS Remote Solve Manager (ANSYS RSM) and thus the batch execution of ANSYS Workbench projects. This is because ANSYS RSM does not support SLURM as a batch queueing system, so the parallel execution of ANSYS Workbench projects and the use of the ANSYS Parameter Manager for parallelized parametric design studies conflict with the concept of operation of the LRZ Linux Cluster and SuperMUC-NG.

ANSYS Mechanical Job Submission on SuperMUC-NG using SLURM

In the following, an example of a job submission batch script for ANSYS Mechanical on SuperMUC-NG (login node: skx.supermuc.lrz.de, SLURM partition = test) is provided for the batch queueing system SLURM. Please note that the supported ANSYS versions on SuperMUC-NG are ANSYS 2021.R1 or later. At this time, ANSYS 2024.R1 is the default version.

Please mind in the following example the required change of the command line parameter to "-mpi intelmpi2019" for ANSYS version 2022.R1. Starting with ANSYS Release 2022.R2, the software utilizes Intel MPI 2021.6/7 and no longer requires additional command line arguments.
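
If the same job script is used with different ANSYS versions, the version-dependent MPI flag can also be selected inside the script. The following is only a sketch; the version string is set manually here and is not provided automatically by the ANSYS module:

# Set this to the ANSYS version loaded via "module load" above (manual assumption, not auto-detected):
ANSYS_VERSION="2024.R1"
if [ "$ANSYS_VERSION" = "2022.R1" ]; then
        MPI_FLAG="intelmpi2019"   # only ANSYS 2022.R1 on SuperMUC-NG needs this setting
else
        MPI_FLAG="INTELMPI"       # all other supported versions
fi
# Later in the script: mapdl -dis -mpi $MPI_FLAG -machines $machines ...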

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J ansys_test
#SBATCH --partition=test
# ---- partitions : test | micro | general | fat | large
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
# ---- multiples of 48 for SuperMUC-NG ----
#SBATCH --mail-type=END
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#SBATCH --account=<Your_own_project>
#SBATCH --switches=1@24:00:00
#
#########################################################
## Switch to disable energy-aware runtime (if required) :
## #SBATCH --ear=off
#########################################################
module load slurm_setup

machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
        machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
echo $machines

module avail ansys
module load ansys/2024.R1
module list

# cat /proc/cpuinfo
echo ========================================== ANSYS Start ==============================================
# For a later check of the correctness of the supplied ANSYS Mechanical command, it is echoed to stdout
# so that it can be reviewed afterwards in the job output file.
#
# For all versions up to and including 2021.R2, and for 2022.R2 and later, please use:
# echo mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
# mapdl -dis -mpi INTELMPI -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
#
# Only for ANSYS version 2022.R1 on SuperMUC-NG, please use:
echo mapdl -dis -mpi intelmpi2019 -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
mapdl -dis -mpi intelmpi2019 -machines $machines -j "file" -s read -l en-us -b -i ./<DAT-Filename> -o ./file.out
# Please do not forget to insert here your own DAT and OUT file with their intended and correct names!
echo ========================================== ANSYS Stop ===============================================
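
Assuming the script above has been saved, e.g., as ansys_sng.slurm (the file name is arbitrary), submission and monitoring on SuperMUC-NG work in the usual SLURM way:

> sbatch ansys_sng.slurm
> squeue --user=$USER --partition=test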