LS-Dyna is an advanced general-purpose multiphysics simulation package developed by the Livermore Software Technology Corporation (LSTC), which was acquired by ANSYS Inc. in Q4/2019. LS-Dyna is part of the standard ANSYS software distribution and comes both with an integration into the ANSYS Workbench environment and as a standalone solver executable. While the package continues to add capabilities for the calculation of many complex, real-world problems, its origins and core competency lie in highly nonlinear transient dynamic finite element analysis (FEA) using explicit time integration.
Getting Started
To use the LS-Dyna solver, the corresponding LS-Dyna environment module has to be loaded (either for interactive work or in the batch script):
> module load lsdyna/2024.R2
Simulations that potentially produce a high workload on the computer system (i.e. essentially all CSM/FEM engineering simulations) must be submitted as non-interactive (batch) jobs on the LRZ clusters via the respective job queueing system. More information can be found here for SLURM (Linux Cluster, SuperMUC-NG).
LS-Dyna Job Submission on LRZ Linux Clusters using SLURM
The LS-Dyna solver can be used in almost the same manner as standalone ANSYS Mechanical (MAPDL) solver simulations using the ANSYS FEA solver. Since ANSYS Release 2022.R2 (November 2022), both the basic LS-Dyna solver license (license key: dyna) and the required LS-Dyna parallel licenses (license key: dysmp) are included in the LRZ campus license for the ANSYS software. On LRZ high-performance computing systems the LS-Dyna licenses are provided free of charge.
In contrast to the mainline ANSYS solver products, the LS-Dyna solver can be executed on only a single CPU core with the basic solver license (1*dyna). Executing the LS-Dyna solver on a higher number of CPU cores (N cores) requires the checkout of (N-1) LS-Dyna HPC licenses ((N-1)*dysmp). For historical reasons and a still existing gap in the ANSYS software integration of this solver, the LS-Dyna HPC licenses are different from the generally applicable ANSYS HPC licenses (anshpc).
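As a quick sanity check of this license arithmetic, the number of dysmp licenses required for a given SLURM allocation can be computed from the node and task counts. This is a minimal sketch; the example values mirror the --nodes and --ntasks-per-node settings used in the CoolMUC-2 script further below:

```shell
#!/bin/bash
# License demand for an LS-Dyna run on N cores:
#   1 x dyna (base solver license) + (N-1) x dysmp (HPC licenses)
nodes=2                # e.g. --nodes=2
tasks_per_node=28      # e.g. --ntasks-per-node=28 on CoolMUC-2
cores=$((nodes * tasks_per_node))
dysmp=$((cores - 1))
echo "Cores: $cores -> licenses: 1 x dyna + $dysmp x dysmp"
# -> Cores: 56 -> licenses: 1 x dyna + 55 x dysmp
```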
LS-Dyna Serial Execution
A serial LS-Dyna solver run can be executed, e.g. in the serial SLURM queue, by the following script:
#!/bin/bash
#SBATCH -o ./myjob.lsdyna.%j.%N.out
#SBATCH -D ./
#SBATCH -J lsdyna_serial
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --get-user-env
#SBATCH --mem=4096mb
#SBATCH --cpus-per-task=1
#SBATCH --mail-type=NONE
### mail-type can be one of (NONE, ALL, BEGIN, END, FAIL, ...)
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#----------------------------------------------------
module load slurm_setup
# Extract from SLURM the information about cluster machine hostnames and number of tasks per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
    machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
# For later check of this information, echo it to stdout so that the information is captured in the job file:
echo $machines
module avail lsdyna
module load lsdyna/2024.R2
module list
echo ========================================== LS-Dyna Start ==============================================
# For later check of the correctness of the supplied ANSYS LS-DYNA command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo lsdyna I=<...my_lsdyna-example_case...>.k
lsdyna I=<...my_lsdyna-example_case...>.k
# Please do not forget to insert here your own *.k file with its correct name!
echo ========================================== LS-Dyna Stop ===============================================
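The machines loop in the script above concatenates the allocated hostnames and the per-node task count into the colon-separated string that the -machines option of the parallel lsdyna invocations below expects. Outside of a SLURM job the same construction can be sketched with fixed example values (the hostnames here are made up; in a real job they come from $SLURM_JOB_NODELIST):

```shell
#!/bin/bash
# Stand-in for "scontrol show hostnames=$SLURM_JOB_NODELIST":
hostlist="node01 node02"       # hypothetical node names
ntasks_per_node=28             # mirrors --ntasks-per-node=28
machines=""
for i in $hostlist; do
    machines=$machines:$i:$ntasks_per_node
done
machines=${machines:1}         # strip the leading colon
echo $machines                 # -> node01:28:node02:28
```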
LS-Dyna Local Parallel or Distributed Parallel Execution
Since November 2022, the ANSYS Academic Research licenses allow the parallel execution of the LS-Dyna solver, as long as a sufficient number of LS-Dyna HPC licenses (license key: dysmp) is available from the LRZ campus license. On LRZ high-performance computing systems (CoolMUC-2/3), the LS-Dyna solver and LS-Dyna parallel licenses (dyna, dysmp) are provided to Linux Cluster users free of charge.
Provided that LS-Dyna HPC licenses are available, a corresponding LS-Dyna SLURM submission script for the CoolMUC-2 Linux cluster (cm2_tiny) could look like the following, in local parallel execution mode (--nodes=1) or distributed parallel execution mode (e.g. --nodes=2):
#!/bin/bash
#SBATCH -o ./job.lsdyna.%j.%N.out
# --- Provide here your own working directory ---
#SBATCH -D ./
#SBATCH -J lsdyna_cm2
#SBATCH --clusters=cm2_tiny
#SBATCH --partition=cm2_tiny
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28
# --- multiples of 28 for cm2 ---
#SBATCH --mail-type=end
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#----------------------------------------------------
module load slurm_setup
# Extract from SLURM the information about cluster machine hostnames and number of tasks per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
    machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
# For later check of this information, echo it to stdout so that the information is captured in the job file:
echo $machines
module avail lsdyna
module load lsdyna/2024.R2
module list
echo ========================================== LS-Dyna Start ==============================================
# For later check of the correctness of the supplied ANSYS LS-DYNA command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo lsdyna pr=dyna i=my_test.k -dp -dis -machines $machines
lsdyna pr=dyna i=my_test.k -dp -dis -machines $machines
# Please do not forget to insert here your own *.k file with its correct name!
echo ========================================== LS-Dyna Stop ===============================================
Similarly, a parallel LS-Dyna solver run on CoolMUC-3 (mpp3) can be initiated by the following SLURM script:
#!/bin/bash
#SBATCH -o ./job.lsdyna.%j.%N.out
# --- Provide here your own working directory ---
#SBATCH -D ./
#SBATCH -J lsdyna_mpp3
#SBATCH --clusters=mpp3
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
# --- multiples of 64 for mpp3 ---
#SBATCH --mail-type=end
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00
#----------------------------------------------------
module load slurm_setup
# Extract from SLURM the information about cluster machine hostnames and number of tasks per node:
machines=""
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
    machines=$machines:$i:$SLURM_NTASKS_PER_NODE
done
machines=${machines:1}
# For later check of this information, echo it to stdout so that the information is captured in the job file:
echo $machines
module avail lsdyna
module load lsdyna/2024.R2
module list
echo ========================================== LS-Dyna Start ==============================================
# For later check of the correctness of the supplied ANSYS LS-DYNA command it is echoed to stdout,
# so that it can be reviewed afterwards in the job file:
echo lsdyna pr=dyna i=my_test.k -dp -dis -machines $machines
lsdyna pr=dyna i=my_test.k -dp -dis -machines $machines
# Please do not forget to insert here your own *.k file with its correct name!
echo ========================================== LS-Dyna Stop ===============================================