Abaqus (Structural Mechanics)
Abaqus is a general-purpose Computational Structural Mechanics (CSM) code. It has been part of the Dassault Systèmes software portfolio since the acquisition of Abaqus Inc. in May 2005. As a leading program for nonlinear finite element analysis and a general-purpose CSM code, Simulia Abaqus FEA provides a wide variety of material modeling capabilities as well as a collection of multiphysics capabilities, such as coupled acoustic-structural, piezoelectric, and structural-pore analyses. The Simulia Abaqus FEA software package includes Abaqus/Standard, Abaqus/Explicit and Abaqus/CFD.
Further information about Abaqus, the licensing of Dassault Systèmes (DS) software and the related terms of software usage at LRZ, the Dassault Systèmes software mailing list, access to the Dassault Systèmes software documentation, and LRZ user support can be found on the main Dassault Systèmes Software documentation page.
Getting Started
Once you are logged into one of the LRZ cluster systems, you can check the availability (i.e. the installed versions) of the Abaqus software by:
> module avail abaqus
Load the preferred Abaqus version environment module, e.g.:
> module load abaqus/2023
Abaqus can be used in interactive GUI mode on the login nodes solely for the purpose of pre- and/or postprocessing (Linux: SSH option "-Y" for X11 forwarding; Windows: e.g. PuTTY with XMing for X11 forwarding). This interactive usage is mainly intended for quick simulation setup changes that require GUI access. It is NOT permitted to run computationally intensive Abaqus simulations or postprocessing sessions with large memory consumption on the login nodes.
The Abaqus GUI is started by:
> abaqus cae
Abaqus Licensing - RESEARCH vs. TEACHING
Two different license types are available for the use of the Abaqus software. For scientific research the license type RESEARCH should be used; it provides access to the Abaqus solvers without capability limits. For testing and teaching purposes, as well as for computationally less intensive tasks requiring only a very limited number of CPU cores, the license type TEACHING can and should be used. The TEACHING license type is limited to execution on 1-4 CPU cores and should therefore be used in the serial queue of the LRZ Linux Clusters (see the example provided below). For further limitations of the TEACHING license type please refer to the Abaqus documentation by Dassault Systèmes.
For TUM/UTG users the corresponding license settings in the SLURM scripts (and correspondingly in the local abaqus_v6.env file) are as follows:
| License Type | Settings in the SLURM script / abaqus_v6.env file |
|---|---|
| RESEARCH | license_server_type=FLEXNET, abaquslm_license_file="8101@license4.lrz.de", academic=RESEARCH |
| TEACHING | license_server_type=FLEXNET, abaquslm_license_file="8101@license6.lrz.de", academic=TEACHING |
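As an illustration, the following minimal sketch shows how a SLURM script could write these settings into the local abaqus_v6.env file (here with the RESEARCH values from the table; adapt the server and license type as needed):

```bash
# Sketch: overwrite the local Abaqus environment file with the license
# settings from the table above (RESEARCH values; adapt as needed).
cat > abaqus_v6.env << 'EOF'
license_server_type=FLEXNET
abaquslm_license_file="8101@license4.lrz.de"
academic=RESEARCH
EOF
```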
Why should I call Abaqus through the Abaqus-Python-Wrapper?
When Abaqus 2020/2021/2022/2023 was installed on the LRZ filesystem for CoolMUC-2, it turned out that the Abaqus software does not work entirely correctly. The issue experienced and described here is apparently a well-known deficiency of the Abaqus software on some (but not all) Linux systems; corresponding descriptions and attempts to mitigate the issue can be found on the internet.
When Abaqus is called from the command line prompt of e.g. an LRZ Linux Cluster login node (for testing purposes and with a small input file), the Abaqus software carries out the simulation until the message string "THE ANALYSIS HAS COMPLETED SUCCESSFULLY" appears in the status file (*.sta). One would now expect Abaqus to finish its work and return an accessible command line prompt to the user. Instead, the Abaqus/Standard process seems to hang and the command line prompt does not become available again. While this is not much of a problem in interactive usage of Abaqus, it becomes a major issue if an Abaqus simulation is submitted to a Linux Cluster in batch processing mode. In this case the SLURM job would hang and block cluster resources until either the user or the SLURM scheduler terminates the hanging process. Substantial Linux Cluster resources would thereby be wasted, because the user-defined maximum execution time of the SLURM job might be substantially longer than the actual execution time of the Abaqus simulation.
To mitigate this issue, a Python wrapper for Abaqus is provided. This Python wrapper launches Abaqus in background mode and afterwards constantly monitors the Abaqus status file (*.sta) of the simulation. Once the status file appears in the working directory and contains the success message issued by Abaqus, the wrapper terminates the Abaqus/Standard process and the simulation is finished.
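The underlying idea can be sketched in a few lines of shell code (a simplified illustration only, not the actual abq_wrapper.py; the job name, polling interval, and process handling are assumptions):

```bash
# Simplified sketch of the wrapper logic (not the actual abq_wrapper.py):
# launch Abaqus in the background and poll the status file for the success
# message; job name and polling interval are placeholders.
JOB=my_model
abaqus job=$JOB interactive &
ABQ_PID=$!
while kill -0 "$ABQ_PID" 2> /dev/null; do
    if grep -q "THE ANALYSIS HAS COMPLETED SUCCESSFULLY" "${JOB}.sta" 2> /dev/null; then
        kill "$ABQ_PID"    # terminate the otherwise hanging Abaqus process
        break
    fi
    sleep 30               # polling interval (assumption)
done
```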
The Abaqus-Python-wrapper can be called with the following syntax:
Usage of the Abaqus-Python-wrapper:
abq_wrapper.py --job=<jobfile> --script=<Python-Script> --double=<explicit|both|off> --memory=<memory> --cpus=<cpus> [ --user=<Fortran-Routine> ]
or:
abq_wrapper.py -j <jobfile> -s=<Python-Script> -d <explicit|both|off> -m <memory> -c <cpus> [ -u <Fortran-Routine> ]
The input file should be given without the filename extension, i.e. without .inp. The argument for the usage of a user FORTRAN routine is optional, i.e. it can be omitted if not required. The above syntax is printed to the screen if the Abaqus-Python-wrapper is called with the "--help" argument:
> abq_wrapper.py --help
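A typical invocation could then look as follows (job name and resource values are placeholders; whether --script may be omitted for plain input-file runs is an assumption here):
> abq_wrapper.py --job=my_model --double=off --memory="8 gb" --cpus=4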
If there is a need to support additional command line arguments that should be recognized by the Abaqus-Python-wrapper and passed through to the Abaqus executable, please specify your needs in a Service Request to the LRZ application support team.
Abaqus Parallel Execution (Batch Mode)
All parallel Abaqus simulations on the LRZ Linux Clusters and SuperMUC-NG are submitted as non-interactive batch jobs to the scheduling system (SLURM) into the different predefined parallel execution queues. Further information about the batch queuing system and the queue definitions, capabilities and limitations can be found on the documentation pages of the corresponding HPC system (Linux Cluster, SuperMUC-NG).
For job submission to a batch queuing system a corresponding small shell script needs to be provided, which contains:
- Batch queueing system specific commands for the job resource definition
- Module command to load the Abaqus environment module
- Commands for the assembly of the Abaqus machines/node list and the customized environment file for Abaqus
- Start command for parallel execution of Abaqus with all appropriate command line parameters - here again using the above-mentioned Abaqus-Python-wrapper
The intended syntax and available command line options for the invocation of the Abaqus solver command can be displayed by:
> abaqus -help
The configuration of the parallel cluster partition (list of node names and corresponding numbers of cores) is provided to the abaqus command by the batch queuing system (SLURM) through SLURM environment variables. These contain the information specified by the cluster user in the job resource definition and the cluster compute nodes dynamically assigned by the SLURM scheduler at the time of execution. The number of Abaqus solver processes is passed to the abaqus solver command (or to abq_wrapper.py correspondingly) via the SLURM environment variable $SLURM_NTASKS, as sketched below.
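For multi-node runs, the host list can be assembled from the SLURM allocation, for example as in the following sketch (variable names and the uniform per-node task count are assumptions; mp_host_list is the standard Abaqus environment-file entry for the host list):

```bash
# Sketch: build the Abaqus host list from the SLURM node allocation and
# append it to the local abaqus_v6.env file (assumes a uniform number of
# tasks per node as given by $SLURM_NTASKS_PER_NODE).
HOSTLIST=""
for host in $(scontrol show hostnames "$SLURM_JOB_NODELIST"); do
    HOSTLIST="${HOSTLIST}['${host}',${SLURM_NTASKS_PER_NODE}],"
done
echo "mp_host_list=[${HOSTLIST%,}]" >> abaqus_v6.env
```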
Serial Queue of Linux Clusters: Abaqus Job Submission using SLURM on a small Number of CPU Cores (1-4; typical for the TEACHING license type)
The name "serial queue" is to a certain extend misleading here. The serial queue of LRZ Linux Clusters (CM2) differ from other CoolMUC-2/-3 queues in that regard, that the access is granted to just one single cluster node and that the access to this cluster node is non-exclusive, i.e. might be shared with other cluster users depending on the resource requirements of the job as they are specified in the SLURM script. Nevertheless the launched application can make use of more than just a single CPU core on that cluster node, i.e. apply a certain degree of parallelization - either in shared memory mode or in distributed memory mode (MPI).
In the following an example of a job submission script for Abaqus with the TEACHING license type on CoolMUC-2 (SLURM queue = serial) is provided. The example is formulated for the use of an Abaqus floating license of license type TEACHING provided by the TUM/UTG license server license6.lrz.de (User-ID authentication for license check-out). Consequently, before using an Abaqus floating license, the license owner needs to register his or her LRZ User-ID on the appropriate Dassault Systèmes license server, and the license server information needs to be included in the Abaqus environment file "abaqus_v6.env" accordingly. Since the SLURM script overwrites this Abaqus environment file in the current working directory, any intended changes or additions to this environment file need to be implemented in the SLURM script of the user (not written directly into the Abaqus environment file, as most users might be used to).
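A minimal sketch of such a script is shown below (job name, resource limits, partition names, and the input file name are placeholders to be checked against the current Linux Cluster documentation):

```bash
#!/bin/bash
# Sketch of a serial-queue SLURM script for the TEACHING license type
# (resources, partition names, and input file name are placeholders).
#SBATCH -J abaqus_serial
#SBATCH -o %x.%j.out
#SBATCH -D ./
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --ntasks=4              # TEACHING license: 1-4 CPU cores
#SBATCH --mem=8G
#SBATCH --time=02:00:00

module load abaqus/2023

# Overwrite the local Abaqus environment file with the TEACHING license settings
cat > abaqus_v6.env << 'EOF'
license_server_type=FLEXNET
abaquslm_license_file="8101@license6.lrz.de"
academic=TEACHING
EOF

# Run Abaqus through the Python wrapper (input file my_model.inp, extension omitted)
abq_wrapper.py --job=my_model --double=off --memory="8 gb" --cpus=$SLURM_NTASKS
```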
The license server information "8101@license6.lrz.de" in the above SLURM script needs to be adapted to the license server providing the valid Abaqus licenses. For TUM/UTG users and Abaqus license type TEACHING this license server is "8101@license6.lrz.de" as shown above.
Assuming that the above SLURM script has been saved under the filename "abaqus_serial.sh", the SLURM batch job is submitted by issuing the following command on one of the CM2 Linux Cluster login nodes (lxlogin1,2,3,4):
sbatch abaqus_serial.sh
CoolMUC-2 : Abaqus Job Submission on LRZ Linux Clusters running SLES15 SP1 using SLURM
Abaqus is provided on CoolMUC-2 (CM2) with support for the Intel MPI message passing library on CM2 queues (cm2_tiny, cm2_std) with Infiniband interfaces.
In the following an example of a job submission script for Abaqus with the RESEARCH license type on CoolMUC-2 (SLURM queue = cm2_tiny) is provided. Correspondingly larger jobs using a larger number of compute nodes can be submitted to the CM2 cluster queue cm2_std by adjusting the provided SLURM script accordingly (change of the --clusters, --partition and --qos statements). The example is formulated for the use of an Abaqus floating license provided by the LRZ internal license server license1.lrz.de (User-ID authentication for license check-out). Consequently, before using an Abaqus floating license, the license owner needs to register his or her LRZ User-ID on the appropriate Dassault Systèmes license server, and the license server information needs to be included in the Abaqus environment file "abaqus_v6.env" accordingly. Since the SLURM script overwrites this Abaqus environment file in the current working directory, any intended changes or additions to this environment file need to be implemented in the SLURM script of the user (not written directly into the Abaqus environment file, as most users might be used to).
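A sketch for the cm2_tiny queue could look as follows (node and core counts, run time, partition names, and the input file name are placeholders; the host-list assembly follows the sketch given further above):

```bash
#!/bin/bash
# Sketch of a cm2_tiny SLURM script for the RESEARCH license type
# (node count, run time, and input file name are placeholders).
#SBATCH -J abaqus_cm2_tiny
#SBATCH -o %x.%j.out
#SBATCH -D ./
#SBATCH --clusters=cm2_tiny
#SBATCH --partition=cm2_tiny
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28    # CM2 compute nodes provide 28 cores each
#SBATCH --time=08:00:00

module load abaqus/2023

# License settings (RESEARCH) and the SLURM-derived Abaqus host list
cat > abaqus_v6.env << 'EOF'
license_server_type=FLEXNET
abaquslm_license_file="<your_port>@<your_license_server>"
academic=RESEARCH
EOF
HOSTLIST=""
for host in $(scontrol show hostnames "$SLURM_JOB_NODELIST"); do
    HOSTLIST="${HOSTLIST}['${host}',${SLURM_NTASKS_PER_NODE}],"
done
echo "mp_host_list=[${HOSTLIST%,}]" >> abaqus_v6.env

abq_wrapper.py --job=my_model --double=off --memory="4 gb" --cpus=$SLURM_NTASKS
```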
The license server information "<your_port>@<your_license_server>" in the above SLURM script needs to be adapted to the license server providing the valid Abaqus licenses. For TUM/UTG users this license server is "8101@license4.lrz.de".
Assuming that the above SLURM script has been saved under the filename "abaqus_cm2_tiny.sh", the SLURM batch job is submitted by issuing the following command on one of the CM2 Linux Cluster login nodes (lxlogin1,2,3,4):
sbatch abaqus_cm2_tiny.sh
CoolMUC-3 : Abaqus Job Submission on LRZ Linux Clusters running SLES12 SP5 using SLURM
Abaqus is provided on CoolMUC-3 (CM3) with support for the Intel MPI message passing library on MPP3 queues (mpp3) with Intel OmniPath communication interfaces.
Please note that one node of the CoolMUC-3 Linux Cluster has about 96 GB of node memory, of which approx. 84 GB are available for user programs. Furthermore, running 64 Abaqus tasks on the 64 cores of an Intel KNL processor requires 28 Abaqus parallel tokens. The relationship between the number of used cores and the number of required Abaqus parallel tokens is strongly nonlinear: a run on two fully used Intel KNL processor nodes (128 Abaqus processes/tasks) requires a total of 38 Abaqus parallel tokens. By LRZ policy it is not permitted to run Abaqus on CM3 with an only partially filled Linux Cluster node (e.g. only 20 out of 64 possible processes). If such SLURM jobs are encountered on the machine, the user account might be blocked from further usage due to the waste of Linux Cluster resources.
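For orientation: the token counts quoted above are consistent with the commonly cited Abaqus license token formula, tokens = floor(5 · N^0.422) for N cores (an assumption here, not an official LRZ statement; verify against the current Abaqus licensing documentation):

```bash
# Estimate Abaqus parallel tokens for N cores using the commonly cited
# formula floor(5 * N^0.422) (assumption; verify against current licensing).
abq_tokens() { awk -v n="$1" 'BEGIN { printf "%d\n", 5 * n^0.422 }'; }
abq_tokens 64    # -> 28 tokens (one fully used CM3 node)
abq_tokens 128   # -> 38 tokens (two fully used CM3 nodes)
```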
In the following an example of a job submission script for Abaqus with the RESEARCH license type on CoolMUC-3 (SLURM queue = mpp3) is provided. The example is formulated for the use of an Abaqus floating license provided by the LRZ internal license server license4.lrz.de (User-ID authentication for license check-out). Consequently, before using an Abaqus floating license, the license owner needs to register his or her LRZ User-ID on the appropriate Dassault Systèmes license server, and the license server information needs to be included in the Abaqus environment file "abaqus_v6.env" accordingly. Since the SLURM script overwrites this Abaqus environment file in the current working directory, any intended changes or additions to this environment file need to be implemented in the SLURM script of the user (not written directly into the Abaqus environment file, as most users might be used to).
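A sketch of the SBATCH header for mpp3 is given below; the remainder of the script (module load, environment file with the license settings, host-list assembly, and wrapper call) can follow the cm2_tiny sketch above. Cluster/partition names and resource values are placeholders:

```bash
# mpp3 header sketch: fully filled KNL nodes, in line with the LRZ policy
# described above (cluster/partition names and values are placeholders).
#SBATCH -J abaqus_mpp3
#SBATCH -o %x.%j.out
#SBATCH -D ./
#SBATCH --clusters=mpp3
#SBATCH --partition=mpp3_batch
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64    # use all 64 cores of each KNL node
#SBATCH --time=08:00:00
```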
The license server information "<your_port>@<your_license_server>" in the above SLURM script needs to be adapted to the license server providing the valid Abaqus licenses. For TUM/UTG users this license server is "8101@license4.lrz.de".
Assuming that the above SLURM script has been saved under the filename "abaqus_mpp3.sh", the SLURM batch job is submitted by issuing the following command on the CM3 (MPP3) Linux Cluster login node (lxlogin8) or any of the CM2 Linux Cluster login nodes (lxlogin1,2,3,4):
sbatch abaqus_mpp3.sh
CoolMUC-4 : Abaqus Job Submission on LRZ Linux Clusters running SLES15 SP4 using SLURM
Abaqus is provided on a small set of nodes of the new CoolMUC-4 (CM4) with support for the Intel MPI message passing library on the CM4 queue (cm4_inter_large_mem) with InfiniBand interfaces.
In the following an example of a job submission script for Abaqus with the RESEARCH license type on CoolMUC-4 (SLURM queue = cm4_inter_large_mem) is provided. The example is formulated for the use of an Abaqus floating license provided by the LRZ internal license server license1.lrz.de (User-ID authentication for license check-out). Consequently, before using an Abaqus floating license, the license owner needs to register his or her LRZ User-ID on the appropriate Dassault Systèmes license server, and the license server information needs to be included in the Abaqus environment file "abaqus_v6.env" accordingly. Since the SLURM script overwrites this Abaqus environment file in the current working directory, any intended changes or additions to this environment file need to be implemented in the SLURM script of the user (not written directly into the Abaqus environment file, as most users might be used to).
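A sketch of the SBATCH header for the CM4 queue is given below; the remainder of the script can again follow the cm2_tiny sketch above. The --clusters/--partition combination and the per-node core count are assumptions to be checked against the current CM4 documentation:

```bash
# cm4_inter_large_mem header sketch (cluster/partition names and the
# per-node core count are placeholders; check the current CM4 documentation).
#SBATCH -J abaqus_cm4
#SBATCH -o %x.%j.out
#SBATCH -D ./
#SBATCH --clusters=inter
#SBATCH --partition=cm4_inter_large_mem
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=<cores_per_node>   # adapt to the CM4 node size
#SBATCH --time=08:00:00
```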
The license server information "<your_port>@<your_license_server>" in the above SLURM script needs to be adapted to the license server providing the valid Abaqus licenses. For TUM/UTG users this license server is "8101@license4.lrz.de".
Assuming that the above SLURM script has been saved under the filename "abaqus_cm4.sh", the SLURM batch job is submitted by issuing the following command on one of the CM2 Linux Cluster login nodes (lxlogin1,2,3,4):
sbatch abaqus_cm4.sh
SuperMUC-NG : Abaqus Job Submission on SNG running SLES12 using SLURM
Abaqus has not yet been tested on the SuperMUC-NG.