Simcenter StarCCM+ (Fluid Dynamics)
Simcenter StarCCM+ is a general purpose Computational Fluid Dynamics (CFD) code. Simcenter StarCCM+ has been part of the Siemens PLM software portfolio since April 2016 (formerly owned by CD-adapco, the Computational Dynamics-Analysis & Design Application Company Ltd). As a general purpose CFD code, Simcenter StarCCM+ provides a wide variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, and heat and mass transfer including CHT (conjugate heat transfer in solid domains).
Further information about Simcenter StarCCM+, licensing of the Siemens PLM software and related terms of software usage at LRZ, the Siemens PLM mailing list, access to the Siemens PLM software documentation and LRZ user support can be found on the main Siemens PLM documentation page.
Getting Started
Once you are logged into one of the LRZ cluster systems, you can check the availability (i.e. installation) of StarCCM+ software by:
> module avail starccm
Load the preferred StarCCM+ version environment module, e.g.:
> module load starccm/2024.2.1
One can use StarCCM+ in interactive GUI mode solely for pre- and/or postprocessing on the Login Nodes (Linux: SSH option "-Y" for X11 forwarding; Windows: using PuTTY and Xming for X11 forwarding). This interactive usage is mainly intended for quick simulation setup changes that require GUI access. Since StarCCM+ loads the mesh into the login node's memory, this approach is only applicable to comparably small cases. It is NOT permitted to run computationally intensive StarCCM+ simulations or postprocessing sessions with large memory consumption on Login Nodes. The formerly existing Remote Visualization systems were switched off in March 2024 without replacement due to their end of life. Any work with the Siemens PLM software (StarCCM+) related to interactive mesh generation as well as graphically intensive pre- and postprocessing tasks needs to be carried out on local computer systems using a Siemens PLM POD or node-locked license.
The Simcenter StarCCM+ GUI is started by:
> starccm+
The 3D results visualization program StarView+ can be launched by:
> starview+
Siemens PLM StarCCM+ on Linux Cluster and SupermUC-NG Login Nodes
StarCCM+ is a very resource-intensive application with respect to both main memory (RAM) and CPU resources! Please run StarCCM+ on login nodes with the greatest care and under your supervision (e.g. using the command "top" and pressing <Shift>-M in a second terminal window to sort by memory consumption)!
In particular, involving multiple StarCCM+ processes or parallelization can cause a high load on the login node and has the potential to massively disturb other users on the same system! Running StarCCM+ in compute or meshing mode on login nodes can easily lead to overload and make the login node unresponsive, so that a reboot of the machine is required. Be careful!
StarCCM+ applications that cause a high load on login nodes and disturb other users or the general operation of the login node will be terminated by system administrators without any prior notification!
Our recommendations on the login nodes:
- Running multiple instances of StarCCM+ by the same user is prohibited! Please run only one instance of the software on a particular login node at any time.
- It is only allowed to run a single instance of StarCCM+ on login nodes, solely for the purpose of pre- and/or postprocessing. Absolutely no (zero) StarCCM+ simulations are allowed on login nodes.
- The maximum allowed degree of StarCCM+ parallelization on login nodes is 4 CPU cores! Any StarCCM+ instance using a higher degree of parallelization on login nodes will be terminated by system administrators without any prior notification.
- If using up to 4 cores for StarCCM+ parallelization, it is recommended to switch StarCCM+ to OpenMPI. The default Intel MPI might not work on login nodes due to conflicts with SLURM.
- Please check the load and memory consumption of your own StarCCM+ session. Usually, you can do this via the "top" command as shown below. Pressing <Shift>-M sorts the displayed process list by the amount of memory consumed per process.
top -u $USER
- If a graphical user interface is needed, you may run a single instance of StarCCM+ via a VNC session (VNC server on the login nodes) to increase the performance and responsiveness of the GUI!
- If a graphical user interface is not needed, it is advised to run StarCCM+ via an interactive SLURM job or in batch mode under SLURM control. These jobs run on compute nodes. A high degree of parallelization is explicitly allowed here, as long as StarCCM+ is run efficiently (rule of thumb: at least approx. 10,000 mesh elements per CPU core)! Do not over-parallelize StarCCM+ simulations; HPC resources that are not used effectively are wasted and thereby taken away from other users.
- Repeated violation of the above-mentioned restrictions on StarCCM+ usage on login nodes may result in a ban of the affected user account and a notification to the scientific supervisor/professor.
Mixed vs. Double Precision Solver of StarCCM+
Siemens PLM provides installation packages for mixed precision and higher-accuracy double precision simulations. The latter comes at the price of approx. 20% higher execution times and approximately twice as large simulation result files. The LRZ module system provides access to both versions of StarCCM+.
Access to StarCCM+ mixed precision solvers e.g. by:
module load starccm/2024.2.1 # loading the mixed precision StarCCM+ module
Access to StarCCM+ double precision solvers e.g. by:
module load starccm_dp/2024.2.1 # loading the double precision StarCCM+ module
Simcenter StarCCM+ Parallel Execution (Batch Mode)
All parallel StarCCM+ simulations on the LRZ Linux Clusters and SuperMUC-NG are submitted as non-interactive batch jobs to the appropriate scheduling system (SLURM) into the different pre-defined parallel execution queues. Further information about the batch queuing systems and the queue definitions, capabilities and limitations can be found on the documentation pages of the corresponding HPC system (Linux Cluster, SuperMUC-NG).
For job submission to the batch queuing system, a corresponding small shell script needs to be provided, which contains:
- Batch queueing system specific commands for the job resource definition
- Module command to load the Simcenter StarCCM+ environment module
- Start command for the parallel execution of starccm+ with all appropriate command line parameters, including a controlling StarCCM+ Java macro.
The intended syntax and available command line options for the invocation of the starccm+ solver command can be displayed by:
> starccm+ -help
The configuration of the parallel cluster partition (list of node names and corresponding number of cores) is provided to the starccm+ command by the batch queuing system (SLURM) via the automatically generated environment variable $STAR_HOSTLIST, based on the information given by the cluster user in the job resource definition. The number of StarCCM+ solver processes is passed to the starccm+ solver command via the SLURM environment variable $SLURM_NTASKS.
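As an illustration, a typical parallel solver start line inside such a batch script might look like the following minimal sketch. The macro and simulation file names are placeholders, and the use of $STAR_HOSTLIST as a machine file is an assumption based on the description above; please verify the exact syntax with "starccm+ -help" and the complete examples below.
# Minimal sketch of a parallel starccm+ invocation (myMacro.java and mySimulation.sim are placeholders)
starccm+ -batch myMacro.java -np $SLURM_NTASKS -machinefile $STAR_HOSTLIST mySimulation.sim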
Furthermore, we recommend that LRZ cluster users write regular backup files for longer simulation runs, which can be used as the basis for a job restart in case of machine or job failure. A good practice for a 48-hour StarCCM+ simulation (max. time limit) is to write backup files every 6 or 12 hours. Further information can be found in the StarCCM+ documentation.
CoolMUC-2 : Simcenter StarCCM+ Job Submission on LRZ Linux Clusters running SLES15 using SLURM
The recent versions 2024.1.1 and 2024.2.1 of StarCCM+ are no longer compatible with the outdated CoolMUC-2 operating system. This issue is known and will not be fixed.
StarCCM+ can be provided on CoolMUC-2 (CM2) with support for both the default Intel MPI and OpenMPI on CM2 queues (cm2_tiny, cm2_std) with Infiniband interfaces.
Similar SLURM batch script syntax can be applied for either using Power-on-Demand licensing or accessing a license server which provides floating licenses for StarCCM+.
Using a Power-on-Demand License
In the following, an example of a SLURM job submission script for StarCCM+ on CoolMUC-2 (SLURM queue = cm2_tiny) is provided. The example is formulated for the use of POD (Power-on-Demand) licensing. POD keys can be obtained either through the TUM campus license or directly from Siemens PLM.
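A minimal sketch of such a script is given below. The resource values, file names and the POD key/server entries are placeholders and assumptions for illustration; please check the current LRZ queue documentation and the Siemens PLM licensing documentation for the correct values.
#!/bin/bash
#SBATCH -J starccm_pod_cm2_tiny
#SBATCH -o ./%x.%j.out
#SBATCH -D ./
#SBATCH --clusters=cm2_tiny
#SBATCH --partition=cm2_tiny
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28          # assumption: 28 physical cores per CM2 node
#SBATCH --time=24:00:00
#SBATCH --export=NONE
#SBATCH --get-user-env

module load slurm_setup               # LRZ-specific SLURM environment setup
module load starccm/2024.2.1

# POD licensing: the POD key is a placeholder; the license server address is the usual
# Siemens POD server and should be verified against the current Siemens PLM documentation.
starccm+ -batch myMacro.java \
         -np $SLURM_NTASKS \
         -machinefile $STAR_HOSTLIST \
         -power -podkey <YOUR_POD_KEY> \
         -licpath 1999@flex.cd-adapco.com \
         mySimulation.sim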
Assuming that the above SLURM script has been saved under the filename "starccm_POD_cm2_tiny.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:
sbatch starccm_POD_cm2_tiny.sh
Using a LRZ Floating License
In the following, an example of a SLURM job submission script for StarCCM+ on CoolMUC-2 (SLURM queue = cm2_std) is provided. Correspondingly smaller jobs using fewer compute nodes can be submitted to the CM2 cluster queue cm2_tiny by adjusting the provided SLURM script accordingly (change the --clusters and --partition statements and omit the --qos statement; see the example for the cm2_tiny queue above). The example is formulated for the use of a StarCCM+ floating license provided by the LRZ internal license server license1.lrz.de (User-ID authentication for license check-out). Consequently, before using StarCCM+ floating licenses, the User-ID of the user has to be registered on the LRZ license server (please send an LRZ Service Request).
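A minimal sketch of the floating-license variant is given below. It differs from the POD example above mainly in the queue selection and in the licensing setup; the license server port is an assumption, and the actual check-out details are provided with your license registration.
#!/bin/bash
#SBATCH -J starccm_float_cm2_std
#SBATCH -o ./%x.%j.out
#SBATCH -D ./
#SBATCH --clusters=cm2
#SBATCH --partition=cm2_std
#SBATCH --qos=cm2_std
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=28          # assumption: 28 physical cores per CM2 node
#SBATCH --time=24:00:00
#SBATCH --export=NONE
#SBATCH --get-user-env

module load slurm_setup
module load starccm/2024.2.1

# Floating license: point StarCCM+ to the LRZ license server (the port number is a placeholder).
export CDLMD_LICENSE_FILE=1999@license1.lrz.de

starccm+ -batch myMacro.java \
         -np $SLURM_NTASKS \
         -machinefile $STAR_HOSTLIST \
         mySimulation.sim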
Assuming that the above SLURM script has been saved under the filename "starccm_floatlic_cm2_std.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes (lxlogin1,2,3,4):
sbatch starccm_floatlic_cm2_std.sh
CoolMUC-3 : Simcenter StarCCM+ Job Submission on LRZ Linux Clusters running SLES12 using SLURM
Current versions of StarCCM+ are no longer compatible with the outdated CoolMUC-3 operating system. This issue is known and will not be fixed.
Using a Power-on-Demand License
A job submission script for StarCCM+ on CoolMUC-3 (SLURM queue = mpp3) follows the same structure as the CoolMUC-2 examples above, with the queue settings adjusted accordingly. The example is formulated for the use of POD (Power-on-Demand) licensing. POD keys can be obtained either through the TUM campus license or directly from Siemens PLM.
Assuming that such a SLURM script has been saved under the filename "starccm_POD_mpp3_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:
sbatch starccm_POD_mpp3_slurm.sh
Using a LRZ Floating License
A job submission script for StarCCM+ on CoolMUC-3 (SLURM queue = mpp3) using a floating license follows the same structure as the corresponding CoolMUC-2 example above. The example is formulated for the use of a StarCCM+ floating license provided by the LRZ internal license server license1.lrz.de (User-ID authentication for license check-out). Consequently, before using StarCCM+ floating licenses, the User-ID of the user has to be registered on the LRZ license server (please send an LRZ Service Request).
Assuming that such a SLURM script has been saved under the filename "starccm_floatlic_mpp3_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:
sbatch starccm_floatlic_mpp3_slurm.sh
Warning: Do NOT additionally use mpirun, mpiexec or any srun command to start the parallel processes. This is done in the background by an MPI wrapper within the starccm+ startup script. Also, do not try to change the default Intel MPI to any other MPI version to run StarCCM+ in parallel. On the LRZ cluster systems only the usage of Intel MPI is supported and known to work properly with Simcenter StarCCM+.
CoolMUC-4 : Simcenter StarCCM+ Job Submission on LRZ Linux Clusters running SLES15 using SLURM
StarCCM+ can be provided on CoolMUC-4 (CM4) with support for both the default Intel MPI and OpenMPI on the CM4 queue (cm4_inter_large_mem) with Infiniband interfaces.
Similar SLURM batch script syntax can be applied for either using Power-on-Demand licensing or the access to a license server which provides floating licenses for StarCCM+.
Please note that CM4 compute nodes have access to the $HOME and $SCRATCH_DSS filesystems.
Using a Power-on-Demand License
In the following, an example of a SLURM job submission script for StarCCM+ on CoolMUC-4 (SLURM queue = cm4_inter_large_mem) is provided. The example is formulated for the use of POD (Power-on-Demand) licensing. POD keys can be obtained either through the TUM campus license or directly from Siemens PLM.
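The script follows the same structure as the CoolMUC-2 POD example above; only the SLURM header lines need to be adjusted, roughly as sketched below. The cluster name and the core count per node are assumptions and should be checked against the current CoolMUC-4 documentation.
#SBATCH --clusters=inter                   # assumed cluster name for the cm4_inter_large_mem partition
#SBATCH --partition=cm4_inter_large_mem
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=112              # assumption: physical core count of a CM4 node
#SBATCH --time=24:00:00
#SBATCH --export=NONE
#SBATCH --get-user-env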
Assuming that such a SLURM script has been saved under the filename "starccm_POD_cm4.sh", the SLURM batch job is submitted by issuing the following command on the Linux Cluster login node (lxlogin5):
sbatch starccm_POD_cm4.sh
Using a LRZ Floating License
A job submission script for StarCCM+ on CoolMUC-4 (SLURM queue = cm4_inter_large_mem) using a floating license follows the same structure as the corresponding CoolMUC-2 example above. The example is formulated for the use of a StarCCM+ floating license provided by the LRZ internal license server license1.lrz.de (User-ID authentication for license check-out). Consequently, before using StarCCM+ floating licenses, the User-ID of the user has to be registered on the LRZ license server (please send an LRZ Service Request).
Assuming that such a SLURM script has been saved under the filename "starccm_floatlic_cm4.sh", the SLURM batch job is submitted by issuing the following command on the Linux Cluster login node (lxlogin5):
sbatch starccm_floatlic_cm4.sh
SuperMUC-NG : Simcenter StarCCM+ Job Submission on SNG running SLES15 using SLURM
In the following, an example of a SLURM job submission script for StarCCM+ on SuperMUC-NG (login node: skx.supermuc.lrz.de, SLURM partition = test) is provided.
Please note that POD licensing is not supported on SuperMUC-NG, since it is not possible to reach any external license server from SuperMUC-NG compute nodes due to additional hardening of this supercomputer machine.
LRZ can provide a rather small number of StarCCM+ floating licenses on request. Alternatively, users interested in using StarCCM+ on SuperMUC-NG need to provide their own StarCCM+ licenses on a license server hosted in the LRZ network. For that, the migration of existing license pools to this LRZ license server needs to be arranged with Siemens PLM as the software vendor of StarCCM+. If this is the licensing solution for StarCCM+ on SuperMUC-NG you would like to go for, please contact the LRZ Service Desk accordingly.
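A minimal sketch of such a script is given below. The project account, node and core counts, and the license server entry are placeholders and assumptions; the test partition is intended for short runs only, so adjust partition and time limit for production jobs.
#!/bin/bash
#SBATCH -J starccm_sng
#SBATCH -o ./%x.%j.out
#SBATCH -D ./
#SBATCH --partition=test
#SBATCH --account=<YOUR_PROJECT_ID>        # placeholder: your SuperMUC-NG project ID
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48               # assumption: 48 physical cores per SuperMUC-NG node
#SBATCH --time=00:30:00
#SBATCH --export=NONE
#SBATCH --get-user-env

module load slurm_setup
module load starccm/2024.2.1

# Floating license served from within the LRZ network; server and port are placeholders,
# use the values arranged with LRZ / Siemens PLM for SuperMUC-NG.
export CDLMD_LICENSE_FILE=1999@license1.lrz.de

starccm+ -batch myMacro.java \
         -np $SLURM_NTASKS \
         -machinefile $STAR_HOSTLIST \
         mySimulation.sim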
Assuming that the above SLURM script has been saved under the filename "starccm_sng_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the SuperMUC-NG login nodes:
sbatch starccm_sng_slurm.sh