ANSYS Lumerical (Optics / Photonics)

ANSYS Lumerical is a complete photonics simulation software solution that enables the design of photonic components, circuits, and systems. Device- and system-level tools allow designers to model interacting optical, electrical, and thermal effects. Flexible interoperability between the products enables a variety of workflows that combine device multiphysics and photonic circuit simulation with third-party design automation and productivity tools.

Further information about ANSYS Lumerical, licensing of the ANSYS software and related terms of software usage at LRZ, the ANSYS mailing list, access to the ANSYS software documentation and LRZ user support can be found on the main ANSYS documentation page.

Licensing

Regarding licensing, ANSYS Lumerical is still an exception among the ANSYS software provided at LRZ. ANSYS Inc. has not yet included ANSYS Lumerical licenses in the general ANSYS Academic Campus license, so ANSYS Lumerical licenses can NOT be provided by LRZ. Interested institutions have to obtain licenses for ANSYS Lumerical themselves (e.g. through CADFEM GmbH, Grafing).

SSH Setup

Please note that the steps for the generation of the LRZ-internal SSH key setup, as described on the main ANSYS documentation page, need to be followed for the ANSYS Lumerical solvers as well.
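The authoritative step-by-step instructions are those on the main ANSYS documentation page. As a rough sketch, a typical LRZ-internal passwordless SSH setup (assuming an ed25519 key and default file locations) consists of generating a key pair without a passphrase and authorizing it for logins between the cluster nodes:

> ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
> cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
> chmod 600 ~/.ssh/authorized_keys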

Getting Started

Once you are logged into one of the LRZ cluster systems, you can check the availability (i.e. installation) of the ANSYS Lumerical software with:

> module avail lumerical

Load the preferred ANSYS Lumerical version environment module, e.g.:

> module load lumerical/2024.R2
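To verify that the environment module has been loaded and that the solver binaries are found in the search path, you can for example run (fdtd-engine-impi-lcl is the FDTD solver binary used in the job scripts below):

> module list
> which fdtd-engine-impi-lcl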

CoolMUC-2 : ANSYS Lumerical Job Submission on LRZ Linux Clusters running SLES15 using SLURM

In the following, an example batch script for submitting the ANSYS Lumerical FDTD solver to the SLURM queuing system on CoolMUC-2 (SLURM queues: cm2_tiny | cm2_std | cm2_large) is provided:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J fdtd_cm2_std
#SBATCH --clusters=cm2
#SBATCH --partition=cm2_std
#SBATCH --qos=cm2_std
#SBATCH --get-user-env
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=28
# --- should be 28 for cm2 ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00 
#----------------------------------------------------
module load slurm_setup

#----------- Generation of the ANSYS Lumerical machines file -----------------
machines=""
network="ib"
myhostname=`hostname`
machinefile=machinefile.txt
if [ -f $machinefile ]; then
  rm $machinefile
fi
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
  if [ "$i" = "$myhostname" ]; then
    machines=$i:$SLURM_NTASKS_PER_NODE
  else
    machines=$i$network:$SLURM_NTASKS_PER_NODE
  fi
  echo $machines >> $machinefile
done
cat $machinefile
echo "-------------------------------------------------------------------------"

module avail lumerical
module load lumerical/2024.R2
module list 

# cat /proc/cpuinfo
echo ========================================== ANSYS Lumerical Start ==============================================
# For a later check of the correctness of the supplied ANSYS Lumerical FDTD command, it is echoed to stdout
# so that it can be reviewed afterwards in the job output file:
echo mpirun -machinefile $machinefile fdtd-engine-impi-lcl -t 1 test.fsp
mpirun -machinefile $machinefile fdtd-engine-impi-lcl -t 1 test.fsp
# Please do not forget to replace test.fsp with the correct name of your own FSP file!
echo ========================================== ANSYS Lumerical Stop ===============================================
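For illustration, with hypothetical node names, the generated machinefile.txt of a 3-node job with 28 tasks per node would look like the following. The node on which the batch script itself runs is listed without the network suffix defined in the $network variable; all other nodes carry it:

i22r01c01s01:28
i22r01c01s02ib:28
i22r01c01s03ib:28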

Assuming that the above SLURM script has been saved under the filename "lumerical_cm2_std.sh", the batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch lumerical_cm2_std.sh
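The job status can then be monitored with the usual SLURM commands, and the solver output can be followed in the job output file once the job is running (the job ID and node name in the filename below are just placeholders):

squeue -M cm2 -u $USER
tail -f myjob.1234567.i22r01c01s01.out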

Correspondingly, the job script for the cm2_tiny queue would look like this:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J fdtd_cm2_tiny
#SBATCH --clusters=cm2_tiny
#SBATCH --partition=cm2_tiny
#SBATCH --get-user-env
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
# --- should be 28 for cm2 ---
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --export=NONE
#SBATCH --time=0:10:00 
#----------------------------------------------------
module load slurm_setup

#----------- Generation of the ANSYS Lumerical machines file -----------------
machines=""
network="ib"
myhostname=`hostname`
machinefile=machinefile.txt
if [ -f $machinefile ]; then
  rm $machinefile
fi
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
  if [ "$i" = "$myhostname" ]; then
    machines=$i:$SLURM_NTASKS_PER_NODE
  else
    machines=$i$network:$SLURM_NTASKS_PER_NODE
  fi
  echo $machines >> $machinefile
done
cat $machinefile
echo "-------------------------------------------------------------------------"

module avail lumerical
module load lumerical/2024.R2
module list 

# cat /proc/cpuinfo
echo ========================================== ANSYS Lumerical Start ==============================================
# For a later check of the correctness of the supplied ANSYS Lumerical FDTD command, it is echoed to stdout
# so that it can be reviewed afterwards in the job output file:
echo mpirun -machinefile $machinefile fdtd-engine-impi-lcl -t 1 test.fsp
mpirun -machinefile $machinefile fdtd-engine-impi-lcl -t 1 test.fsp
# Please do not forget to replace test.fsp with the correct name of your own FSP file!
echo ========================================== ANSYS Lumerical Stop ===============================================

Assuming that the above SLURM script has been saved under the filename "lumerical_cm2_tiny.sh", the batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch lumerical_cm2_tiny.sh

CoolMUC-4 : ANSYS Lumerical Job Submission on LRZ Linux Clusters running SLES15 using SLURM

In the following, an example batch script for submitting the ANSYS Lumerical FDTD solver to the SLURM queuing system on CoolMUC-4 (SLURM queue: cm4_inter_large_mem) is provided:

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J lumerical_cm4
#SBATCH --clusters=inter
#SBATCH --partition=cm4_inter_large_mem
#SBATCH --get-user-env
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=80
# ---- should be 80 for CoolMUC-4 ----
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --time=00:10:00
#-----------------------
module load slurm_setup

#----------- Generation of the ANSYS Lumerical machines file -----------------
machines=""
network=""
myhostname=`hostname`
machinefile=machinefile.txt
if [ -f $machinefile ]; then
  rm $machinefile
fi
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
  if [ "$i" = "$myhostname" ]; then
    machines=$i:$SLURM_NTASKS_PER_NODE
  else
    machines=$i$network:$SLURM_NTASKS_PER_NODE
  fi
  echo $machines >> $machinefile
done
cat $machinefile
echo "-------------------------------------------------------------------------"

module avail lumerical
module load lumerical/2024.R2
module list 

# cat /proc/cpuinfo
echo ========================================== ANSYS Lumerical Start ==============================================
# For a later check of the correctness of the supplied ANSYS Lumerical FDTD command, it is echoed to stdout
# so that it can be reviewed afterwards in the job output file:
echo mpirun -machinefile $machinefile fdtd-engine-impi-lcl -t 1 test.fsp
mpirun -machinefile $machinefile fdtd-engine-impi-lcl -t 1 test.fsp
# Please do not forget to replace test.fsp with the correct name of your own FSP file!
echo ========================================== ANSYS Lumerical Stop ===============================================

Assuming that the above SLURM script has been saved under the filename "lumerical_cm4.sh", the batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch lumerical_cm4.sh

General Concluding Remarks

On the LRZ cluster systems, only the usage of Intel MPI is supported and known to work properly with ANSYS Lumerical.
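If in doubt, you can check after loading the environment modules which MPI installation the mpirun command resolves to; it should report Intel MPI:

> which mpirun
> mpirun --version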

ANSYS Lumerical solvers have so far only been tested on the CoolMUC-2 and CoolMUC-4 (cm4_inter_large_mem) Linux Cluster partitions.
If you need to run ANSYS Lumerical solvers on any other LRZ HPC system, please file a corresponding LRZ Service Request.