ANSYS Lumerical (Optics / Photonics)

ANSYS Lumerical is a complete photonics simulation software solution that enables the design of photonic components, circuits, and systems. Device- and system-level tools allow designers to model interacting optical, electrical, and thermal effects. Flexible interoperability between products enables a variety of workflows that combine device multiphysics and photonic circuit simulation with third-party design automation and productivity tools.

Further information about ANSYS Lumerical, licensing of the ANSYS software and related terms of software usage at LRZ, the ANSYS mailing list, access to the ANSYS software documentation and LRZ user support can be found on the main ANSYS documentation page.

Licensing

With regard to licensing, ANSYS Lumerical is still an exception among the ANSYS software provided at LRZ. ANSYS Inc. has not yet included ANSYS Lumerical licenses in the general ANSYS Academic Campus license, so ANSYS Lumerical licenses can NOT be provided by LRZ. Interested institutions have to obtain licenses for ANSYS Lumerical themselves (e.g. through CADFEM GmbH, Grafing).
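
Once your institution has obtained its own licenses, the Lumerical solvers must be able to reach the corresponding license server. Depending on the local installation, this is typically done via the standard ANSYS licensing environment variable; the server name and port below are purely hypothetical placeholders:

> export ANSYSLMD_LICENSE_FILE=1055@license.example-institute.de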

SSH Setup

Please note that the steps for the LRZ-internal SSH key setup, as described on the main ANSYS documentation page, need to be followed for the ANSYS Lumerical solvers as well.
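
In essence, a passwordless SSH key pair is generated and registered for your own account, so that the solver processes can reach all allocated compute nodes. A minimal sketch is given below; please follow the authoritative, complete steps on the main ANSYS documentation page:

> ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
> cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
> chmod 600 ~/.ssh/authorized_keys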

Getting Started

Once you are logged into one of the LRZ cluster systems, you can check the availability (i.e. the installed versions) of the ANSYS Lumerical software by:

> module avail lumerical

Load the preferred ANSYS Lumerical version environment module, e.g.:

> module load lumerical/2024.R2
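
To verify that the module environment has been set up as expected, you can list the loaded modules and locate the FDTD engine binary that is used in the job script below:

> module list
> which fdtd-engine-impi-lcl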

CoolMUC-4: ANSYS Lumerical Job Submission on LRZ Linux Clusters running SLES15 using SLURM

In the following, an example SLURM batch script for the ANSYS Lumerical FDTD solver on CoolMUC-4 (SLURM cluster = serial, partition = serial_std) is provided.

Please use this large and powerful compute resource with a carefully specified number of CPU cores and a realistic amount of requested memory per CM4 compute node. Do not waste CM4 compute resources, and please be fair to other CM4 cluster users.

#!/bin/bash
#SBATCH -o ./myjob.%j.%N.out
#SBATCH -D ./
#SBATCH -J lumerical_cm4_serial
#SBATCH --clusters=serial
#SBATCH --partition=serial_std
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=10
# --- Less than or equal to the maximum number of CPU cores of a single CM4 cluster node ---
#SBATCH --mem=250G
# --- Realistic assumption for the memory requirement of the task, proportional to the number of CPU cores used ---
#SBATCH --get-user-env
#SBATCH --mail-type=NONE
#SBATCH --mail-user=Max.Mustermann@lrz.de
#SBATCH --no-requeue
#SBATCH --export=NONE
#SBATCH --time=00:10:00
#-----------------------
module load slurm_setup

#----------- Generation of the ANSYS Lumerical machines file -----------------
machines=""
network=""
myhostname=$(hostname)
machinefile=machinefile.txt
if [ -f $machinefile ]; then
  rm $machinefile
fi
for i in $(scontrol show hostnames=$SLURM_JOB_NODELIST); do
  if [ "$i" = "$myhostname" ]; then
    machines=$i:$SLURM_NTASKS_PER_NODE
  else
    machines=$i$network:$SLURM_NTASKS_PER_NODE
  fi
  echo $machines >> $machinefile
done
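# The resulting machinefile contains one "<hostname>:<tasks per node>" entry per
# allocated node; for this single-node job it holds one line such as
# "cm4r01c01s01:10" (hypothetical node name).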
cat $machinefile
echo "-------------------------------------------------------------------------"

module avail lumerical
module load lumerical/2024.R2
module list 

# cat /proc/cpuinfo
echo ========================================== ANSYS Lumerical Start ==============================================
# For a later check of the correctness of the supplied ANSYS Lumerical FDTD command, it is echoed
# to stdout, so that it can be reviewed afterwards in the job output file:
echo mpirun -machinefile $machinefile fdtd-engine-impi-lcl -t 1 test.fsp
mpirun -machinefile $machinefile fdtd-engine-impi-lcl -t 1 test.fsp
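# mpirun starts as many solver processes per node as specified in the machinefile
# (here: 10, from --ntasks-per-node); the option "-t 1" runs 1 thread per process.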
# Please do not forget to replace test.fsp with the correct name of your own FSP project file!
echo ========================================== ANSYS Lumerical Stop ===============================================

Assuming that the above SLURM script has been saved under the filename "lumerical_cm4_serial.sh", the SLURM batch job has to be submitted by issuing the following command on one of the Linux Cluster login nodes:

sbatch lumerical_cm4_serial.sh
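
The status of the submitted job can then be monitored (and the job cancelled, if necessary) with the usual SLURM commands. Note that the --clusters option has to match the cluster specified in the job script; the job ID below is a hypothetical example:

> squeue --clusters=serial --user=$USER
> scancel --clusters=serial 123456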

General Concluding Remarks

On the LRZ cluster systems, only the usage of Intel MPI is supported and known to work properly with ANSYS Lumerical.

ANSYS Lumerical solvers have so far only been tested on the CoolMUC-4 (serial / cm4_inter_large_mem / cm4_tiny) Linux Cluster partitions.
If you have the need to run ANSYS Lumerical solvers on any other LRZ HPC system, please file a corresponding LRZ Service Request.