General

Intel oneAPI is an open, standards-based and unified programming model that encompasses the Intel compilers and libraries. The compiler stack (C, C++ and Fortran) includes the Classic compilers, which are deprecated and will soon be removed, and the modern LLVM-based compilers, which also support heterogeneous offloading to GPUs via OpenMP and SYCL. The most notable library is the Math Kernel Library (MKL), which accelerates BLAS, LAPACK, sparse solvers, fast Fourier transforms (FFT), random number generators (RNG), summary statistics, data fitting, and vector math.

Intel oneAPI is shipped as toolkits, of which the following are installed:

  • oneAPI Base Toolkit
  • oneAPI HPC Toolkit

The following compilers are available:

Compiler | Classic | Modern (LLVM) | Intel-MPI (classic) | Intel-MPI (LLVM) until 2023.1.0 | Intel-MPI (LLVM) from 2023.2.0
Fortran  | ifort   | ifx           | mpiifort            | mpiifort -fc=ifx                | mpiifx
C        | icc     | icx           | mpiicc              | mpiicc -cc=icx                  | mpiicx
C++      | icpc    | icpx          | mpiicpc             | mpiicpc -cxx=icpx               | mpiicpx
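
To check which underlying compiler a wrapper from the table will invoke, the Intel MPI wrappers accept a -show option that prints the full command line without compiling anything. For example, after loading the intel and impi modules described below:
mpiifort -show           # shows the ifort-based command line (classic wrapper)
mpiifort -fc=ifx -show   # shows the ifx-based command line (until 2023.1.0, see table)
mpiifx -show             # LLVM wrapper (from 2023.2.0)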

Intel Classic/LLVM Compiler recommendation

If the Intel Classic Compilers are needed: choose oneAPI 2023 or older.

If the Intel LLVM Compilers are needed: choose oneAPI 2024 or newer.

Although the Intel LLVM compilers are available in some older versions, we neither recommend nor support using them with oneAPI 2023 or older, or mixing Intel Classic and LLVM compilers. When the modules are loaded, environment variables are set to use the Intel Classic Compilers for oneAPI 2023 or older and the Intel LLVM compilers for oneAPI 2024 or newer, and all provided software and libraries are compiled accordingly.


Intel Classic Compiler removal

From oneAPI 2024.0.0 onwards, icc and icpc are no longer available, and consequently mpiicc and mpiicpc no longer work. ifort and mpiifort are scheduled for removal in oneAPI 2025.0.0.

Check out Intel's Porting Guide for the transition from the Intel Classic Compilers to the Intel LLVM Compilers. For many projects the switch amounts to swapping the compiler names in the build system, as sketched below.
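
A minimal sketch of such a swap, assuming a Makefile or CMake build that honors the standard compiler variables:
# Makefile-based build
make CC=icx CXX=icpx FC=ifx

# CMake-based build
cmake -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DCMAKE_Fortran_COMPILER=ifx ..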


The individual components of the Toolkits may have version numbers different from the Toolkit version number:

Intel oneAPI component             | individual module   | components in oneAPI v2021.4.0 | components in oneAPI v2022.3.1 | components in oneAPI v2023.2.1 | components in oneAPI v2024.2.1
C/C++ LLVM                         | compiler            | 2021.4.0 | 2022.2.1 | 2023.2.1  | 2024.2.1
C/C++ classic                      | compiler            | 2021.4.0 | 2021.7.1 | 2021.10.0 | -
Fortran LLVM                       | compiler            | 2021.4.0 | 2022.2.1 | 2023.2.0  | 2024.2.1
Fortran classic                    | compiler            | 2021.4.0 | 2021.7.1 | 2021.10.0 | 2021.13.1
Intel MPI                          | impi                | 2021.4.0 | 2021.7.1 | 2021.10.0 | 2021.13.0
MKL                                | mkl                 | 2021.4.0 | 2022.2.1 | 2023.2.0  | 2024.2.0
VTUNE                              | vtune               | 2021.7.1 | 2022.4.1 | 2023.2.0  | 2024.2.0
DPC++ compatibility tool           | dpct                | 2021.5.0 | 2022.2.1 | 2023.2.0  | 2024.2.0
Inspector                          | inspector           | 2021.4.0 | 2022.3.1 | 2023.2.0  | -
Advisor                            | advisor             | 2021.4.0 | 2022.3.1 | 2023.2.0  | 2024.2.0
Collective Communications Library  | ccl                 | 2021.4.0 | 2021.7.1 | 2021.10.0 | 2021.13.1
Deep Neural Networks Library       | dnnl                | 2021.4.0 | 2022.2.1 | 2023.2.0  | 2024.2.0
ITAC                               | itac                | 2021.4.0 | 2021.7.1 | 2021.10.0 | -
TBB                                | tbb                 | 2021.4.0 | 2021.7.1 | 2021.10.0 | 2021.13.0
IPP                                | intel_ipp_intel64   | 2021.4.0 | 2021.6.2 | 2021.9.0  | 2021.12.0
Cluster checker                    | clck                | 2021.4.0 | 2021.7.1 | -         | -
DAL                                | dal                 | 2021.4.0 | 2021.7.1 | 2023.2.0  | -
Video Processing Library (oneVPL)  | vpl                 | 2021.6.0 | 2022.2.5 | -         | -
IPP Crypto (oneIPP-cp)             | intel_ippcp_intel64 | 2021.4.0 | 2021.6.2 | 2021.9.0  | 2021.12.0


Policy on oneAPI versions

Intel ships an annual major release of oneAPI (e.g. 2021, 2022, etc.). We try to keep up with newer releases; however, superseded minor releases will be removed from time to time, and only the latest minor release of each major release will be kept, e.g. 2021.4 for 2021, 2022.3 for 2022, and so on.

Loading a specific version

When loading oneAPI, it is recommended to load a specific version. The version number is X.Y.Z, where X is the major version, Y is the minor version and Z is the bugfix version. Do not load X.Y.Z but only X.Y, e.g. 2021.4, 2022.3, 2023.1 or 2024.0, because whenever a bugfix release (where just Z differs) is installed, the old version might be removed without notice.
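
For example (the exact versions available may differ):
module load intel/2023.1     # good: follows the latest bugfix release of 2023.1
module load intel/2023.1.0   # avoid: may disappear when a newer bugfix release is installed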


Note on Optimizations

Intel Compilers and Libraries are a very popular choice for both workstation and HPC use cases, and a lot of scientific software is developed with first-class support for them. Despite the vendor name, compiled programs work well on platforms with both Intel and AMD CPUs. Nevertheless, care should be taken with user-compiled software, and various compiler variants (including AOCC/AOCL, which might be a bit faster on AMD platforms like LiCCA) should be tested and benchmarked, especially if a large number of jobs are going to be submitted. This ensures that HPC resources are used in the most economical way.

Recommended march/mtune combinations for Intel Classic Compilers (icc,icpc,ifort) on AMD CPUs
# happens to run well on most processors supporting AVX2 instructions.
icc -O3 -march=core-avx2 -mtune=core-avx2 ...
icpc -O3 -march=core-avx2 -mtune=core-avx2 ...
ifort -O3 -march=core-avx2 -mtune=core-avx2 ... 
Fallback for Intel Classic Compilers (icc,icpc,ifort) on AMD CPUs
# In case the above fails
icc -O3 -mavx2 -mfma -mtune=core-avx2 ...
icpc -O3 -mavx2 -mfma -mtune=core-avx2 ...
ifort -O3 -mavx2 -mfma -mtune=core-avx2 ... 
Recommended march/mtune combinations for Intel LLVM Compilers (icx,icpx,ifx) on AMD CPUs
icx -O3 -march=znver3 -mtune=znver3 ...
icpx -O3 -march=znver3 -mtune=znver3 ...
ifx -O3 -march=znver3 -mtune=znver3 ...
Fallback march/mtune combinations for Intel LLVM Compilers (icx,icpx,ifx) on AMD CPUs
# In case older CPU generations also need to be supported
icx -O3 -march=x86-64-v3 -mtune=znver3 ...
icpx -O3 -march=x86-64-v3 -mtune=znver3 ...
ifx -O3 -march=x86-64-v3 -mtune=znver3 ... 

In case of numerical errors

Try using -fp-model=strict (disables some floating-point optimizations) and/or disabling FMA using -no-fma when unit tests complain about numerical errors, as in the example below.
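
A minimal sketch with the Intel LLVM compilers (source file names are placeholders):
# stricter floating-point semantics, no FMA contraction
icx -O3 -fp-model=strict -no-fma solver.c -o solver
ifx -O3 -fp-model=strict -no-fma solver.f90 -o solver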

Modules providing Intel Compilers

Complete oneAPI packages (except MPI)

module load intel # will load the default version (last year's release), will change every now and then
module load intel/2021.4 # load a specific version (recommended)

Loads the complete set of oneAPI toolkits (as the setvars.sh script would do), but with all MPI components excluded. This should be sufficient for most users.

MPI

There are two flavors: Intel MPI (Lmod module impi) and OpenMPI (Lmod module openmpi). Just load either of them after loading the intel module.

You cannot load both impi and openmpi at the same time. When you try to load a second MPI flavor, the first one will be unloaded automatically by Lmod. 
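
For example (the version number is only for illustration):
module load intel/2023.1
module load impi       # Intel MPI
module load openmpi    # switching to OpenMPI automatically unloads impi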

Errors when using mpirun

The impi modules set a couple of environment variables (I_MPI_PMI_LIBRARY, I_MPI_FABRICS, SLURM_MPI_TYPE and sometimes UCX_TLS, depending on the current node) which might interfere with mpirun and can cause errors. This does not happen when using srun in an sbatch script (recommended), but using mpirun sometimes cannot be avoided (e.g. when running tests after compiling software). In this case you need to unset these variables:

unset I_MPI_PMI_LIBRARY I_MPI_FABRICS FI_PROVIDER UCX_TLS

Since October 2024, I_MPI_PMI_LIBRARY is only set when the module is loaded inside a Slurm job. It is therefore even more important to load the impi module explicitly in a job after purging all modules, as in the job script sketch below.
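
A minimal sbatch sketch under these assumptions (module version, task count and binary name are placeholders):
#!/bin/bash
#SBATCH --ntasks=4

module purge
module load intel/2023.1
module load impi            # loaded inside the job, so the Slurm-related variables are set correctly

srun ./my_mpi_program       # srun is recommended over mpirun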

Individual components

If only individual components are strictly required, they can be loaded after loading the respective intel/individual module.

module load intel/individual # will load the default version (last year's release), will change every now and then
module load intel/individual/2021.4 # load a specific version (recommended)

Afterwards all the individual components of the oneAPI Toolkits will become available via module avail. See also the table above on the individual oneAPI component versions.
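
For example (component module names as in the table above; the version is only for illustration):
module load intel/individual/2023.2
module avail                 # now lists the individual oneAPI component modules
module load mkl tbb          # load only the components you need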

Linking against Intel MKL

In order to link your code/software against Intel MKL, make use of Intel's Link Line Advisor. The environment variable MKLROOT will be set by either of the following module commands:

module load intel/2021.4 # using intel compilers
module load intel/individual/2021.4 mkl # using other compilers
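
A sketch of what a resulting link line may look like (dynamic linking, sequential MKL, LP64 interface; the library path layout can differ between oneAPI versions, so always verify the exact line with the Link Line Advisor):
# explicit link line against MKLROOT
icx -O2 myprog.c -o myprog \
    -L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -ldl

# with the Intel compilers, the shorthand -qmkl can be used instead
ifx -O2 myprog.f90 -o myprog -qmkl=sequential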


SYCL C++ applications with support for NVIDIA GPUs

By default, Intel oneAPI only supports Intel GPUs when offloading calculations via SYCL. For the most recent 2023 and 2024 versions of the oneAPI compilers, the Codeplay plugin for NVIDIA GPUs has been installed, so SYCL applications can also be compiled for NVIDIA GPUs, as sketched below.
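
A minimal compile sketch, assuming the Codeplay plugin is active (the source file name is a placeholder):
# compile a SYCL application for NVIDIA GPUs via the CUDA backend
icpx -fsycl -fsycl-targets=nvptx64-nvidia-cuda my_sycl_app.cpp -o my_sycl_app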