RAM disk (tmpfs)

Every Job can make use of a local RAM disk located at /dev/shm, which offers significantly higher performance (both I/O operations per second and bandwidth) than the filesystem on the local SSD. Usage is similar to the local SSD storage (see above). In contrast to disk storage, RAM disk usage counts towards the requested amount of RAM. The maximum size of the RAM disk is limited to approx. 50% of the total RAM per node, i.e. 500G for epyc and epyc-gpu nodes, and 2T for epyc-mem nodes.

If your application itself requires 4G of RAM and will use up to 8G of RAM disk storage, you need to request at least #SBATCH --mem=12G. Failure to do so will result in your Job being terminated by the OOM (Out-Of-Memory) killer.
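Putting this together, a minimal batch script might look as follows. This is a sketch: the application name my_app, the file names input.dat and result.dat, and the requested resources are placeholders to be replaced with your own.

```shell
#!/bin/bash
#SBATCH --partition=epyc
#SBATCH --cpus-per-task=1
#SBATCH --mem=12G            # 4G for the application + 8G for the RAM disk
#SBATCH --time=01:00:00

# Stage input data onto the RAM disk (private to this Job)
cp input.dat /dev/shm/

# Run the application against the RAM disk copy
my_app --input /dev/shm/input.dat --output /dev/shm/result.dat

# Copy results back to permanent storage before the Job ends,
# since the private /dev/shm is deleted when the Job terminates
cp /dev/shm/result.dat "$SLURM_SUBMIT_DIR/"
```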

This directory is private; it is only visible to your Job.

Handling time limit situations for Jobs using the RAM disk

If you are unsure how long your Job will take, it might run into the time limit. Make sure you implement a mechanism to copy important intermediate results back to permanent storage in this case, because the private /dev/shm directory is deleted as soon as the Job ends (whether by timeout or not).
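One way to implement such a mechanism is to have Slurm send a signal to the batch script shortly before the time limit and copy the data back in a trap handler. The sketch below assumes your application (placeholder my_app) writes its intermediate state to /dev/shm/result.dat; adapt the names and the signal lead time to your case.

```shell
#!/bin/bash
#SBATCH --mem=12G
#SBATCH --time=01:00:00
#SBATCH --signal=B:USR1@300   # send SIGUSR1 to the batch shell 5 minutes before the time limit

# Copy intermediate results back when the warning signal arrives
rescue() {
    echo "Time limit approaching, saving intermediate results"
    cp /dev/shm/result.dat "$SLURM_SUBMIT_DIR/"
    exit 0
}
trap rescue USR1

# Run the application in the background and wait,
# so the shell remains free to handle the signal
my_app --output /dev/shm/result.dat &
wait

# Normal completion: copy final results back
cp /dev/shm/result.dat "$SLURM_SUBMIT_DIR/"
```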

Do not submit Jobs with significantly more than 8G per CPU core on the epyc partition. Use the epyc-mem partition for high-memory applications instead.