Every Job can make use of a local RAM disk located at /dev/shm, which has significantly higher performance (both I/O operations per second and bandwidth) than the filesystem on the local SSD disk. The usage is similar to the local SSD storage (see above). This directory is private: it will only be seen by your Job.

Contrary to disk storage, RAM disk storage requirements have to be added to the requested amount of RAM, e.g. with #SBATCH --mem=12G. Failure to do so will result in your Job being terminated by the OOM (Out-Of-Memory) killer. The maximum size of the RAM disk is limited to approx. 50% of the total amount of RAM per node, i.e. 500G for the epyc and epyc-gpu nodes, and 2T for the epyc-mem nodes. Do not submit Jobs with significantly more than 8G per CPU core on the RAM disk (tmpfs) on the epyc partition. Use the epyc-mem partition for high memory applications instead.

Handling timelimit situations for Jobs using the RAM disk

If you are unsure how long your Job will take, it might run into the timelimit. Make sure you implement a mechanism to copy back important intermediate results in this case, because the private /dev/shm directory will be deleted right at the end (timeout or not) of a Job.
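Because RAM disk usage counts toward the Job's memory request, a batch script typically creates a private working directory under /dev/shm, accounts for it in --mem, and copies results back before exiting. A minimal sketch, in which the application name, the 12G figure, and its split between application RAM and RAM disk space are illustrative assumptions:

```shell
#!/bin/bash
#SBATCH --mem=12G              # e.g. 8G application RAM + 4G RAM disk (illustrative split)
#SBATCH --time=01:00:00

# Private scratch directory on the RAM disk; the fallbacks let the
# script also run outside Slurm for testing.
WORKDIR=/dev/shm/${SLURM_JOB_ID:-manual}
DEST=${SLURM_SUBMIT_DIR:-$PWD}
mkdir -p "$WORKDIR"

# ... run the application with its scratch files on the RAM disk ...
# my_app --scratch "$WORKDIR"        # hypothetical application

# Copy results to permanent storage before the Job ends:
# /dev/shm is wiped as soon as the Job terminates.
cp -r "$WORKDIR"/. "$DEST"/
rm -rf "$WORKDIR"
```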
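One way to implement such a copy-back mechanism is Slurm's --signal option, which delivers a warning signal some time before the timelimit is reached, combined with a shell trap. A hedged sketch; the signal choice, the 300-second margin, and the application command are assumptions, not site defaults:

```shell
#!/bin/bash
#SBATCH --signal=B:USR1@300    # send SIGUSR1 to the batch shell 300 s before timeout

WORKDIR=/dev/shm/${SLURM_JOB_ID:-manual}
DEST=${SLURM_SUBMIT_DIR:-$PWD}
mkdir -p "$WORKDIR"

# Rescue intermediate results before /dev/shm is deleted with the Job.
rescue() {
    echo "timelimit approaching: copying intermediate results back"
    cp -r "$WORKDIR"/. "$DEST"/
}
trap rescue USR1

# Run the application in the background and wait on it, so bash can
# handle the signal while the application is still running.
# srun my_app --checkpoint-dir "$WORKDIR" &   # hypothetical application
# wait
```

Note that bash defers signal handling while a foreground child runs, which is why the application is started in the background and waited on.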