One of the advantages of Slurm is the ability to schedule an interactive session, just as you would submit a batch job. Slurm will find a free node with the lightest load and start your session on it. This lets you run GUI versions of software, and also request a node with a GPU. Once the session starts, all of the normal commands should work: virtual environments for Python, the module commands to load a specific module, etc. The main difference is that you will be restricted to the amount of memory set by the --mem parameter and to the length (walltime) of your session set by the --time parameter. For instance, let's request an interactive session on a node, asking for 4 GB of RAM and limiting the session to 4 hours. From one of the login nodes (currently rnd and vleda), type:
srun --pty --mem=4gb --time=4:00:00 /bin/bash
This will start your session on one of our Slurm nodes with 4 GB of RAM and allow the session to last for 4 hours. Please remember to type "exit" when you are done, to free the resources for other users.
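Once the session starts, you can confirm what was actually allocated. A minimal sketch, using standard Slurm environment variables (these are set by Slurm inside a job; outside a session the fallback text is printed instead):

```shell
# Slurm exports variables describing the current allocation.
# Outside a Slurm session they are unset, so defaults are shown.
echo "Job ID:      ${SLURM_JOB_ID:-not in a Slurm session}"
echo "Node:        ${SLURMD_NODENAME:-not set}"
echo "Memory (MB): ${SLURM_MEM_PER_NODE:-not set}"
```

Seeing a job ID here is a quick sanity check that you are on a compute node rather than still on the login node.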
If you need other resources, you can also request them from the srun command. Suppose you want a node with one V100 GPU and 4 processors; the command would be:
srun --pty --gres=gpu:v100:1 --cpus-per-task=4 --mem=4gb --time=4:00:00 /bin/bash
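Inside the GPU session, a quick check can confirm the GPU and CPU allocation. A sketch, assuming a typical Slurm GPU setup where Slurm sets CUDA_VISIBLE_DEVICES when a GPU is granted (outside a session the fallback text is printed):

```shell
# CUDA_VISIBLE_DEVICES lists the GPU indices Slurm assigned to this job;
# SLURM_CPUS_PER_TASK reflects the --cpus-per-task request.
echo "GPUs visible: ${CUDA_VISIBLE_DEVICES:-none allocated}"
echo "CPUs:         ${SLURM_CPUS_PER_TASK:-not set}"
```

You can also run nvidia-smi in the session to see the GPU hardware itself, if the NVIDIA driver tools are installed on the node.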
Slurm has many, many options for controlling the number of CPUs, threads, cores, GPUs, and so on. Most of them are not supported on our grid, but NYU HPC runs a much larger cluster with more possible configurations.
As we move users over to Slurm, we will be turning off some of our dedicated interactive nodes and having users request interactive sessions through Slurm instead.