Interactive Use

To use an HPC system interactively – that is, while sitting at the terminal and interacting live with allocated resources – Slurm provides a simple two-stage process.

First, we must create an allocation – a reservation of the specific resources we need. This is done using the salloc command, like this:

[test.user@cl1 imb]$ salloc -n 8 --ntasks-per-node=1
salloc: Granted job allocation 134

Now that an allocation has been granted, we have access to the specified resources. Note that the resource specification takes exactly the same parameters as for batch use – in this case we have asked for 8 tasks (processes), distributed one per node.
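For comparison, a minimal sketch of the same request expressed as batch directives (assuming the usual #SBATCH form in a job script) would be:

#!/bin/bash
#SBATCH -n 8                    # 8 tasks (processes) in total
#SBATCH --ntasks-per-node=1     # distributed one per node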

Now that we are ‘inside’ an allocation, we can use the srun command to run work on the allocated resources, for example:

[test.user@cl1 imb]$ srun hostname
ccs0129
ccs0130
ccs0126
ccs0133
ccs0125
ccs0121
ccs0134
ccs0122

The above output shows how, by default, srun executes the given command once per allocated task. Arguments can be passed to srun to change this behaviour, for example to use only a subset of the allocation, as sketched below.
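As a minimal sketch, reusing the hostname command from above, we could restrict the launch to two of the eight allocated tasks:

[test.user@cl1 imb]$ srun -n 2 hostname

This runs hostname on just two tasks rather than all eight.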

We could also launch an MPI job here if we wished: we would load the software modules just as we do in a batch script and call mpirun in the same way. This can be useful when debugging code.
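As a hedged sketch – the module names and the my_mpi_program executable below are placeholders for whatever your site and code actually use – this might look like:

[test.user@cl1 imb]$ module load compiler/intel mpi/intel    # placeholder modules; load the same ones as in your batch script
[test.user@cl1 imb]$ mpirun -np 8 ./my_mpi_program           # placeholder executable, run across the 8 allocated tasks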

It is also possible to use srun to launch an interactive shell process for some heavy processing on a compute node, for example:

[test.user@cl1 imb]$ srun -n 2 --pty bash

This would move us to a shell on a compute node.
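As an illustrative session (the compute node name here is only an example), we land in a shell on one of the allocated nodes and can return to the login node by exiting it:

[test.user@ccs0121 imb]$ hostname
ccs0121
[test.user@ccs0121 imb]$ exit
[test.user@cl1 imb]$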