# PBS Pro to Slurm Very Quick Reference
## Commands
| PBS | Slurm | Description |
|---|---|---|
| qsub script_file | sbatch script_file | Submit a job from script_file |
| qsub -I | salloc [options] | Request an interactive job |
| qdel 123 | scancel 123 | Cancel job 123 |
| qstat -u [username] | squeue -u [username] | List the user's pending and running jobs |
| qstat -f 123 or qstat -fx 123 | scontrol show job 123 | Show job details (-x in PBS also shows finished jobs) |
| qstat queue_name | sinfo or sinfo -s | Cluster status with partition (queue) list; with -s, a summarised partition list that is shorter and simpler to interpret |
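For orientation, a minimal sketch of the same operations on the Slurm side; the script name job.sh, the job ID 123, and the interactive resource sizes are placeholders:

```bash
# Submit a batch script; sbatch prints the job ID it assigns
sbatch job.sh

# Request an interactive allocation (1 task for 30 minutes, purely illustrative)
salloc -n 1 -t 30

# List only your own pending and running jobs
squeue -u $USER

# Show full details of job 123
scontrol show job 123

# Summarised partition (queue) list
sinfo -s

# Cancel job 123
scancel 123
```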
## Job Specification
| PBS | Slurm | Description |
|---|---|---|
| #PBS | #SBATCH | Scheduler directive |
| -q queue_name | -p queue_name | Submit to queue (partition) 'queue_name' |
| -l select=4:ncpus=16 (request 4 chunks, each with 16 CPUs) | -n 64 | Processor (task) count of 64 |
| -l walltime=hh:mm:ss | -t [minutes] or -t [days-hh:mm:ss] | Maximum wall-clock run time |
| -o file_name | -o file_name | STDOUT output file |
| -e file_name | -e file_name | STDERR output file |
| -N job_name | --job-name=job_name | Job name |
| -l place=excl | --exclusive | Exclusive node usage for this job, i.e. no other jobs on the same nodes |
| -l mem=1gb | --mem-per-cpu=128M or --mem-per-cpu=1G | Memory requirement (Slurm's --mem-per-cpu is per allocated CPU) |
| -l nodes=x:ppn=16 | --ntasks-per-node=16 | Processes (tasks) per node |
| -P proj_code | --account=proj_code | Project account to charge the job to |
| -t 1-10 | --array=array_spec | Job array declaration |
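As a sketch of how the directives above fit together, here is a Slurm batch script header with the corresponding PBS directive noted above each line; the queue name workq, job name, output file names, project code, and resource sizes are illustrative placeholders:

```bash
#!/bin/bash
# PBS: -q workq
#SBATCH -p workq
# PBS: -l select=4:ncpus=16  (4 chunks x 16 CPUs = 64 tasks)
#SBATCH -n 64
# PBS: -l walltime=0:30:00
#SBATCH -t 0-00:30:00
# PBS: -N example_job
#SBATCH --job-name=example_job
# PBS: -o example.out / -e example.err
#SBATCH -o example.out
#SBATCH -e example.err
# PBS: -P proj_code
#SBATCH --account=proj_code

# Launch the parallel executable (placeholder name)
srun ./my_program
```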
## Job Environment Variables
| PBS | Slurm | Description |
|---|---|---|
| $PBS_JOBID | $SLURM_JOBID | Job ID |
| $PBS_O_WORKDIR | $SLURM_SUBMIT_DIR | Submit directory |
| | $SLURM_ARRAY_JOB_ID | Job array parent job ID |
| $PBS_ARRAYID | $SLURM_ARRAY_TASK_ID | Job array index |
| $PBS_O_HOST | $SLURM_SUBMIT_HOST | Submission host |
| $PBS_NODEFILE | $SLURM_JOB_NODELIST | Allocated compute nodes |
| | $SLURM_NTASKS | Number of processors allocated (mpirun can pick this up from Slurm automatically; it does not need to be specified) |
| $PBS_QUEUE | $SLURM_JOB_PARTITION | Queue (partition) the job is running in |
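A short sketch of using these variables inside a Slurm batch script; the executable name ./my_program and the resource sizes are placeholders, and the cd is shown only to mirror the common PBS habit of changing to $PBS_O_WORKDIR (sbatch jobs already start in the submission directory):

```bash
#!/bin/bash
#SBATCH -n 4
#SBATCH -t 10

# Report where and how the job is running
echo "Job ${SLURM_JOBID} was submitted from ${SLURM_SUBMIT_HOST}"
echo "Partition: ${SLURM_JOB_PARTITION}  Nodes: ${SLURM_JOB_NODELIST}"
echo "Tasks allocated: ${SLURM_NTASKS}"

# PBS scripts commonly 'cd $PBS_O_WORKDIR'; Slurm already starts here,
# so this line only mirrors that habit
cd "${SLURM_SUBMIT_DIR}"

# srun (and most mpirun builds) pick the task count up from Slurm automatically
srun ./my_program
```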
Much more detail is available in the Slurm documentation.