PBS Pro to Slurm Very Quick Reference

Commands

PBS                   Slurm                    Description
qsub script_file      sbatch script_file       Submit a job from script_file
qsub -I               salloc [options]         Request an interactive job
qdel 123              scancel 123              Cancel job 123
qstat -u [username]   squeue -u [username]     List the user's pending and running jobs
qstat -f 123          scontrol show job 123    Show job details
qstat -fx 123                                  As above; -x in PBS includes finished jobs
qstat queue_name      sinfo                    Cluster status with partition (queue) list
                      sinfo -s                 Summarised partition list, which is shorter and simpler to interpret
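For example, a typical submit-and-monitor session translates as follows (a minimal sketch; the script name job.sh and job ID 123 are placeholders):

    sbatch job.sh            # PBS: qsub job.sh
    squeue -u $USER          # PBS: qstat -u $USER
    scontrol show job 123    # PBS: qstat -f 123
    scancel 123              # PBS: qdel 123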

Job Specification

PBS                    Slurm                    Description
#PBS                   #SBATCH                  Scheduler directive
-q queue_name          -p queue_name            Submit to queue/partition queue_name
-l select=4:ncpus=16   -n 64                    Processor count of 64 (PBS: 4 nodes, each with 16 CPUs)
-l walltime=h:mm:ss    -t [minutes]             Max wall run time
                       or -t [days-hh:mm:ss]
-o file_name           -o file_name             STDOUT output file
-e file_name           -e file_name             STDERR output file
-N job_name            --job-name=job_name      Job name
-l place=excl          --exclusive              Exclusive node usage for this job, i.e. no other jobs on the same nodes
-l mem=1gb             --mem-per-cpu=128M       Memory requirement (Slurm's value is per CPU)
                       or --mem-per-cpu=1G
-l nodes=x:ppn=16      --ntasks-per-node=16     Processes per node
-P proj_code           --account=proj_code      Project account to charge the job to
-t 1-10                --array=array_spec       Job array declaration
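Putting the directives together, a hypothetical PBS script and its Slurm translation could look like this (the queue workq, account proj_code, and program my_app are illustrative placeholders).

PBS version:

    #!/bin/bash
    #PBS -q workq
    #PBS -l select=4:ncpus=16
    #PBS -l walltime=01:00:00
    #PBS -N my_job
    #PBS -P proj_code
    cd $PBS_O_WORKDIR
    mpirun -np 64 ./my_app

Slurm equivalent:

    #!/bin/bash
    #SBATCH -p workq
    #SBATCH -n 64
    #SBATCH --ntasks-per-node=16
    #SBATCH -t 01:00:00
    #SBATCH --job-name=my_job
    #SBATCH --account=proj_code
    # Slurm starts the job in the submit directory, so no cd is needed
    mpirun ./my_app    # task count is taken from the Slurm allocation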

Job Environment Variables

PBS               Slurm                   Description
$PBS_JOBID        $SLURM_JOBID            Job ID
$PBS_O_WORKDIR    $SLURM_SUBMIT_DIR       Submit directory
                  $SLURM_ARRAY_JOB_ID     Job array parent ID
$PBS_ARRAYID      $SLURM_ARRAY_TASK_ID    Job array index
$PBS_O_HOST       $SLURM_SUBMIT_HOST      Submission host
$PBS_NODEFILE     $SLURM_JOB_NODELIST     Allocated compute nodes
                  $SLURM_NTASKS           Number of processors allocated
$PBS_QUEUE        $SLURM_JOB_PARTITION    Queue (partition) the job is running in

mpirun can pick the task count up from Slurm automatically, so it does not need to be specified on the mpirun command line.
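As an illustration of how these variables are used inside a batch script (my_app is a placeholder):

    #!/bin/bash
    #SBATCH --job-name=env_demo
    echo "Job $SLURM_JOBID submitted from $SLURM_SUBMIT_HOST"
    echo "Partition: $SLURM_JOB_PARTITION  Nodes: $SLURM_JOB_NODELIST"
    echo "Running $SLURM_NTASKS tasks from $SLURM_SUBMIT_DIR"
    # In an array job, every task shares $SLURM_ARRAY_JOB_ID and gets
    # its own $SLURM_ARRAY_TASK_ID, e.g. to select an input file:
    ./my_app "input.$SLURM_ARRAY_TASK_ID"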


Much more detail is available in the Slurm documentation.