SGE to Slurm Very Quick Reference

Commands

| SGE              | Slurm              | Description                              |
|------------------|--------------------|------------------------------------------|
| qsub script_file | sbatch script_file | Submit a job from script_file            |
| qdel 123         | scancel 123        | Cancel job 123                           |
| qstat            | squeue             | List the user's pending and running jobs |
| qhost -q         | sinfo or sinfo -s  | Cluster status with partition (queue) list; with '-s', a summarised partition list, which is shorter and simpler to interpret |
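For example, a typical submit/monitor/cancel workflow translates directly; 'myjob.sh' and job ID 123 below are placeholder values:

    # Submit a job from a script (placeholder file name myjob.sh)
    sbatch myjob.sh        # SGE equivalent: qsub myjob.sh

    # List this user's pending and running jobs
    squeue                 # SGE equivalent: qstat

    # Cancel job 123
    scancel 123            # SGE equivalent: qdel 123

    # Summarised cluster status with partition list
    sinfo -s               # SGE equivalent: qhost -q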

Job Specification

| SGE                                 | Slurm                                  | Description                              |
|-------------------------------------|----------------------------------------|------------------------------------------|
| #$                                  | #SBATCH                                | Scheduler directive                      |
| -q queue_name                       | -p queue_name                          | Submit to queue (partition) 'queue_name' |
| -pe pe_name 64                      | -n 64                                  | Processor count of 64                    |
| -l h_rt=[seconds]                   | -t [minutes] or -t [days-hh:mm:ss]     | Max wall run time                        |
| -o file_name                        | -o file_name                           | STDOUT output file                       |
| -e file_name                        | -e file_name                           | STDERR output file                       |
| -N job_name                         | --job-name=job_name                    | Job name                                 |
| -l exclusive                        | --exclusive                            | Exclusive node usage for this job, i.e. no other jobs on the same nodes |
| -l mem_free=128M or -l mem_free=1G  | --mem-per-cpu=128M or --mem-per-cpu=1G | Memory requirement                       |
| Set by parallel environment config  | --ntasks-per-node=16                   | Processes per node                       |
| -P proj_code                        | --account=proj_code                    | Project account to charge the job to     |
| -t "[array_spec]"                   | --array=array_spec                     | Job array declaration                    |

Job Environment Variables

| SGE                               | Slurm                      | Description                    |
|-----------------------------------|----------------------------|--------------------------------|
| $JOB_ID                           | $SLURM_JOBID               | Job ID                         |
| $SGE_O_WORKDIR                    | $SLURM_SUBMIT_DIR          | Submit directory               |
| $JOB_ID                           | $SLURM_ARRAY_JOB_ID        | Job array parent ID            |
| $SGE_TASK_ID                      | $SLURM_ARRAY_TASK_ID       | Job array index                |
| $SGE_O_HOST                       | $SLURM_SUBMIT_HOST         | Submission host                |
| cat $PE_HOSTFILE (this is a file) | $SLURM_JOB_NODELIST        | Allocated compute nodes        |
| $NSLOTS                           | $SLURM_NTASKS (mpirun can pick this up from Slurm automatically; it does not need to be specified) | Number of processors allocated |
| $QUEUE                            | $SLURM_JOB_PARTITION       | Queue                          |
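As a sketch of how these variables are typically used, the fragment below combines them in a Slurm array job; './my_prog' and the input file naming are hypothetical:

    #!/bin/bash
    #SBATCH --array=1-10                           # SGE: -t "1-10"

    # Change to the directory the job was submitted from (SGE: $SGE_O_WORKDIR)
    cd "$SLURM_SUBMIT_DIR"

    echo "Job $SLURM_JOBID (array parent $SLURM_ARRAY_JOB_ID)"
    echo "Submitted from $SLURM_SUBMIT_HOST to partition $SLURM_JOB_PARTITION"
    echo "Allocated nodes: $SLURM_JOB_NODELIST"    # SGE: cat $PE_HOSTFILE
    echo "Tasks: $SLURM_NTASKS"                    # SGE: $NSLOTS

    # One input file per array task (SGE: $SGE_TASK_ID)
    ./my_prog "input.$SLURM_ARRAY_TASK_ID"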

Much more detail is available in the Slurm documentation.