LSF to Slurm Very Quick Reference

Commands

| LSF | Slurm | Description |
|-----|-------|-------------|
| bsub < script_file | sbatch script_file | Submit a job from script_file |
| bkill 123 | scancel 123 | Cancel job 123 |
| bjobs | squeue | List user's pending and running jobs |
| bqueues | sinfo or sinfo -s | Cluster status with partition (queue) list |

With -s, sinfo prints a summarised partition list, which is shorter and simpler to interpret.
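
For example, a typical submit, monitor and cancel cycle looks like this on each scheduler; the script name and job ID below are placeholders:

```bash
# Submit a batch script (LSF reads it from stdin; Slurm takes it as an argument)
bsub < myjob.sh      # LSF
sbatch myjob.sh      # Slurm

# List your own pending and running jobs
bjobs                # LSF
squeue -u $USER      # Slurm

# Cancel a job by ID
bkill 12345          # LSF
scancel 12345        # Slurm

# Show partitions (queues); -s gives the summarised view
sinfo -s
```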

Job Specification

| LSF | Slurm | Description |
|-----|-------|-------------|
| #BSUB | #SBATCH | Scheduler directive |
| -q queue_name | -p queue_name | Submit to queue (partition) queue_name |
| -n 64 | -n 64 | Processor count of 64 |
| -W [hh:mm:ss] | -t [minutes] or -t [days-hh:mm:ss] | Max wall run time |
| -o file_name | -o file_name | STDOUT output file |
| -e file_name | -e file_name | STDERR output file |
| -J job_name | --job-name=job_name | Job name |
| -x | --exclusive | Exclusive node usage for this job, i.e. no other jobs on the same nodes |
| -M 128 | --mem-per-cpu=128M or --mem-per-cpu=1G | Memory requirement |
| -R "span[ptile=16]" | --ntasks-per-node=16 | Processes per node |
| -P proj_code | --account=proj_code | Project account to charge the job to |
| -J "job_name[array_spec]" | --array=array_spec | Job array declaration |
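
Putting the directives together, a minimal Slurm batch script might look like the sketch below; the queue name, project code, resource numbers and application command are placeholders to replace with your own values:

```bash
#!/bin/bash
#SBATCH --job-name=my_job            # LSF: -J my_job
#SBATCH -p queue_name                # LSF: -q queue_name
#SBATCH --account=proj_code          # LSF: -P proj_code
#SBATCH -n 64                        # LSF: -n 64
#SBATCH --ntasks-per-node=16         # LSF: -R "span[ptile=16]"
#SBATCH -t 02:00:00                  # LSF: -W (max wall run time, here 2 hours)
#SBATCH --mem-per-cpu=1G             # LSF: -M (memory requirement)
#SBATCH -o my_job.%j.out             # STDOUT file; %j expands to the job ID
#SBATCH -e my_job.%j.err             # STDERR file
#SBATCH --exclusive                  # LSF: -x

# mpirun/srun pick the task count up from Slurm, so no -np is needed
srun ./my_application
```

Submit it with sbatch script_file, as shown in the Commands table above.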

Job Environment Variables

| LSF | Slurm | Description |
|-----|-------|-------------|
| $LSB_JOBID | $SLURM_JOBID | Job ID |
| $LSB_SUBCWD | $SLURM_SUBMIT_DIR | Submit directory |
| $LSB_JOBID | $SLURM_ARRAY_JOB_ID | Job array parent ID |
| $LSB_JOBINDEX | $SLURM_ARRAY_TASK_ID | Job array index |
| $LSB_SUB_HOST | $SLURM_SUBMIT_HOST | Submission host |
| $LSB_HOSTS or $LSB_MCPU_HOST | $SLURM_JOB_NODELIST | Allocated compute nodes |
| $LSB_DJOB_NUMPROC | $SLURM_NTASKS | Number of processors allocated (mpirun picks this up from Slurm automatically; it does not need to be specified) |
| (none) | $SLURM_JOB_PARTITION | Queue (partition) |
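
As a minimal sketch, these variables can be used directly inside a job script; the file naming in the array example is purely illustrative:

```bash
# Report where and how the job is running
echo "Job ID:            $SLURM_JOBID"
echo "Submit directory:  $SLURM_SUBMIT_DIR"
echo "Submit host:       $SLURM_SUBMIT_HOST"
echo "Allocated nodes:   $SLURM_JOB_NODELIST"
echo "Tasks allocated:   $SLURM_NTASKS"
echo "Partition (queue): $SLURM_JOB_PARTITION"

# In an array job (e.g. #SBATCH --array=1-10), select a per-task input file
input="input.${SLURM_ARRAY_TASK_ID}.dat"   # illustrative file name only
./my_application "$input"
```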


Much more detail is available in the Slurm documentation.