{"id":33,"date":"2015-07-21T10:52:58","date_gmt":"2015-07-21T09:52:58","guid":{"rendered":"https:\/\/portal.supercomputing.wales\/?page_id=33"},"modified":"2018-06-29T15:30:43","modified_gmt":"2018-06-29T14:30:43","slug":"slurm","status":"publish","type":"page","link":"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/","title":{"rendered":"Slurm"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"alignright wp-image-49\" src=\"https:\/\/portal.supercomputing.wales\/wp-content\/uploads\/2015\/07\/slurm_logo.png\" alt=\"slurm_logo\" width=\"218\" height=\"200\" \/>All SCW compute systems run a single software stack in which the job scheduler is SchedMD&#8217;s Simple Linux Utility for Resource Management (Slurm).<\/p>\n<p>Slurm is a scalable, resilient, feature-rich, customisable, open-source package used on many of the world&#8217;s most powerful supercomputers. Using Slurm is similar to using other job schedulers: the user submits a job (batch) script to Slurm, which schedules the job to run on the specified partition; each partition maps to specific hardware.
For SCW, Slurm provides node monitoring, memory allocation, and job control &amp; accounting capabilities that benefit users and system administrators alike.<\/p>\n<ul>\n<li><a href=\"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/submitting-jobs\/\">Slurm: Submitting, Monitoring and Killing Jobs<\/a> &#8211; general use of the Slurm environment<\/li>\n<li><a href=\"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/migrating-jobs\/\">More On Slurm Jobs<\/a> &#8211; including Slurm specifics and migration information<\/li>\n<li>\n<p id=\"page-title\">Advanced Slurm:<\/p>\n<ul>\n<li><a href=\"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/batch-submission-of-mpi-and-openmp\/\">MPI + OpenMP Job Submission Parameters<\/a><\/li>\n<li><a href=\"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/interactive-use-job-arrays\/interactive-use\/\">Interactive Use<\/a><\/li>\n<li><a href=\"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/interactive-use-job-arrays\/x11-gui-forwarding\/\">X11 GUI Forwarding<\/a><\/li>\n<li><a href=\"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/interactive-use-job-arrays\/job-arrays\/\">Job Arrays<\/a><\/li>\n<li><a href=\"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/interactive-use-job-arrays\/custom-parallel-task-geometry\/\">Custom Parallel Task Geometry<\/a><\/li>\n<li><a href=\"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/interactive-use-job-arrays\/batch-submission-of-serial-jobs-for-parallel-execution\/\">Batch Submission of Serial Jobs for Parallel Execution<\/a><\/li>\n<\/ul>\n<\/li>\n<li>Migrating from other job schedulers:\n<ul>\n<li><a href=\"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/lsf-to-slurm-ref\/\">LSF to Slurm: Quick Reference<\/a> &#8211; a very quick cheat sheet<\/li>\n<li><a
href=\"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/pbs-pro-to-slurm-very-quick-reference\/\">PBS Pro to Slurm: Quick Reference<\/a> &#8211; a very quick cheat sheet<\/li>\n<\/ul>\n<\/li>\n<li><a href=\"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/slurm-faq\/\">FAQ<\/a><\/li>\n<li><a href=\"https:\/\/slurm.schedmd.com\/\">Slurm Package Documentation<\/a><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>All SCW compute systems run a single software stack in which the job scheduler is SchedMD&#8217;s Simple Linux Utility for Resource Management (Slurm). Slurm is a scalable, resilient, feature-rich, customisable, open-source package used on many of the world&#8217;s most powerful supercomputers. Using Slurm is similar to using other job [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":49,"parent":5,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"page-nosidebar.php","meta":{"_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"class_list":["post-33","page","type-page","status-publish","has-post-thumbnail","hentry"],"_links":{"self":[{"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/pages\/33","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/comments?post=33"}],"version-history":[{"count":30,"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/pages\/33\/revisions"}],"predecessor-version":[{"id":616,"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/pages\/33\/revi
sions\/616"}],"up":[{"embeddable":true,"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/pages\/5"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/media\/49"}],"wp:attachment":[{"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/media?parent=33"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
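The page content above describes the batch workflow: write a job script, submit it to Slurm, and Slurm schedules it onto a partition. A minimal sketch of such a script follows; the partition name `compute` and account `scw0000` are placeholders (the page does not name SCW's actual partitions or project accounts), and the directives shown are standard `sbatch` options.

```shell
#!/bin/bash
#SBATCH --job-name=example      # name shown in squeue output
#SBATCH --partition=compute     # placeholder partition name; check your site's partitions with sinfo
#SBATCH --account=scw0000       # placeholder project account code
#SBATCH --ntasks=1              # number of tasks (processes) to launch
#SBATCH --time=00:10:00         # wall-clock limit in HH:MM:SS
#SBATCH --output=%x-%j.out      # stdout file: jobname-jobid.out

# Everything below runs on the allocated compute node.
msg="Running on $(hostname)"
echo "$msg"
```

Saved as, say, `example.slurm`, the script would be submitted with `sbatch example.slurm`, monitored with `squeue -u $USER`, and cancelled with `scancel <jobid>`, as covered by the "Submitting, Monitoring and Killing Jobs" page linked above.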