{"id":250,"date":"2016-01-29T09:48:19","date_gmt":"2016-01-29T09:48:19","guid":{"rendered":"https:\/\/portal.supercomputing.wales\/?page_id=250"},"modified":"2018-06-28T18:28:43","modified_gmt":"2018-06-28T17:28:43","slug":"x11-gui-forwarding","status":"publish","type":"page","link":"https:\/\/portal.supercomputing.wales\/index.php\/index\/slurm\/interactive-use-job-arrays\/x11-gui-forwarding\/","title":{"rendered":"X11 GUI Forwarding"},"content":{"rendered":"<p>Some applications provide a graphical user interface (GUI) for interactive use. This is not typical of parallel jobs, but large-memory applications and computationally steered applications may offer one.<\/p>\n<h3>Setting up X Forwarding<\/h3>\n<p>First, log in to SCW with X forwarding enabled:<br \/>\n<pre class=\"preserve-code-formatting\">$ ssh -X username@hawklogin.cf.ac.uk<\/pre><br \/>\nWindows users connecting with PuTTY will need to go to the Connection-&gt;SSH-&gt;X11 options and tick &#8220;Enable X11 forwarding&#8221;. You will also need an X server such as <a href=\"https:\/\/sourceforge.net\/projects\/vcxsrv\/files\/latest\/download\">VcXsrv<\/a> or <a href=\"https:\/\/sourceforge.net\/projects\/xming\/files\/latest\/download\">Xming<\/a>.<br \/>\nMac users will need to download <a href=\"https:\/\/www.xquartz.org\/\">XQuartz<\/a>.<\/p>\n<p>There are two ways to run graphical jobs on the system:<\/p>\n<h3>SSH into the node<\/h3>\n<p>You can SSH directly into a node you&#8217;ve been allocated via <em>salloc<\/em>. Adding the <em>-X<\/em> option to the <em>ssh<\/em> command forwards the X session from the compute node back to the login node. 
This is shown in the example below.<br \/>\n<pre class=\"preserve-code-formatting\">[username@cl2 ~]$ salloc -n 1\nsalloc: Granted job allocation 30937\nsalloc: Waiting for resource configuration\nsalloc: Nodes ccs0046 are ready for job\n\n[username@cl2 ~]$ ssh -X ccs0046 xterm\n\n<\/pre><\/p>\n<h3>Use srun<\/h3>\n<p>Note: this method is not currently working.<\/p>\n<p>With Slurm, once a resource allocation is granted for an interactive session (or a batch job whose submitting terminal is left logged in), we can use <em>srun<\/em> to provide X11 graphical forwarding all the way from the compute nodes to our desktop using <em>srun --x11 &lt;application&gt;<\/em>.<br \/>\n<pre class=\"preserve-code-formatting\">[username@cl2 ~]$ salloc -n 1\nsalloc: Granted job allocation 30937\nsalloc: Waiting for resource configuration\nsalloc: Nodes ccs0046 are ready for job\n\n[username@cl2 ~]$ srun --x11 xterm<\/pre><br \/>\nNote that the user must have X11 forwarded to the login node for this to work; this can be checked by running <em>xclock<\/em> at the command line.<\/p>\n<p>Additionally, the <em>--x11<\/em> argument accepts an optional value, <em>--x11=[batch|first|last|all]<\/em>, with the following effects:<\/p>\n<ul>\n<li><em>--x11=first <\/em>This is the default, and provides X11 forwarding to the first of the allocated compute hosts.<\/li>\n<li><em>--x11=last<\/em> This provides X11 forwarding to the last of the allocated compute hosts.<\/li>\n<li><em>--x11=all<\/em> This provides X11 forwarding from all allocated compute hosts, which can be quite resource-heavy and is an extremely rare use case.<\/li>\n<li><em>--x11=batch<\/em> This supports use in a batch job submission, and will provide X11 forwarding to the first node allocated to a batch job. 
The user must leave open the X11 forwarded login node session where they submitted the job.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Some applications provide the capability to interact with a graphical user interface (GUI). It is not typical of parallel jobs, but large-memory applications and computationally steered applications can offer such capability. Setting up X Forwarding First we must login to SCW with X Forwarding enabled. $ ssh -X username@hawklogin.cf.ac.uk Windows users will need to go [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":42,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"page-nosidebar.php","meta":{"_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"class_list":["post-250","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/pages\/250","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/comments?post=250"}],"version-history":[{"count":3,"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/pages\/250\/revisions"}],"predecessor-version":[{"id":597,"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/pages\/250\/revisions\/597"}],"up":[{"embeddable":true,"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/pages\/42"}],"wp:attachment":[{"href":"https:\/\/portal.supercomputing.wales\/index.php\/wp-json\/wp\/v2\/media?parent=250"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}