{"id":768,"date":"2018-07-18T11:11:29","date_gmt":"2018-07-18T10:11:29","guid":{"rendered":"https:\/\/portal.supercomputing.wales\/?page_id=768"},"modified":"2018-07-18T16:03:18","modified_gmt":"2018-07-18T15:03:18","slug":"rapid-start-for-raven-users-migrating-to-scw-hawk","status":"publish","type":"page","link":"https:\/\/portal.supercomputing.wales\/index.php\/rapid-start-for-raven-users-migrating-to-scw-hawk\/","title":{"rendered":"Rapid Start for Raven Users Migrating to SCW Hawk"},"content":{"rendered":"<p>For users of the ARCCA Raven services who are migrating to the SCW service, please be aware of these important notes in order to get up-and-running as quickly &amp; effectively as possible.<\/p>\n<h4><\/h4>\n<h4>User Credentials<\/h4>\n<ul>\n<li>These have changed to be linked to your institutional account.<\/li>\n<li>The method to access your new credentials and get started with Hawk is covered on the <a href=\"https:\/\/portal.supercomputing.wales\/index.php\/getting-access\/\" target=\"_blank\" rel=\"noopener\">Getting Access<\/a> page \u2013 please follow it closely.\n<ul>\n<li>Contact Support if you have any issues.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h4>Projects<\/h4>\n<ul>\n<li>Existing projects and their memberships from Raven have been migrated to Hawk.<\/li>\n<li>You can apply for new projects through the MySCW system, see <a href=\"https:\/\/portal.supercomputing.wales\/index.php\/getting-access\/\" target=\"_blank\" rel=\"noopener\">Getting Access<\/a>.<\/li>\n<li>It will soon become necessary to specify which of your project memberships to account a compute job against.<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h4>Data<\/h4>\n<ul>\n<li>Home directories from Raven have been cloned to a read-only area on Hawk for quick access.\n<ul>\n<li>These will remain for approximately 6 months only and then be removed.<\/li>\n<li>They are being re-synchronised from Raven every day.<\/li>\n<li>These cloned home directories are available on the Hawk login nodes at:<strong><em>\/migrate\/raven\/RAVEN_USERID<\/em><\/strong><\/li>\n<\/ul>\n<\/li>\n<li>New home directory quotas for all are 50GB per user and 100GB per project share \u2013 extensions are quickly available on justified request.<\/li>\n<li><em>Project Shared Areas<\/em> are a way for users of a particular project to share data. They are separate from user home directories.\n<ul>\n<li>If a Project Shared area does not exist for your project and one would be useful, please request from Support.<\/li>\n<\/ul>\n<\/li>\n<li>We can also create <em>Collaboration Shared\u00a0Areas<\/em> that cross project and institutional boundaries as separate areas from home directories for data sharing among multiple users.\n<ul>\n<li>Please contact Support to discuss if this would be useful.<\/li>\n<\/ul>\n<\/li>\n<li>The scratch file system will see automatic cleanup of data that is not accessed for 60 days.\n<ul>\n<li>Exceptions available by justified request to Support.<\/li>\n<\/ul>\n<\/li>\n<li>Raven is still an active system.<\/li>\n<li>Cardiff User home directories on Hawk are currently being cloned to a backup system, but there are no historical archives kept thereof. This backup\u00a0is done on a best efforts basis.<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h4>Gluster Storage<\/h4>\n<ul>\n<li>Gluster storage will be available as it was on Raven via mountpoints on the login nodes.<\/li>\n<li>Access to Gluster storage is via membership of relevant groups on Hawk. 
#### Gluster Storage

- Gluster storage is available as it was on Raven, via mountpoints on the login nodes.
- Access to Gluster storage is via membership of the relevant groups on Hawk. These groups are mapped from the Cardiff University groups used for Gluster storage.
- To simplify administration and keep things better organised, all mounts are under /gluster.
- N.B. /gluster/neurocluster contains its own group of mountpoints.
- Users are expected to copy data to scratch for processing – Gluster mountpoints will not be accessible from the cluster compute nodes due to network routing.

#### Jobs & Applications

- Once you have migrated and validated the operation and correctness of the software you use, and everything is behaving as you would expect, we request that you no longer submit jobs on Raven.
- At this stage, a slowly-increasing number of the most-utilised application codes have been re-built in optimised form on the new system.
  - These can be seen and accessed from the default output of **module avail**.
- For other codes that have *not* been re-built in optimised form at launch, the Raven software stack is made available by first running **module load raven**.
  - Once the **raven** module is loaded, the Raven software stack is displayed by **module avail** and modules are loaded in the usual way using **module load <modulename>** – see the example session below.
  - The large majority of the Raven software stack will function on Hawk just fine.
    - A few things won't. If you find one, please highlight it to Support and a new solution will be prioritised.
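For example, a login-node session using the legacy stack might look like the following; `somecode/1.2.3` is a placeholder module name, not a real Hawk module:

```bash
# Optimised Hawk rebuilds appear in the default module list
module avail

# Expose the legacy Raven software stack
module load raven

# Raven-era modules are now listed too; load one in the usual way
# ('somecode/1.2.3' is a placeholder)
module avail
module load somecode/1.2.3
```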
- We will monitor usage of old codes in order to prioritise those to be re-built in optimised form for the new system, and also to identify those which will at some point be decommissioned due to zero usage.
- Job submission scripts from Raven will need to be modified to use the Slurm scheduler deployed on Hawk.
  - The [PBS Pro to Slurm Migration Reference](https://portal.supercomputing.wales/index.php/index/slurm/pbs-pro-to-slurm-very-quick-reference/) highlights the usual changes needed.
    - For more complex job scripts, please see the other documentation on this site regarding interactive use, array jobs, etc.
  - Scripts should also be updated to make the most of the new system's greater processor count per node and to target the right partition (queue).
    - Where previous systems had 12 or 16 processor cores per node, Hawk has **40**.
      - It is therefore more important to use Hawk's nodes efficiently, as proportionally more resource can easily be wasted.
      - Take care to correctly populate the Slurm directive **#SBATCH --ntasks-per-node** in migrated submission scripts.
    - To better reflect the wide diversity of jobs and improve the user experience, partition naming and use have been re-worked. Please see the output of **sinfo** and the list below.
      - Standard parallel jobs: the default **compute** partition.
      - High-memory tasks: the **highmem** partition.
      - GPU-accelerated tasks: the **gpu** partition.
      - Serial / high-throughput tasks: the **htc** partition.
      - Small, short development tasks (up to 40 processor cores, 30 minutes runtime): the **dev** partition.
- A newly refreshed training tarball, full of example jobs across a variety of applications, is available – please see [here](https://portal.supercomputing.wales/index.php/use-of-software-examples/). A sketch of a migrated submission script follows below.
- Please don't hesitate to [contact us with any questions, issues or comments](https://portal.supercomputing.wales/index.php/index/submit-support-ticket/).
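To tie the scheduler points together, here is a minimal sketch of a Hawk submission script under the conventions described above; the project code `scwXXXX`, the module name `somecode/1.2.3`, and the executable `my_app` are all placeholders:

```bash
#!/bin/bash
#SBATCH --job-name=example         # placeholder job name
#SBATCH --partition=compute        # standard parallel jobs go to 'compute'
#SBATCH --nodes=2                  # two full nodes...
#SBATCH --ntasks-per-node=40       # ...fully populated: Hawk nodes have 40 cores
#SBATCH --time=01:00:00            # requested walltime
#SBATCH --account=scwXXXX          # placeholder: project to account the job against

module purge
module load raven                  # only if the code still lives in the Raven stack
module load somecode/1.2.3         # placeholder application module

# Launch the parallel application under Slurm ('my_app' is a placeholder)
srun ./my_app
```

Submit the script with **sbatch** and check its queue placement with **squeue** in the usual way.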