Hawk is located at Cardiff University and is available for Cardiff and Bangor users of Supercomputing Wales. Aberystwyth and Swansea users should use the Swansea Sunbird system instead.
Guest users from other institutions will be granted access based on the project and institution they are working with.
280 nodes, totalling 12,736 cores, 68.224 TBytes total memory
- 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz with 20 cores each
- 2x AMD(R) Epyc 7502 2.5GHz with 32 cores each
- 192 GB on Intel nodes
- 256 GB on AMD nodes
- 384 GB on high memory and GPU nodes
- 26 Nvidia P100 GPUs with 16GB of RAM on 13 nodes
- 30 Nvidia V100 GPUs with 16GB of RAM on 15 nodes
- 1192TB (usable) scratch space on a Lustre filesystem
- 420TB of home directory space over NFS
The available compute hardware is managed by the Slurm job scheduler and organised into ‘partitions’ of similar type/purpose. Jobs, both batch and interactive, should be targeted at the appropriate partition. The limits on each partition are only defaults; we can provide access to different limits on request. For example, enabling array jobs may require a higher maximum number of submitted jobs. Please contact support if required.
| Partition Name | Number of Nodes | Purpose | Maximum running jobs per user | Maximum submitted jobs per user |
|---|---|---|---|---|
| compute | 134 | Parallel and MPI jobs | 10 | 30 |
| compute_amd | 64 | Parallel and MPI jobs using AMD EPYC 7502 | 10 | 30 |
| highmem | 26 | Large memory (384GB) jobs | 10 | 20 |
| gpu | 13 | GPU (CUDA) jobs - P100 | 5 | 10 |
| gpu_v100 | 15 | GPU (CUDA) jobs - V100 | 5 | 10 |
| HTC | 26 | High Throughput Serial jobs | 10 | 40 |
| dev | 2 | Testing and development | 1 | 2 |
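As a sketch of how a partition is targeted, a Slurm batch script selects one with the `--partition` directive. The script below is a minimal illustration only: the job name, project code (`scwXXXX`), module name, and program name are all placeholders, not values from this system's documentation.

```shell
#!/bin/bash
#SBATCH --job-name=example       # placeholder job name
#SBATCH --partition=compute      # a partition from the table above
#SBATCH --account=scwXXXX        # placeholder: your project code
#SBATCH --nodes=1
#SBATCH --ntasks=40              # Intel compute nodes have 2x 20 cores
#SBATCH --time=01:00:00          # walltime limit (hh:mm:ss)

# Load the software environment and launch; module and binary
# names here are illustrative.
module load mpi
mpirun ./my_program
```

The script would be submitted with `sbatch job.sh`; an interactive session on the development partition could be requested with something like `srun --partition=dev --pty bash`, subject to the limits in the table.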