Singularity
What is Singularity?
Singularity [1] is a container system that doesn’t need root access. A container is a set of applications and a minimal operating system bundled together in a single file. Containers are a good way to ensure that anyone running the software is using exactly the same set of libraries and dependencies. This is helpful for reproducible science, where we want somebody else to be able to recreate our results. They also allow us to run newer, older or different operating systems than the one installed on the host system.
Loading Singularity
Run the command (or add it to your submission script):
module load singularity
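You can check that the module loaded correctly by asking Singularity for its version (the exact output depends on the installed release):
singularity --version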
Obtaining Containers
Singularity Hub [2] contains a number of pre-made containers which you can download and use. To download one, run:
singularity pull shub://<username>/<repository>:<image name>
This will download the image to a file named <username>-<repository>-<branch>-<image name>.simg
The Super Computing Wales base container can be obtained by running:
singularity pull shub://SupercomputingWales/singularity_hub:base_image
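If you would prefer a shorter file name, singularity pull also accepts a --name option (a hedged example; flag behaviour may differ between Singularity versions):
singularity pull --name base_image.simg shub://SupercomputingWales/singularity_hub:base_image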
Running a shell in a container
The singularity shell command will run a shell inside the container, from which you can execute any of the software installed in the container:
singularity shell <image name>.simg
If you downloaded the Super Computing Wales container, you can now get a shell inside it by running:
singularity shell SupercomputingWales-singularity_hub-master-base_image.simg
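Once inside, commands run against the container’s operating system rather than the host’s. For example, you could inspect which OS the container provides and then leave the shell:
cat /etc/os-release
exit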
Running the container’s default actions
Most containers specify a default command to run. The singularity run command will execute this:
singularity run <image name>.simg
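For example, to run the default action of the Super Computing Wales image downloaded earlier (what this does depends on how the image’s runscript was defined):
singularity run SupercomputingWales-singularity_hub-master-base_image.simg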
Accessing the host file system from inside the container
You can mount paths from outside the container inside it; /tmp, /home, /scratch and /apps are mounted by default.
singularity shell -B /<host path>:/<container path> <image name>.simg
This will mount a directory from the host inside the container. The container will only be able to access files which the user running it has permission to read.
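For example, to make a project directory visible inside the container as /data (the host path here is hypothetical):
singularity shell -B /scratch/$USER/project:/data SupercomputingWales-singularity_hub-master-base_image.simg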
Writing your own containers
To make your own containers you’ll probably have to install Singularity on your own computer, as building a container typically requires root access.
This example takes an Ubuntu 16.04 image as a base, then installs the program cowsay; this happens when the container is built. When the container is run, it executes cowsay with the arguments given on the command line.
Bootstrap: docker
From: ubuntu:16.04

%help
    Example container for Cowsay

%labels
    MAINTAINER Super Computing Wales

%environment
    # configure our locale, without this we'll get locale errors
    export LC_ALL=C
    # cowsay installs to /usr/games, but this isn't in the path by default
    export PATH=/usr/games:$PATH

%post
    apt-get update
    apt-get -y install cowsay

%runscript
    cowsay $@
To build the container, save the above example in a file called Singularity and run:
sudo singularity build cowsay.simg Singularity
This will create an image file called cowsay.simg containing all the software required to run cowsay on an Ubuntu 16.04 operating system. You can now copy this image file to Super Computing Wales and run it on the HPC systems.
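For example (the login node address is a placeholder; use your usual Super Computing Wales login node):
scp cowsay.simg <username>@<login node>:
ssh <username>@<login node>
module load singularity
singularity run cowsay.simg "Hello from a container"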
Publishing a Container
- Publish the Singularity file on GitHub.
- Create an account on Singularity Hub.
- Add the repository to Singularity Hub; the container will then be built automatically by Singularity Hub and made available for download using the singularity pull command.
Using Containers with GPU applications
Singularity can use the system’s NVIDIA libraries by running with the “--nv” option, which means you don’t need to install the NVIDIA driver libraries inside the container.
Building GPU containers
In order to compile software when building the container, you will need to install some libraries, and for that you need to know which driver version you are targeting. On Super Computing Wales the NVIDIA driver version is 396.37; this can be found by running the command “nvidia-smi” on a GPU node. The system building the container does not need to have a GPU of its own.
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.37                 Driver Version: 396.37                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  On   | 00000000:3B:00.0 Off |                    0 |
| N/A   35C    P0    25W / 250W |      0MiB / 16160MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-PCIE...  On   | 00000000:D8:00.0 Off |                    0 |
| N/A   37C    P0    27W / 250W |      0MiB / 16160MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
The NVIDIA Docker image (nvidia/cuda:9.0-devel) supplies the CUDA libraries; these will need to match a CUDA version installed on the host system. Super Computing Wales has CUDA versions 8.0, 9.0, 9.1 and 9.2 installed. We also need to manually install the version of libcuda1 which matches the host’s driver version, in our case libcuda1-396. If we require portability to other systems with different GPU drivers, we can build versions of our program for those driver versions too. Unfortunately these complications do make GPU applications less portable than their CPU-only counterparts.
Bootstrap: docker
From: nvidia/cuda:9.0-devel

%post
    mkdir /usr/lib/nvidia
    apt-get -y install libcuda1-396
    .
    .
    .
    apt-get -y purge libcuda1-396
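Here libcuda1-396 is installed so the build can link against the driver library, then purged again at the end, presumably so the host’s copy (bound in by the --nv option) is used at runtime; your application’s build commands go in place of the dots. As before, the image is built with (the image name gpu_app.simg is just an example):
sudo singularity build gpu_app.simg Singularity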
Running GPU containers
The container needs to be run with the --nv option; this will use the system’s CUDA libraries and GPU configuration.
singularity run --nv <image name>.simg
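For example, here is a minimal sketch of a submission script for running the container on a GPU node; the partition name and GPU request syntax are assumptions, so check the Super Computing Wales documentation for the correct values:

#!/bin/bash
# request one GPU; the partition name "gpu" is a hypothetical example
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1

module load singularity
singularity run --nv gpu_app.simg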