Current Docs
See: https://github.com/fasrc/User_Codes/tree/master/Singularity_Containers
Introduction
Containerization of workloads has become popular, particularly using Docker. However, Docker is not suitable for HPC systems for security reasons. Of the HPC-oriented alternatives, Singularity covers the widest range of use cases. Singularity has been deployed on the cluster and can also import Docker containers.
This page provides information on how to use Singularity on the cluster. Singularity enables users to have full control of their operating system environment. This allows a non-privileged user to “swap out” the Linux operating system and environment on the host machine for a Linux OS and computing environment that they can control. For instance, if the host system runs CentOS Linux but your application requires Ubuntu Linux with a specific software stack, you can create an Ubuntu image, install your software into that image, copy the created image to the cluster, and run your application on that host in its native Ubuntu environment.
Singularity leverages the resources of the host system, such as the high-speed interconnect (e.g., InfiniBand), high-performance parallel file systems (e.g., the Lustre /n/holyscratch01 and /n/holylfs filesystems), GPUs, and other resources (e.g., licensed Intel compilers).
Note for Windows and MacOS: Singularity only supports Linux containers. You cannot create images that use Windows or MacOS (this is a restriction of the containerization model rather than Singularity).
Why Singularity?
There are some important differences between Docker and Singularity:
- Docker and Singularity have their own container formats.
- Docker containers may be imported to run via Singularity.
- Docker containers need root privileges for full functionality, which is not suitable for a shared HPC environment.
- Singularity allows working with containers as a regular user.
Singularity on the cluster
Singularity is available only on the compute nodes of the cluster. Therefore, to use it you need to either start an interactive job or submit a batch job to the available SLURM partitions.
The examples below illustrate the interactive use of Singularity in an interactive bash shell.
[user@holylogin01 ~]$ salloc -p test -c 1 -t 00-01:00 --mem=4000
[user@holyseas02 ~]$
Check Singularity version:
[user@holyseas02 ~]$ which singularity
/usr/bin/singularity
[user@holyseas02 ~]$ singularity --version
singularity version 3.1.1-1.el7
The most up-to-date help on Singularity comes from the command itself.
[user@holyseas02 ~]$ singularity --help
Linux container platform optimized for High Performance Computing (HPC) and
Enterprise Performance Computing (EPC)
Usage:
singularity [global options...]
Description:
Singularity containers provide an application virtualization layer enabling
mobility of compute via both application and environment portability. With
Singularity one is capable of building a root file system that runs on any
other Linux system where Singularity is installed.
Options:
-d, --debug print debugging information (highest verbosity)
-h, --help help for singularity
-q, --quiet suppress normal output
-s, --silent only print errors
-t, --tokenfile string path to the file holding your sylabs
authentication token (default
"/n/home12/nweeks/.singularity/sylabs-token")
-v, --verbose print additional information
--version version for singularity
Available Commands:
apps List available apps within a container
build Build a Singularity image
cache Manage the local cache
capability Manage Linux capabilities for users and groups
exec Run a command within a container
help Help about any command
inspect Show metadata for an image
instance Manage containers running as services
key Manage OpenPGP keys
oci Manage OCI containers
pull Pull an image from a URI
push Push a container to a Library URI
run Run the user-defined default command within a container
run-help Show the user-defined help for an image
search Search a Library for images
shell Run a shell within a container
sign Attach a cryptographic signature to an image
test Run the user-defined tests within a container
verify Verify cryptographic signatures attached to an image
version Show the version for Singularity
Examples:
$ singularity help
Additional help for any Singularity subcommand can be seen by appending
the subcommand name to the above command.
For additional help or support, please visit https://www.sylabs.io/docs/
Getting existing images onto the cluster
Singularity uses container images, which you can scp or rsync to the cluster as you would any other file. See Copying Data to & from the cluster using SCP or SFTP for more information.
Note: For larger Singularity images, please use the available scratch filesystems, such as /n/holyscratch01/my_lab/username and /n/holylfs/LABS/my_lab/username.
You can also use the pull or build commands to download pre-built images from external resources, such as Singularity Hub (as of April 26th 2021, Singularity Hub is a read-only archive), the Sylabs Container Library, or Docker Hub. For instance, you can download a native Singularity image with its default name from Singularity Hub with:
[user@holyseas02 ~]$ singularity pull shub://vsoch/hello-world
62.32 MiB / 62.32 MiB [================================] 100.00% 43.02 MiB/s 1
The downloaded image file is hello-world_latest.sif.
You can also pull the image with a customized name; e.g., hello.sif:
[user@holyseas02 ~]$ singularity pull --name hello.sif shub://vsoch/hello-world
62.32 MiB / 62.32 MiB [================================] 100.00% 57.26 MiB/s 1s
Similarly, you can pull images from Docker Hub:
[user@holyseas02 ~]$ singularity pull docker://godlovedc/lolcow
INFO: Starting build...
Getting image source signatures
Copying blob sha256:9fb6c798fa41e509b58bccc5c29654c3ff4648b608f5daa67c1aab6a7d02c118
...
Writing manifest to image destination
Storing signatures
INFO: Creating SIF file...
INFO: Build complete: lolcow_latest.sif
See official Singularity documentation for more information.
BioContainers
Cluster nodes automount a CernVM-File System at /cvmfs/singularity.galaxyproject.org/. This provides a universal file system namespace to Singularity images for the BioContainers project, which comprises container images automatically generated from Bioconda software packages. The Singularity images are organized into a directory hierarchy following the convention:
/cvmfs/singularity.galaxyproject.org/FIRST_LETTER/SECOND_LETTER/PACKAGE_NAME:VERSION--CONDA_BUILD
For example:
singularity exec /cvmfs/singularity.galaxyproject.org/s/a/samtools:1.13--h8c37831_0 samtools --help
The Bioconda package index lists all software available in /cvmfs/singularity.galaxyproject.org/, while the BioContainers registry provides a searchable interface.
NOTE: There will be a 10-30 second delay when first accessing /cvmfs/singularity.galaxyproject.org/ on a compute node where it is not currently mounted; in addition, there will be a delay when accessing a Singularity image on a compute node where it has not already been accessed and cached to node-local storage.
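The directory convention above can be derived mechanically from a package name, which is convenient in scripts. The sketch below builds the path for the samtools example; the tag shown is the one from the example above, and the tags that actually exist should be checked against the Bioconda package index:

```shell
# Build the BioContainers image path from a package name:
# first letter / second letter / name:tag
pkg=samtools
tag="1.13--h8c37831_0"
echo "/cvmfs/singularity.galaxyproject.org/${pkg:0:1}/${pkg:1:1}/${pkg}:${tag}"
# → /cvmfs/singularity.galaxyproject.org/s/a/samtools:1.13--h8c37831_0
```

The resulting path can then be passed directly to singularity exec as in the example above.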
Docker Rate Limiting
Docker rate limits the number of pulls anonymous accounts can make from Docker Hub. If you hit a Too Many Requests or pull rate limit error, you will need to create a Docker account to get a higher limit. See the Docker documentation for more details.
Once you have a Docker account, you can authenticate with Docker Hub and then run a Docker container:
# use this to login to Docker Hub
$ singularity remote login --username <dockerhub_username> docker://docker.io
# run the usual command
$ singularity run docker://godlovedc/lolcow
Working with images
When working with images you can either start an interactive session or submit a Singularity job to the available partitions. For these examples, we will use hello-world.sif in an interactive bash shell.
[user@holylogin01 ~]$ salloc -p test -c 1 -t 00-01:00 --mem=4000
[user@holyseas02 ~]$ singularity pull --name hello-world.sif shub://vsoch/hello-world
62.32 MiB / 62.32 MiB [================================] 100.00% 37.63 MiB/s 1s
Shell
With the shell command, you can start a new shell within the container image and interact with it as if it were a small virtual machine. Note that the shell command does not source ~/.bashrc and ~/.bash_profile. This makes the shell command useful when customizations in your ~/.bashrc and ~/.bash_profile are applicable only on the host.
[user@holy7c24604 ~]$ singularity shell hello-world.sif
Singularity> pwd
/n/home06/pkrastev/holylfs/pgk/SINGULARITY/vol2
Singularity> ls
funny.sif gcc-7.2.0.sif hello-world.sif hello.sif lolcow.sif ubuntu.sif vsoch-hello-world-master-latest.sif
Singularity> id
uid=56139(pkrastev) gid=40273(rc_admin) groups=40273(rc_admin),10006(econh11),34539(fas_it),34540(cluster_users),402119(solexa_writers),402160(VPN_HELPMAN),402161(RT_Users),402854(wpdocs_users),403083(owncloud),403266(file-isi_microsoft-full-dlg),403284(gitlabint_users),403331(rc_class)
Singularity>
To exit the container, use the exit command:
Singularity> exit
exit
[user@holy7c24604 ~]$
Commands within a container
You can use the exec command to execute specific commands within the container. For instance, the command below displays information about the native Linux OS of the image:
[user@holyseas02 ~]$ singularity exec hello-world.sif cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.5 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
Running containers
Singularity images contain run-scripts that perform specific actions when the container is run. They can be triggered either with the run command, or by calling the container as if it were an executable, i.e.,
[user@holyseas02 ~]$ singularity run hello-world.sif
RaawwWWWWWRRRR!!
or
[user@holyseas02 ~]$ ./hello-world.sif
RaawwWWWWWRRRR!!
Sometimes you may have a container with several apps, each with its own set of run-scripts. You can use the apps command to list the available apps within the container. For instance, if you have an image named my_image.sif which has N apps (app_1, app_2, …, app_N), you can do:
[user@holyseas02 ~]$ singularity apps my_image.sif
app_1
app_2
...
app_N
You can run a particular app with
[user@holyseas02 ~]$ singularity run --app app_2 my_image.sif
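Running every app in turn can be scripted with a loop over the app names. The sketch below uses the hypothetical my_image.sif and app names from the example above; the echo makes it a dry run that only prints the commands, and dropping it would actually execute the apps on a node where the image is present:

```shell
# Print (dry run) the run command for each app in the hypothetical image.
# Remove "echo" to actually execute each app's run-script.
for app in app_1 app_2 app_N; do
    echo singularity run --app "$app" my_image.sif
done
```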
GPU example
First, start an interactive GPU job and then download the Singularity image hello-world.sif:
[user@holylogin01 ~]$ salloc -p gpu --gres=gpu:1 --mem 1000 -n 4 -t 60
[user@holygpu7c26304 ~]$ singularity pull --name hello-world.sif shub://vsoch/hello-world
To access the NVIDIA GPU driver inside a Singularity container, you need to use the --nv option when executing the container. To verify that you have access to the requested GPUs, run nvidia-smi inside the container:
[user@holygpu7c26304 ~]$ singularity exec --nv hello-world.sif /bin/bash
Singularity> nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26 Driver Version: 396.26 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K20Xm Off | 00000000:88:00.0 Off | 0 |
| N/A 37C P0 61W / 235W | 0MiB / 5700MiB | 65% Default |
+-------------------------------+----------------------+----------------------+
Accessing files from a container
Files and directories on the cluster are accessible from within the container. By default, directories under /n, $HOME, $PWD, and /tmp are available at runtime inside the container:
[user@holygpu7c26304 singularity_doc_tutorial]$ echo $PWD
/n/holyscratch01/user_lab/Everyone/user/singularity_doc_tutorial
[user@holygpu7c26304 singularity_doc_tutorial]$ echo $HOME
/n/home05/user
[user@holygpu7c26304 singularity_doc_tutorial]$ echo $SCRATCH
/n/holyscratch01
[user@holygpu7c26304 singularity_doc_tutorial]$ singularity exec --nv hello-world.sif /bin/bash
Singularity> echo $PWD
/n/holyscratch01/user_lab/Everyone/user/singularity_doc_tutorial
Singularity> echo $HOME
/n/home05/user
Singularity> echo $SCRATCH
/n/holyscratch01
You can specify additional directories to bind mount into your container with the --bind option. For instance, first create a file hello.dat in the /scratch directory on the host system. Then you can read it from within the container by bind mounting /scratch to the /mnt directory inside the container:
[user@holyseas02 ~]$ echo 'Hello from inside the container!' > /scratch/hello.dat
[user@holyseas02 ~]$ singularity exec --bind /scratch:/mnt hello-world.sif cat /mnt/hello.dat
Hello from inside the container!
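Several bind mounts can be passed at once as a comma-separated list of src:dest pairs. The sketch below only echoes the resulting command (a dry run, since singularity is available on compute nodes only); /scratch and /n/holylfs are the host paths used elsewhere on this page, while the in-container mount points /mnt and /data are arbitrary choices:

```shell
# Assemble a comma-separated bind specification and show (dry run)
# the resulting exec command. Remove "echo" to run it on a compute node.
BINDS="/scratch:/mnt,/n/holylfs:/data"
echo singularity exec --bind "$BINDS" hello-world.sif ls /mnt /data
# → singularity exec --bind /scratch:/mnt,/n/holylfs:/data hello-world.sif ls /mnt /data
```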
Singularity containers as SLURM jobs
You can also use Singularity images within a non-interactive batch script as you would any other command. If your image contains a run-script, you can use singularity run to execute it in the job. You can also use singularity exec to execute arbitrary commands (or scripts) within the image. Below is an example batch-job submission script that uses hello-world.sif to print information about the native OS of the image.
#!/bin/bash
#SBATCH -J singularity_test
#SBATCH -o singularity_test.out
#SBATCH -e singularity_test.err
#SBATCH -p test
#SBATCH -t 0-00:30
#SBATCH -c 1
#SBATCH --mem=4000
# Singularity command line options
singularity exec hello-world.sif cat /etc/os-release
If the above batch-job script is named singularity.sbatch, for instance, the job is submitted as usual with sbatch:
[user@holylogin01 ~]$ sbatch singularity.sbatch
Upon job completion, the standard output is located in the file singularity_test.out.
[user@holylogin01 ~]$ cat singularity_test.out
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.5 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
Building Singularity images
To build Singularity containers, you need root access to the build system. Therefore, you cannot build a Singularity container on the cluster. Depending on whether you have access to a Linux machine, possible options are:
- If you have a Linux system to which you have root (admin) access, you can install Singularity and build your Singularity containers there. See Install Singularity on Linux for more information.
- If you don’t have a Linux system you could easily install one in a virtual machine using software like VirtualBox, Vagrant, or VMware. See Install on Windows or Mac for specific Windows or MacOS instructions.
- You can build a Singularity image from a local Docker image on Windows, Mac, or Linux with the docker2singularity Docker image.
- Build a Singularity image using Sylabs Cloud, which is explained below.
In addition to your own Linux environment, you will also need a definition file to build a Singularity container from scratch. You can find some simple definition files for a variety of Linux distributions in the /example directory of the source code. Detailed documentation about building Singularity container images is available at the Singularity website.
Building Singularity images with Sylabs Cloud
Prerequisite: Create a Sylabs account
Singularity requires root access to build a Singularity image, and root access is not allowed on FASRC clusters. Sylabs Cloud provides a free service where you can build a container.
To create a free Sylabs cloud account:
- Go to https://cloud.sylabs.io/library
- Click “Sign in” on the top right corner
- Select your method to sign in: Google, GitLab, GitHub, or Microsoft
Prerequisite: Create a Sylabs access token
To access Sylabs cloud, you need an access token. To create a token follow these steps:
- Go to: https://cloud.sylabs.io/
- Click “Sign In” and follow the sign in steps.
- Click on your login ID on the top right corner
- Select “Access Tokens” from the drop down menu.
- Enter a name for your new access token, such as “Cannon token”.
- Click the “Create a New Access Token” grey button.
Prerequisite: Singularity definition file
In order to build the Singularity container, you will need a definition file. In the example definition file below, centos7.def, sections are indicated by the % sign. To add your own software installs, add the install commands under the %post header. For more details, refer to the Singularity definition file documentation.
Bootstrap: yum
OSVersion: 7
MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/$basearch/
Include: yum
%help
This is a CentOS 7 Singularity container for my own programs to run on the Cannon cluster.
%post
yum -y install vim-minimal
yum -y install gcc
yum -y install gcc-gfortran
yum -y install gcc-c++
yum -y install which tar wget gzip bzip2
yum -y install make
yum -y install perl
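Besides %help and %post, a definition file can also set environment variables and a default run-script. The fragment below is an illustrative sketch of two more standard sections that could be appended to centos7.def; the specific values shown are examples, not part of the original definition file:

```
%environment
    # Variables exported here are set whenever the container runs
    export LC_ALL=C

%runscript
    # Executed by "singularity run" or when calling the image directly
    echo "Running inside: $(cat /etc/redhat-release)"
```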
Building the Singularity container
On the cluster, follow these steps:
# request an interactive session on a compute node
$ salloc -p test --time=1:00:00 --mem=4000
# make sure you are in the directory where your centos7.def is located
$ ls -l
total 2660840
-rw-r--r-- 1 jharvard jharvard_lab 438 Jun 10 13:58 centos7.def
# login to Sylabs cloud: you will have to paste your copied token after the prompt
$ singularity remote login
Generate an access token at https://cloud.sylabs.io/auth/tokens, and paste it here.
Token entered will be hidden for security.
Access Token:
INFO: Access Token Verified!
INFO: Token stored in /n/home01/jharvard/.singularity/remote.yaml
# build the container
# depending on your container, this step can take 30+ minutes
$ singularity build --remote centos7.sif centos7.def
INFO: Remote "cloud.sylabs.io" added.
INFO: Access Token Verified!
INFO: Token stored in /root/.singularity/remote.yaml
INFO: Remote "cloud.sylabs.io" now in use.
INFO: Starting build...
INFO: Skipping GPG Key Import
INFO: Adding owner write permission to build path: /tmp/build-temp-2928954206/rootfs
INFO: Running post scriptlet
+ yum -y install vim-minimal
... omitted output ...
Complete!
INFO: Adding help info
INFO: Creating SIF file...
INFO: Build complete: /tmp/image-3627960411
WARNING: Skipping container verification
INFO: Uploading 217137152 bytes
INFO: Build complete: centos7.sif
Running the container
Now that you have built a container, refer to Working with images above.
Parallel computing with Singularity
For building Singularity images and running applications with OpenMP, MPICH, and OpenMPI, refer to our GitHub documentation.