Singularity

Introduction

Singularity enables users to have full control of their operating system (OS) environment. This allows a non-privileged user (i.e., one without root, sudo, or administrator access) to “swap out” the Linux operating system and environment on the host machine (i.e., the cluster’s OS) for another Linux OS and computing environment that they can control (i.e., the container’s OS). For instance, the host system may run Rocky Linux, but your application requires CentOS or Ubuntu Linux with a specific software stack. You can create a CentOS or Ubuntu image containing your software and its dependencies, and run your application on that host in its native CentOS or Ubuntu environment. Singularity leverages the resources of the host system, such as the high-speed interconnect (e.g., InfiniBand), high-performance parallel file systems (e.g., the Lustre /n/netscratch and /n/holylfs filesystems), GPUs, and other resources (e.g., licensed Intel compilers).

Note for Windows and MacOS: Singularity only supports Linux containers. You cannot create images that use Windows or MacOS (this is a restriction of the containerization model rather than Singularity).

Why Singularity?

Podman (a Docker-compatible software tool for managing containers) is also supported on FASRC clusters. There are some important differences between Docker/Podman and Singularity:

  • Singularity allows running containers as a regular cluster user.
  • Docker/Podman and Singularity have their own container formats.
  • Docker/Podman (Open Container Initiative) containers may be imported to run via Singularity.

Singularity, SingularityCE, Apptainer

SingularityCE (Singularity Community Edition) and Apptainer are branches/children of the deprecated Singularity project. SingularityCE is maintained by Sylabs, while Apptainer is maintained by the Linux Foundation. By and large, the two are interoperable, with slightly different feature sets. The cluster uses SingularityCE, which we will refer to in this document as Singularity.

Singularity Glossary

  • SingularityCE or Apptainer or Podman or Docker: the containerization software
    • as in “SingularityCE 3.11” or “Apptainer 1.0”
  • Image: a compressed, usually read-only file that contains an OS and specific software stack
  • Container
    • The technology, e.g. “containers vs. virtual machines”
    • An instance of an image, e.g. “I will train a model using a Singularity container of PyTorch.”
  • Host: computer/supercomputer where the image is run

Singularity on the cluster

To use Singularity on the cluster, you must first start a job (interactive, Open OnDemand, or batch). Then simply run singularity:

[jharvard@holy2c04309 ~]$ singularity --version
singularity-ce version 4.2.2-1.el8

SingularityCE Documentation

The SingularityCE User Guide has the latest documentation. You can also see the most up-to-date help on SingularityCE from the command line:

[jharvard@holy2c04309 ~]$ singularity --help

Linux container platform optimized for High Performance Computing (HPC) and
Enterprise Performance Computing (EPC)

Usage:
  singularity [global options...]

Description:
  Singularity containers provide an application virtualization layer enabling
  mobility of compute via both application and environment portability. With
  Singularity one is capable of building a root file system that runs on any
  other Linux system where Singularity is installed.

Options:
  -c, --config string   specify a configuration file (for root or
                        unprivileged installation only) (default
                        "/etc/singularity/singularity.conf")
  -d, --debug           print debugging information (highest verbosity)
  -h, --help            help for singularity
      --nocolor         print without color output (default False)
  -q, --quiet           suppress normal output
  -s, --silent          only print errors
  -v, --verbose         print additional information
      --version         version for singularity

Available Commands:
  build       Build a Singularity image
  cache       Manage the local cache
  capability  Manage Linux capabilities for users and groups
  completion  Generate the autocompletion script for the specified shell
  config      Manage various singularity configuration (root user only)
  delete      Deletes requested image from the library
  exec        Run a command within a container
  help        Help about any command
  inspect     Show metadata for an image
  instance    Manage containers running as services
  key         Manage OpenPGP keys
  oci         Manage OCI containers
  overlay     Manage an EXT3 writable overlay image
  plugin      Manage Singularity plugins
  pull        Pull an image from a URI
  push        Upload image to the provided URI
  remote      Manage singularity remote endpoints, keyservers and OCI/Docker registry credentials
  run         Run the user-defined default command within a container
  run-help    Show the user-defined help for an image
  search      Search a Container Library for images
  shell       Run a shell within a container
  sif         Manipulate Singularity Image Format (SIF) images
  sign        Add digital signature(s) to an image
  test        Run the user-defined tests within a container
  verify      Verify digital signature(s) within an image
  version     Show the version for Singularity

Examples:
  $ singularity help <command> [<subcommand>]
  $ singularity help build
  $ singularity help instance start


For additional help or support, please visit https://www.sylabs.io/docs/

Working with Singularity Images

Singularity uses a portable, single-file container image format known as the Singularity Image Format (SIF). You can scp or rsync these to the cluster as you would any other file. See Copying Data to & from the cluster using SCP or SFTP for more information. You can also download them from various container registries or build your own.
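
For example, a typical transfer from your local machine might look like the following (a sketch; replace the username and destination path with your own):

# copy a SIF image from your local machine to the cluster (run this on your local machine)
rsync -avp lolcow_latest.sif jharvard@login.rc.fas.harvard.edu:/n/holylabs/LABS/jharvard_lab/Users/jharvard/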

When working with images you can:

  • Start an interactive session, or
  • Submit a slurm batch job to run Singularity

For more examples and details, see SingularityCE quick start guide.

Working with Singularity Images Interactively

Singularity syntax

singularity <command> [options] <container_image.sif>

Commands:
  • shell: run an interactive shell inside the container
  • exec: execute a command within the container
  • run: launch the runscript of the container

For this example, we will use the laughing cow Singularity image from the Sylabs library.

First, request an interactive job (for more details about interactive jobs, see the Cannon and FASSE documentation) and download the laughing cow lolcow_latest.sif Singularity image:

# request interactive job
[jharvard@holylogin01 ~]$ salloc -p test -c 1 -t 00-01:00 --mem=4G

# pull image from Sylabs library
[jharvard@holy2c02302 sylabs_lib]$ singularity pull library://lolcow
FATAL:   Image file already exists: "lolcow_latest.sif" - will not overwrite
[jharvard@holy2c02302 sylabs_lib]$ rm lolcow_latest.sif
[jharvard@holy2c02302 sylabs_lib]$ singularity pull library://lolcow
INFO:    Downloading library image
90.4MiB / 90.4MiB [=====================================] 100 % 7.6 MiB/s 0s

shell

With the shell command, you can start a new shell within the container image and interact with it as if it were a small virtual machine.

Note that the shell command does not source ~/.bashrc and ~/.bash_profile. Therefore, the shell command is useful if you do not want customizations in your ~/.bashrc and ~/.bash_profile to be sourced within the Singularity container.

# launch container with shell command
[jharvard@holy2c02302 sylabs_lib]$ singularity shell lolcow_latest.sif

# test some linux commands within container
Singularity> pwd
/n/holylabs/LABS/jharvard_lab/Users/jharvard/sylabs_lib
Singularity> ls -l
total 95268
-rwxr-xr-x 1 jharvard jharvard_lab  2719744 Mar  9 14:27 hello-world_latest.sif
drwxr-sr-x 2 jharvard jharvard_lab     4096 Mar  1 15:21 lolcow
-rwxr-xr-x 1 jharvard jharvard_lab 94824197 Mar  9 14:56 lolcow_latest.sif
drwxr-sr-x 2 jharvard jharvard_lab     4096 Mar  1 15:23 ubuntu22.04
Singularity> id
uid=21442(jharvard) gid=10483(jharvard_lab) groups=10483(jharvard_lab)
Singularity> cowsay moo
 _____
< moo >
 -----
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

# exit the container
Singularity> exit
[jharvard@holy2c02302 sylabs_lib]$

exec

The exec command allows you to execute a custom command within a container by specifying the image file. For instance, to execute the cowsay program within the lolcow_latest.sif container:

[jharvard@holy2c02302 sylabs_lib]$ singularity exec lolcow_latest.sif cowsay moo
 _____
< moo >
 -----
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
[jharvard@holy2c02302 sylabs_lib]$ singularity exec lolcow_latest.sif cowsay "hello FASRC"
 _____________
< hello FASRC >
 -------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

run

Singularity containers may contain runscripts. These are user-defined scripts that define the actions a container should perform when someone runs it. The runscript can be triggered with the run command, or simply by calling the container as though it were an executable.

Using the run command:

[jharvard@holy2c02302 sylabs_lib]$ singularity run lolcow_latest.sif
 _____________________________
< Thu Mar 9 15:15:56 UTC 2023 >
 -----------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Running the container as though it were an executable file:

[jharvard@holy2c02302 sylabs_lib]$ ./lolcow_latest.sif
 _____________________________
< Thu Mar 9 15:17:06 UTC 2023 >
 -----------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

To view the runscript of a Singularity image:

[jharvard@holy2c02302 sylabs_lib]$ singularity inspect -r lolcow_latest.sif

#!/bin/sh

    date | cowsay | lolcat

GPU Example

First, start an interactive job in the gpu or gpu_test partition and then download the Singularity image.

# request interactive job on gpu_test partition
[jharvard@holylogin01 gpu_example]$ salloc -p gpu_test --gres=gpu:1 --mem 8G -c 4 -t 60

# build singularity image by pulling container from Docker Hub
[jharvard@holygpu7c1309 gpu_example]$ singularity pull docker://tensorflow/tensorflow:latest-gpu
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob 521d4798507a done
Copying blob 2798fbbc3b3b done
Copying blob 4d8ee731d34e done
Copying blob 92d2e1452f72 done
Copying blob 6aafbce389f9 done
Copying blob eaead16dc43b done
Copying blob 69cc8495d782 done
Copying blob 61b9b57b3915 done
Copying blob eac8c9150c0e done
Copying blob af53c5214ca1 done
Copying blob fac718221aaf done
Copying blob 2047d1a62832 done
Copying blob 9a9a3600909b done
Copying blob 79931d319b40 done
Copying config bdb8061f4b done
Writing manifest to image destination
Storing signatures
2023/03/09 13:52:18  info unpack layer: sha256:eaead16dc43bb8811d4ff450935d607f9ba4baffda4fc110cc402fa43f601d83
2023/03/09 13:52:19  info unpack layer: sha256:2798fbbc3b3bc018c0c246c05ee9f91a1ebe81877940610a5e25b77ec5d4fe24
2023/03/09 13:52:19  info unpack layer: sha256:6aafbce389f98e508428ecdf171fd6e248a9ad0a5e215ec3784e47ffa6c0dd3e
2023/03/09 13:52:19  info unpack layer: sha256:4d8ee731d34ea0ab8f004c609993c2e93210785ea8fc64ebc5185bfe2abdf632
2023/03/09 13:52:19  info unpack layer: sha256:92d2e1452f727e063220a45c1711b635ff3f861096865688b85ad09efa04bd52
2023/03/09 13:52:19  info unpack layer: sha256:521d4798507a1333de510b1f5474f85d3d9a00baa9508374703516d12e1e7aaf
2023/03/09 13:52:21  warn rootless{usr/lib/x86_64-linux-gnu/gstreamer1.0/gstreamer-1.0/gst-ptp-helper} ignoring (usually) harmless EPERM on setxattr "security.capability"
2023/03/09 13:52:54  info unpack layer: sha256:69cc8495d7822d2fb25c542ab3a66b404ca675b376359675b6055935260f082a
2023/03/09 13:52:58  info unpack layer: sha256:61b9b57b3915ef30727fb8807d7b7d6c49d7c8bdfc16ebbc4fa5a001556c8628
2023/03/09 13:52:58  info unpack layer: sha256:eac8c9150c0e4967c4e816b5b91859d5aebd71f796ddee238b4286a6c58e6623
2023/03/09 13:52:59  info unpack layer: sha256:af53c5214ca16dbf9fd15c269f3fb28cefc11121a7dd7c709d4158a3c42a40da
2023/03/09 13:52:59  info unpack layer: sha256:fac718221aaf69d29abab309563304b3758dd4f34f4dad0afa77c26912aed6d6
2023/03/09 13:53:00  info unpack layer: sha256:2047d1a62832237c26569306950ed2b8abbdffeab973357d8cf93a7d9c018698
2023/03/09 13:53:15  info unpack layer: sha256:9a9a3600909b9eba3d198dc907ab65594eb6694d1d86deed6b389cefe07ac345
2023/03/09 13:53:15  info unpack layer: sha256:79931d319b40fbdb13f9269d76f06d6638f09a00a07d43646a4ca62bf57e9683
INFO:    Creating SIF file...

Run the container with GPU support, see available GPUs, and check if tensorflow can detect them:

# run the container
[jharvard@holygpu7c1309 gpu_example]$ singularity shell --nv tensorflow_latest-gpu.sif
Singularity> nvidia-smi
Thu Mar  9 18:57:53 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12    Driver Version: 525.85.12    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  On   | 00000000:06:00.0 Off |                    0 |
| N/A   35C    P0    25W / 250W |      0MiB / 32768MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-PCIE...  On   | 00000000:2F:00.0 Off |                    0 |
| N/A   36C    P0    23W / 250W |      0MiB / 32768MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-PCIE...  On   | 00000000:86:00.0 Off |                    0 |
| N/A   35C    P0    25W / 250W |      0MiB / 32768MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-PCIE...  On   | 00000000:D8:00.0 Off |                    0 |
| N/A   33C    P0    23W / 250W |      0MiB / 32768MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

# check if `tensorflow` can see GPUs
Singularity> python
Python 3.8.10 (default, Jun 22 2022, 20:18:18)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from tensorflow.python.client import device_lib
2023-03-09 19:00:15.107804: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
>>> print(device_lib.list_local_devices())
2023-03-09 19:00:20.010087: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-09 19:00:24.024427: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /device:GPU:0 with 30960 MB memory:  -> device: 0, name: Tesla V100-PCIE-32GB, pci bus id: 0000:06:00.0, compute capability: 7.0
2023-03-09 19:00:24.026521: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /device:GPU:1 with 30960 MB memory:  -> device: 1, name: Tesla V100-PCIE-32GB, pci bus id: 0000:2f:00.0, compute capability: 7.0
2023-03-09 19:00:24.027583: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /device:GPU:2 with 30960 MB memory:  -> device: 2, name: Tesla V100-PCIE-32GB, pci bus id: 0000:86:00.0, compute capability: 7.0
2023-03-09 19:00:24.028227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /device:GPU:3 with 30960 MB memory:  -> device: 3, name: Tesla V100-PCIE-32GB, pci bus id: 0000:d8:00.0, compute capability: 7.0

... omitted output ...

incarnation: 3590943835431918555
physical_device_desc: "device: 3, name: Tesla V100-PCIE-32GB, pci bus id: 0000:d8:00.0, compute capability: 7.0"
xla_global_id: 878896533
]

Running Singularity Images in Batch Jobs

You can also use Singularity images within a non-interactive batch script as you would any other command. If your image contains a runscript, you can use singularity run to execute it in the job. You can also use singularity exec to execute arbitrary commands (or scripts) within the image.

Below is an example batch-job submission script that uses the laughing cow lolcow_latest.sif image to run cowsay inside the container.

File singularity.sbatch:

#!/bin/bash
#SBATCH -J singularity_test
#SBATCH -o singularity_test.out
#SBATCH -e singularity_test.err
#SBATCH -p test
#SBATCH -t 0-00:10
#SBATCH -c 1
#SBATCH --mem=4G

# Singularity command line options
singularity exec lolcow_latest.sif cowsay "hello from slurm batch job"

Submit a slurm batch job:

[jharvard@holy2c02302 jharvard]$ sbatch singularity.sbatch

Upon the job completion, the standard output is located in the file singularity_test.out:

 [jharvard@holy2c02302 jharvard]$ cat singularity_test.out
  ____________________________
< hello from slurm batch job >
 ----------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

GPU Example Batch Job

File singularity_gpu.sbatch (be sure to include the --nv flag after singularity exec):

#!/bin/bash
#SBATCH -J singularity_gpu_test
#SBATCH -o singularity_gpu_test.out
#SBATCH -e singularity_gpu_test.err
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH -t 0-00:10
#SBATCH -c 1
#SBATCH --mem=8G

# Singularity command line options
singularity exec --nv lolcow_latest.sif nvidia-smi

Submit a slurm batch job:

[jharvard@holy2c02302 jharvard]$ sbatch singularity_gpu.sbatch

Upon the job completion, the standard output is located in the file singularity_gpu_test.out:

$ cat singularity_gpu_test.out
Thu Mar  9 20:40:24 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12    Driver Version: 525.85.12    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  On   | 00000000:06:00.0 Off |                    0 |
| N/A   35C    P0    25W / 250W |      0MiB / 32768MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Accessing Files

Files and directories on the cluster are accessible from within the container. By default, directories under /n, $HOME, $PWD, and /tmp are available at runtime inside the container.

See these variables on the host operating system:

[jharvard@holy2c02302 jharvard]$ echo $PWD
/n/holylabs/LABS/jharvard_lab/Lab/jharvard
[jharvard@holy2c02302 jharvard]$ echo $HOME
/n/home01/jharvard
[jharvard@holy2c02302 jharvard]$ echo $SCRATCH
/n/netscratch

The same variables within the container:

[jharvard@holy2c02302 jharvard]$ singularity shell lolcow_latest.sif
Singularity> echo $PWD
/n/holylabs/LABS/jharvard_lab/Lab/jharvard
Singularity> echo $HOME
/n/home01/jharvard
Singularity> echo $SCRATCH
/n/netscratch
You can specify additional directories from the host system to be accessible from the container. This process is called bind mounting and is done with the --bind option.

For instance, first create a file hello.dat in the /scratch directory on the host system. Then you can access it from within the container by bind mounting /scratch to the /mnt directory inside the container:

[jharvard@holy2c02302 jharvard]$ echo 'Hello from file in mounted directory!' > /scratch/hello.dat
[jharvard@holy2c02302 jharvard]$ singularity shell --bind /scratch:/mnt lolcow_latest.sif
Singularity> cd /mnt/
Singularity> ls
cache  hello.dat
Singularity> cat hello.dat
Hello from file in mounted directory!

If you don’t use the --bind option, the file will not be available in the directory /mnt inside the container:

[jharvard@holygpu7c1309 sylabs_lib]$ singularity shell lolcow_latest.sif
Singularity> cd /mnt/
Singularity> ls
Singularity>

Submitting Jobs Within a Singularity Container

Note: Submitting jobs from within a container may or may not work out of the box. This is due to possible environment variable mismatches, as well as operating system and image library issues. It is important to validate that submitted jobs are properly constructed and operating as expected. If possible, it is best to submit jobs outside the container in the host environment.

If you would like to submit Slurm jobs from inside the container, you can bind the directories where the Slurm executables are located. The environment variable SINGULARITY_BIND stores the directories of the host system that are accessible from inside the container. Thus, Slurm commands can be made accessible by adding the following code to your Slurm batch script before the singularity command:

export SINGULARITY_BIND=$(tr '\n' ',' <<END
/etc/nsswitch.conf
/etc/slurm
/etc/sssd/
/lib64/libnss_sss.so.2:/lib/libnss_sss.so.2
/slurm
/usr/bin/sacct
/usr/bin/salloc
/usr/bin/sbatch
/usr/bin/scancel
/usr/bin/scontrol
/usr/bin/scrontab
/usr/bin/seff
/usr/bin/sinfo
/usr/bin/squeue
/usr/bin/srun
/usr/bin/sshare
/usr/bin/sstat
/usr/bin/strace
/usr/lib64/libmunge.so.2
/usr/lib64/slurm
/var/lib/sss
/var/run/munge:/run/munge
END
)
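
With SINGULARITY_BIND exported as above, the Slurm commands on the host become visible inside the container. For example (a sketch, assuming an image my_image.sif and an inner submission script inner_job.sbatch of your own):

# submit a job from inside the container using the bind-mounted Slurm executables
singularity exec my_image.sif sbatch inner_job.sbatch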

Build Your Own Singularity Container

You can build or import a Singularity container in different ways. Common methods include:

  1. Download an existing container from the SingularityCE Container Library or another image repository. This will download an existing Singularity image to the FASRC cluster.
  2. Build a SIF from an OCI container image located in Docker Hub or another OCI container registry (e.g., quay.io, NVIDIA NGC Catalog, GitHub Container Registry). This will download the OCI container image and convert it into a Singularity container image on the FASRC cluster.
  3. Build a SIF file from a Singularity definition file directly on the FASRC cluster.
  4. Build an OCI-SIF from a local Dockerfile using option --oci. The resulting image can be pushed to an OCI container registry (e.g., Docker Hub) for distribution/use by other container runtimes such as Docker.

NOTE: for all options above, you need to be on a compute node. Singularity on the cluster shows how to request an interactive job on Cannon and FASSE.

Download Existing Singularity Container from Library or Registry

Download the laughing cow (lolcow) image from the Singularity library with singularity pull:

[jharvard@holy2c02302 ~]$ singularity pull lolcow.sif library://lolcow
INFO:    Starting build...
INFO:    Using cached image
INFO:    Verifying bootstrap image /n/home05/jharvard/.singularity/cache/library/sha256.cef378b9a9274c20e03989909930e87b411d0c08cf4d40ae3b674070b899cb5b
INFO:    Creating SIF file...
INFO:    Build complete: lolcow.sif

Download a custom JupyterLab and Seaborn image from the Seqera Containers registry (which builds/hosts OCI and Singularity container images comprising user-selected conda and Python packages):

[jharvard@holy2c02302 ~]$ singularity pull oras://community.wave.seqera.io/library/jupyterlab_seaborn:a7115e98a9fc4dbe
INFO:    Downloading oras
287.0MiB / 287.0MiB [=======================================] 100 % 7.0 MiB/s 0s

Download Existing Container from Docker Hub

Build the laughing cow (lolcow) image from Docker Hub:

[jharvard@holy2c02302 ~]$ singularity pull lolcow.sif docker://sylabsio/lolcow
INFO:    Starting build...
Getting image source signatures
Copying blob 5ca731fc36c2 done
Copying blob 16ec32c2132b done
Copying config fd0daa4d89 done
Writing manifest to image destination
Storing signatures
2023/03/01 10:29:37  info unpack layer: sha256:16ec32c2132b43494832a05f2b02f7a822479f8250c173d0ab27b3de78b2f058
2023/03/01 10:29:38  info unpack layer: sha256:5ca731fc36c28789c5ddc3216563e8bfca2ab3ea10347e07554ebba1c953242e
INFO:    Creating SIF file...
INFO:    Build complete: lolcow.sif

Build the latest Ubuntu image (https://hub.docker.com/_/ubuntu) from Docker Hub:

[jharvard@holy2c02302 ~]$ singularity pull ubuntu.sif docker://ubuntu
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
INFO:    Fetching OCI image...
INFO:    Extracting OCI image...
INFO:    Inserting Singularity configuration...
INFO:    Creating SIF file...
[jharvard@holy2c02302 ~]$ singularity exec ubuntu.sif head -n 1 /etc/os-release
PRETTY_NAME="Ubuntu 24.04.1 LTS"

Note that to build images downloaded from Docker Hub or another OCI registry, you can use either the build or pull command.
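
For instance, the pull command above could equivalently be written with build, which also lets you choose the output filename (a sketch of the same lolcow example):

[jharvard@holy2c02302 ~]$ singularity build lolcow.sif docker://sylabsio/lolcow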

Build a Singularity Container from Singularity Definition File

Singularity supports building images from definition files using --fakeroot. This feature leverages rootless containers.

Step 1: Write/obtain a definition file. You will need a definition file specifying environment variables, packages, etc. Your SingularityCE image will be based on this file. See SingularityCE definition file docs for more details.

This is an example of the laughing cow definition file:

Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get -y update
    apt-get -y install cowsay lolcat

%environment
    export LC_ALL=C
    export PATH=/usr/games:$PATH

%runscript
    date | cowsay | lolcat

Step 2: Build SingularityCE image

Build laughing cow image.

[jharvard@holy8a26602 jharvard]$ singularity build --fakeroot lolcow.sif lolcow.def
INFO: Starting build...
INFO: Fetching OCI image...
28.2MiB / 28.2MiB [=================================================================================================================================================] 100 % 27.9 MiB/s 0s
INFO: Extracting OCI image...
INFO: Inserting Singularity configuration...
INFO: Running post scriptlet
... omitted output ...

Running hooks in /etc/ca-certificates/update.d...
done.
INFO:    Adding environment to container
INFO:    Adding runscript
INFO:    Creating SIF file...
INFO:    Build complete: lolcow.sif

Building a Singularity container from a Dockerfile: OCI mode

SingularityCE supports building containers from Dockerfiles in OCI mode, using a bundled version of the BuildKit container image builder used in recent versions of Docker. This results in an OCI-SIF image file (as opposed to native mode, which builds a SIF image from a Singularity definition file). OCI mode enables the Docker-like --compat flag, enforcing a greater degree of isolation between the container and the host environment for Docker/Podman/OCI compatibility.

An example OCI Dockerfile:

FROM ubuntu:22.04

RUN apt-get -y update \
 && apt-get -y install cowsay lolcat

ENV LC_ALL=C PATH=/usr/games:$PATH

ENTRYPOINT ["/bin/sh", "-c", "date | cowsay | lolcat"]

Build the OCI-SIF (note that on the FASRC cluster the XDG_RUNTIME_DIR environment variable currently needs to be explicitly set to a node-local, user-writable directory, as shown below):

[jharvard@holy2c02302 ~]$ XDG_RUNTIME_DIR=$(mktemp -d) singularity build --oci lolcow.oci.sif Dockerfile
INFO:    singularity-buildkitd: running server on /tmp/tmp.fCjzW2QnfV/singularity-buildkitd/singularity-buildkitd-3709445.sock
... omitted output ...
INFO:    Terminating singularity-buildkitd (PID 3709477)
WARNING: removing singularity-buildkitd temporary directory /tmp/singularity-buildkitd-2716062861                                                                           
INFO:    Build complete: lolcow.oci.sif

To run the ENTRYPOINT command (equivalent to the Singularity definition file runscript):

[jharvard@holy2c02302 ~]$ singularity run --oci lolcow.oci.sif
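
You can also execute arbitrary commands in the OCI-SIF with exec (a sketch; cowsay is found because the Dockerfile's ENV instruction added /usr/games to the PATH):

[jharvard@holy2c02302 ~]$ singularity exec --oci lolcow.oci.sif cowsay moo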

OCI mode limitations

  • (As of SingularityCE 4.2) If the Dockerfile contains “USER root” as the last USER instruction, the singularity exec/run --fakeroot or --no-home options must be specified to use the OCI-SIF, or a tmpfs error will result.
  • Portability note: Apptainer does not support OCI mode, and OCI-SIF files cannot be used with Apptainer.

BioContainers

Cluster nodes automount a CernVM File System (CVMFS) repository at /cvmfs/singularity.galaxyproject.org/. This provides a universal file system namespace to Singularity images for the BioContainers project, which comprises container images automatically generated from Bioconda software packages. The Singularity images are organized into a directory hierarchy following the convention:

/cvmfs/singularity.galaxyproject.org/FIRST_LETTER/SECOND_LETTER/PACKAGE_NAME:VERSION--CONDA_BUILD

For example:

singularity exec /cvmfs/singularity.galaxyproject.org/s/a/samtools:1.13--h8c37831_0 samtools --help

The Bioconda package index lists all software available in /cvmfs/singularity.galaxyproject.org/, while the BioContainers registry provides a searchable interface.

NOTE: There will be a 10-30 second delay when first accessing /cvmfs/singularity.galaxyproject.org/ on a compute node on which it is not currently mounted; in addition, there will be a delay when accessing a Singularity image on a compute node where it has not already been accessed and cached to node-local storage.
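
Because the images are plain files in the CVMFS directory hierarchy, you can browse them with ordinary shell commands. For example, to list the available samtools builds (a sketch following the FIRST_LETTER/SECOND_LETTER convention above):

# list all samtools image versions published under /cvmfs (first access may be slow while CVMFS mounts)
ls /cvmfs/singularity.galaxyproject.org/s/a/ | grep '^samtools'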

BioContainer Images in Docker Hub

A small number of BioContainers images are available only in Docker Hub under the biocontainers organization, and are not available on Cannon under /cvmfs/singularity.galaxyproject.org/.

See the BioContainers GitHub for a complete list of BioContainers images available in Docker Hub (note that many of the applications listed in that GitHub repository have since been ported to Bioconda, and are thus available in /cvmfs/singularity.galaxyproject.org, but a subset are still only available in Docker Hub).

These images can be fetched and built on Cannon using the singularity pull command:

singularity pull docker://biocontainers/<image>:<tag>

For example, for the container cellpose with tag 2.1.1_cv1 (see the cellpose Docker Hub page):

[jharvard@holy2c02302 bio]$ singularity pull --disable-cache docker://biocontainers/cellpose:2.1.1_cv1
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
2023/03/13 15:58:16  info unpack layer: sha256:a603fa5e3b4127f210503aaa6189abf6286ee5a73deeaab460f8f33ebc6b64e2
INFO:    Creating SIF file...

The SIF image file cellpose_2.1.1_cv1.sif will be created:
[jharvard@holy2c02302 bio]$ ls -lh
total 2.5G
-rwxr-xr-x 1 jharvard jharvard_lab 2.4G Mar 13 15:59 cellpose_2.1.1_cv1.sif
-rwxr-xr-x 1 jharvard jharvard_lab  72M Mar 13 12:06 lolcow_latest.sif

BioContainer and Package Tips

  • The registry https://biocontainers.pro may be slow
  • We recommend first checking the Bioconda package index, as it quickly provides a complete list of Bioconda packages, all of which have a corresponding BioContainers image in /cvmfs/singularity.galaxyproject.org/
  • If an image doesn’t exist there, then there is a small chance there might be one generated from a Dockerfile in the BioContainers GitHub
  • If your package is listed in the BioContainers GitHub, search for the package in Docker Hub under the biocontainers organization (e.g., search for biocontainers/<package>)

Parallel computing with Singularity

Singularity is capable of both OpenMP and MPI parallelization. OpenMP is mostly straightforward: you need OpenMP-enabled code built with an OpenMP-capable compiler, and then you set the usual environment variables (such as OMP_NUM_THREADS). We have an example code in our User Codes repo. MPI, on the other hand, is much more involved.
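
As a minimal sketch (assuming a hypothetical image my_openmp_app.sif that contains an OpenMP binary omp_app.x), an OpenMP run inside a container might look like:

# request CPU cores with -c in your job, then pass the core count into the container
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
singularity exec my_openmp_app.sif ./omp_app.x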

MPI Applications

The goal of the following instructions is to help you run Message Passing Interface (MPI) programs using Singularity containers on the FASRC cluster. The MPI standard is used to implement distributed parallel applications across compute nodes of a single HPC cluster, such as Cannon, or across multiple compute systems. The two major open-source implementations of MPI are MPICH (and its derivatives, such as MVAPICH) and OpenMPI. The most widely used MPI implementation on Cannon is OpenMPI.

There are several ways of developing and running MPI applications using Singularity containers, where the most popular method relies on the MPI implementation available on the host machine. This approach is named Host MPI or the Hybrid model since it uses both the MPI implementation on the host and the one in the container.

The key idea behind the Hybrid method is that when you execute a Singularity container with an MPI application, you call mpiexec, mpirun, or srun (e.g., when using the Slurm scheduler) on the singularity command itself. The MPI process outside of the container then works together with the MPI inside the container to initialize the parallel job. Therefore, it is very important that the MPI flavors and versions inside the container and on the host match.

Code examples below can be found on our User Codes repo.

Example MPI Code

To illustrate how Singularity can be used with MPI applications, we will use a simple MPI code implemented in Fortran 90, mpitest.f90:

!=====================================================
! Fortran 90 MPI example: mpitest.f90
!=====================================================
program mpitest
  implicit none
  include 'mpif.h'
  integer(4) :: ierr
  integer(4) :: iproc
  integer(4) :: nproc
  integer(4) :: i
  call MPI_INIT(ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD,nproc,ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD,iproc,ierr)
  do i = 0, nproc-1
     call MPI_BARRIER(MPI_COMM_WORLD,ierr)
     if ( iproc == i ) then
        write (6,*) 'Rank',iproc,'out of',nproc
     end if
  end do
  call MPI_FINALIZE(ierr)
  if ( iproc == 0 ) write(6,*)'End of program.'
  stop
end program mpitest

Singularity Definition File

To build Singularity images you need to write a definition file, where the exact implementation will depend on the MPI flavor available on the host machine.

OpenMPI

If you intend to use OpenMPI, the definition file could look like the one below:

Bootstrap: yum
OSVersion: 7
MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/$basearch/
Include: yum

%files
  mpitest.f90 /home/

%environment
  export OMPI_DIR=/opt/ompi
  export SINGULARITY_OMPI_DIR=$OMPI_DIR
  export SINGULARITYENV_APPEND_PATH=$OMPI_DIR/bin
  export SINGULARITYENV_APPEND_LD_LIBRARY_PATH=$OMPI_DIR/lib

%post
  yum -y install vim-minimal
  yum -y install gcc
  yum -y install gcc-gfortran
  yum -y install gcc-c++
  yum -y install which tar wget gzip bzip2
  yum -y install make
  yum -y install perl

  echo "Installing Open MPI ..."
  export OMPI_DIR=/opt/ompi
  export OMPI_VERSION=4.1.1
  export OMPI_URL="https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-$OMPI_VERSION.tar.bz2"
  mkdir -p /tmp/ompi
  mkdir -p /opt
  # --- Download ---
  cd /tmp/ompi
  wget -O openmpi-$OMPI_VERSION.tar.bz2 $OMPI_URL && tar -xjf openmpi-$OMPI_VERSION.tar.bz2
  # --- Compile and install ---
  cd /tmp/ompi/openmpi-$OMPI_VERSION
  ./configure --prefix=$OMPI_DIR && make -j4 && make install
  # --- Set environmental variables so we can compile our application ---
  export PATH=$OMPI_DIR/bin:$PATH
  export LD_LIBRARY_PATH=$OMPI_DIR/lib:$LD_LIBRARY_PATH
  export MANPATH=$OMPI_DIR/share/man:$MANPATH
  # --- Compile our application ---
  cd /home
  mpif90 -o mpitest.x mpitest.f90 -O2

MPICH

If you intend to use MPICH, the definition file could look like the one below:

Bootstrap: yum
OSVersion: 7
MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/$basearch/
Include: yum

%files
  mpitest.f90 /home/

%environment
  export SINGULARITY_MPICH_DIR=/usr

%post
  yum -y install vim-minimal
  yum -y install gcc
  yum -y install gcc-gfortran
  yum -y install gcc-c++
  yum -y install which tar wget gzip
  yum -y install make
  cd /root/
  wget http://www.mpich.org/static/downloads/3.1.4/mpich-3.1.4.tar.gz
  tar xvfz mpich-3.1.4.tar.gz
  cd mpich-3.1.4/
  ./configure --prefix=/usr && make -j2 && make install
  cd /home
  mpif90 -o mpitest.x mpitest.f90 -O2
  cp mpitest.x /usr/bin/

Building Singularity Image

You can use the commands below to build your Singularity images, e.g.:

# --- Building the OpenMPI based image ---
$ singularity build openmpi_test.simg openmpi_test_centos7.def
# --- Building the MPICH based image ---
$ singularity build mpich_test.simg mpich_test.def

These will generate the Singularity image files openmpi_test.simg and mpich_test.simg respectively.

Executing MPI Applications with Singularity

On the FASRC cluster, the standard way to execute MPI applications is through a batch-job submission script. Below are two examples, one using OpenMPI and another using MPICH.

OpenMPI

#!/bin/bash
#SBATCH -p test
#SBATCH -n 8
#SBATCH -J mpi_test
#SBATCH -o mpi_test.out
#SBATCH -e mpi_test.err
#SBATCH -t 30
#SBATCH --mem-per-cpu=1000

# --- Set up environment ---
export UCX_TLS=ib
export PMIX_MCA_gds=hash
export OMPI_MCA_btl_tcp_if_include=ib0
module load gcc/10.2.0-fasrc01 
module load openmpi/4.1.1-fasrc01

# --- Run the MPI application in the container ---
srun -n 8 --mpi=pmix singularity exec openmpi_test.simg /home/mpitest.x

Note: Please notice that the version of the OpenMPI implementation used on the host needs to match the one in the Singularity container. In this case, this is version 4.1.1.
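
A quick way to verify the match is to compare the two versions directly (a sketch, using the module and image from this example; the container installs OpenMPI under /opt/ompi per the definition file above):

# host OpenMPI version (after module load openmpi/4.1.1-fasrc01)
mpirun --version
# OpenMPI version inside the container image
singularity exec openmpi_test.simg /opt/ompi/bin/mpirun --version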

If the above script is named run.sbatch.ompi, the MPI Singularity job is submitted as usual with:

sbatch run.sbatch.ompi

MPICH

#!/bin/bash
#SBATCH -p test
#SBATCH -n 8
#SBATCH -J mpi_test
#SBATCH -o mpi_test.out
#SBATCH -e mpi_test.err
#SBATCH -t 30
#SBATCH --mem-per-cpu=1000

# --- Set up environment ---
module load python/3.8.5-fasrc01
source activate python3_env1

# --- Run the MPI application in the container ---
srun -n 8 --mpi=pmi2 singularity exec mpich_test.simg /usr/bin/mpitest.x

If the above script is named run.sbatch.mpich, the MPI Singularity job is submitted as usual with:

$ sbatch run.sbatch.mpich

Note: Please notice that we don’t have MPICH installed as a software module on the FASRC cluster, and therefore this example assumes that MPICH is installed in your user or lab environment. The easiest way to do this is through a conda environment. You can find more information on how to set up conda environments in our documentation.

Provided you have set up and activated a conda environment named, e.g., python3_env1, MPICH version 3.1.4 can be installed with:

$ conda install mpich==3.1.4

Example Output

$ cat mpi_test.out
 Rank           0 out of           8
 Rank           1 out of           8
 Rank           2 out of           8
 Rank           3 out of           8
 Rank           4 out of           8
 Rank           5 out of           8
 Rank           6 out of           8
 Rank           7 out of           8
 End of program.

Compiling Code with OpenMPI inside Singularity Container

To compile inside the Singularity container, we need to request a compute node to run Singularity:

$ salloc -p test --time=0:30:00 --mem=1000 -n 1

Using the file compile_openmpi.sh, you can compile mpitest.f90 by executing bash compile_openmpi.sh inside the container openmpi_test.simg:

$ cat compile_openmpi.sh
#!/bin/bash

export PATH=$OMPI_DIR/bin:$PATH
export LD_LIBRARY_PATH=$OMPI_DIR/lib:$LD_LIBRARY_PATH

# compile fortran program
mpif90 -o mpitest.x mpitest.f90 -O2

# compile c program
mpicc -o mpitest.exe mpitest.c

$ singularity exec openmpi_test.simg bash compile_openmpi.sh

In compile_openmpi.sh, we also included the compilation command for a C program.

© The President and Fellows of Harvard College
Except where otherwise noted, this content is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.