Podman
Introduction
Podman is an Open Containers Initiative (OCI) container toolchain developed by Red Hat. Unlike its popular OCI cousin Docker, it is daemonless, which makes it easier to use with resource schedulers like Slurm. Podman provides a command line interface (CLI) that is very similar to Docker's. On the FASRC cluster the `docker` command runs `podman` under the hood, and many Docker commands just work with Podman, though with some exceptions. Note that this document uses the term container to mean OCI container. Besides Podman containers, FASRC also supports Singularity.
Normally, Podman requires privileged access. However, on the FASRC clusters we have enabled rootless Podman, alleviating that requirement. We recommend reading our document on rootless containers before proceeding further, so that you understand how it works and its limitations.
Podman Documentation
The official Podman Documentation provides the latest information on how to use Podman. On this page we merely highlight specific useful commands and features/quirks specific to the FASRC cluster. You can get command line help pages by running `man podman` or `podman --help`.
Working with Podman
To start working with Podman, first get an interactive session either via `salloc` or via Open OnDemand. Once you have that session, you can start working with your container image. The basic commands we will cover here are:
- pull: Download a container image from a container registry
- images: List downloaded images
- run: Run a command in a new container
- build: Create a container image from a Dockerfile/Containerfile
- push: Upload a container image to a container registry
For these examples we will use the lolcow and ubuntu images from Docker Hub.
pull
`podman pull` fetches the specified container image and extracts it into node-local storage (`/tmp/container-user-<uid>` by default on the FASRC cluster). This step is optional, as Podman will automatically download the image specified in a `podman run` or `podman build` command.
```
[jharvard@holy8a26601 ~]$ podman pull docker://godlovedc/lolcow
Trying to pull docker.io/godlovedc/lolcow:latest...
Getting image source signatures
Copying blob 8e860504ff1e done   |
Copying blob 9fb6c798fa41 done   |
Copying blob 3b61febd4aef done   |
Copying blob 9d99b9777eb0 done   |
Copying blob d010c8cf75d7 done   |
Copying blob 7fac07fb303e done   |
Copying config 577c1fe8e6 done   |
Writing manifest to image destination
577c1fe8e6d84360932b51767b65567550141af0801ff6d24ad10963e40472c5
```
images
`podman images` lists the images that are already available on the node (in `/tmp/container-user-<uid>`):
```
[jharvard@holy8a26601 ~]$ podman images
REPOSITORY                  TAG     IMAGE ID      CREATED      SIZE
docker.io/godlovedc/lolcow  latest  577c1fe8e6d8  7 years ago  248 MB
```
run
Podman containers may contain an entrypoint script that will execute when the container is run. To run the container:
```
[jharvard@holy8a26601 ~]$ podman run -it docker://godlovedc/lolcow
 _______________________________________
/ Your society will be sought by people \
\ of taste and refinement.              /
 ---------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```
To view the entrypoint script for a podman container:
```
[jharvard@holy8a26601 ~]$ podman inspect -f 'Entrypoint: {{.Config.Entrypoint}}\nCommand: {{.Config.Cmd}}' lolcow
Entrypoint: [/bin/sh -c fortune | cowsay | lolcat]
Command: []
```
shell
To start a shell inside a new container, specify the `podman run -it --entrypoint bash` options. `-it` effectively provides an interactive session, while `--entrypoint bash` invokes the bash shell (`bash` can be substituted with another shell program that exists in the container image).
```
[jharvard@holy8a26601 ~]$ podman run -it --entrypoint bash docker://godlovedc/lolcow
root@holy8a26601:/#
```
GPU Example
First, start an interactive job on a GPU partition. Then invoke `podman run` with the `--device nvidia.com/gpu=all` option:
```
[jharvard@holygpu7c26306 ~]$ podman run --rm --device nvidia.com/gpu=all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
Wed Jan 22 15:41:58 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA A100-SXM4-40GB          On  |   00000000:CA:00.0 Off |                   On |
| N/A   27C    P0             66W /  400W |                    N/A |      N/A     Default |
|                                         |                        |              Enabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| MIG devices:                                                                            |
+------------------+----------------------------------+-----------+-----------------------+
| GPU  GI  CI  MIG |                     Memory-Usage |        Vol|      Shared           |
|      ID  ID  Dev |                       BAR1-Usage | SM     Unc| CE ENC DEC OFA JPG    |
|                  |                                  |        ECC|                       |
|==================+==================================+===========+=======================|
|  0    2   0   0  |              37MiB / 19968MiB    | 42      0 |  3   0    2   0    0  |
|                  |                0MiB / 32767MiB   |           |                       |
+------------------+----------------------------------+-----------+-----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
WARN[0001] Failed to add pause process to systemd sandbox cgroup: dbus: couldn't determine address of session bus
```
Batch Jobs
Podman containers can also be executed as part of a normal batch job, just like any other command: simply include the command in the sbatch script. As an example, here is a sample `podman.sbatch`:
```
#!/bin/bash
#SBATCH -J podman_test
#SBATCH -o podman_test.out
#SBATCH -e podman_test.err
#SBATCH -p test
#SBATCH -t 0-00:10
#SBATCH -c 1
#SBATCH --mem=4G

# Podman command line options
podman run docker://godlovedc/lolcow
```
When submitted to the cluster as a batch job:
```
[jharvard@holylogin08 ~]$ sbatch podman.sbatch
```
This generates `podman_test.out`, which contains:
```
[jharvard@holylogin08 ~]$ cat podman_test.out
 ____________________________________
< Don't read everything you believe. >
 ------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```
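The same approach works for GPU batch jobs. Below is a minimal sketch; the partition name `gpu` and the `--gres` line are assumptions, so substitute the values appropriate for your allocation:

```
#!/bin/bash
#SBATCH -J podman_gpu_test
#SBATCH -o podman_gpu_test.out
#SBATCH -e podman_gpu_test.err
#SBATCH -p gpu            # hypothetical partition name; use your site's GPU partition
#SBATCH --gres=gpu:1      # request one GPU
#SBATCH -t 0-00:10
#SBATCH -c 1
#SBATCH --mem=4G

# Expose the allocated GPU(s) to the container, as in the interactive GPU example
podman run --rm --device nvidia.com/gpu=all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
```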
Accessing Files
Each Podman container operates within its own isolated filesystem tree (in `/tmp/container-user-<uid>/storage`). However, if needed, host file systems can be explicitly shared with containers using the `--volume` option when starting a container. This option bind-mounts a directory or file from the host into the container, granting the container access to that path. For instance, to access netscratch from the container:
```
[jharvard@holy8a26602 ~]$ podman run -it --entrypoint bash --volume /n/netscratch:/n/netscratch docker://ubuntu
root@holy8a26602:/# df -h
Filesystem                                        Size  Used Avail Use% Mounted on
overlay                                           397G  6.5G  391G   2% /
tmpfs                                              64M     0   64M   0% /dev
netscratch-ib01.rc.fas.harvard.edu:/netscratch/C  3.6P  1.8P  1.9P  49% /n/netscratch
/dev/mapper/vg_root-lv_scratch                    397G  6.5G  391G   2% /run/secrets
shm                                                63M     0   63M   0% /dev/shm
devtmpfs                                          504G     0  504G   0% /dev/tty
```
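A bind mount can also be made read-only by appending the `:ro` suffix to the `--volume` argument, a useful safeguard when the container only needs to read the data. A sketch (the host directory and mount point below are hypothetical examples):

```
[jharvard@holy8a26602 ~]$ podman run -it --entrypoint bash --volume /n/netscratch/jharvard_lab:/data:ro docker://ubuntu
root@holy8a26602:/# touch /data/test
touch: cannot touch '/data/test': Read-only file system
```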
Ownership of files created by a process in the container, as seen from the host, depends on the user ID (UID) of the creating process in the container:
- The host (cluster) user, if the container user is:
  - root (UID 0) – this is often the default (similar to SingularityCE in the default native mode)
  - any UID, if `podman run --userns=keep-id` is specified, so the UID inside and outside the container are the same
  - the given container UID, if `podman run --userns=keep-id:uid=<container-uid>,gid=<container-gid>` is specified, mapping the specified UID/GID in the container to the host/cluster user's UID/GID
- Otherwise, the subuid/subgid associated with the container UID/GID (see rootless containers). Only filesystems that can resolve your subuids can be written to from a Podman container (e.g. NFS, home directories, or the local filesystem; but not Lustre filesystems like holylabs), and only locations with "other" read/write/execute permissions can be utilized (e.g. the Everyone directory).
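To see the `--userns=keep-id` mapping in action, you can print the UID inside the container: by default you appear as root, while with `keep-id` the UID matches your cluster UID. A sketch (the UID `21234` shown is a made-up example):

```
[jharvard@holy8a26602 ~]$ id -u
21234
[jharvard@holy8a26602 ~]$ podman run --rm docker://ubuntu id -u
0
[jharvard@holy8a26602 ~]$ podman run --rm --userns=keep-id docker://ubuntu id -u
21234
```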
Environment Variables
A Podman container does not inherit environment variables from the host environment. Any environment variable that is not defined by the container image must be explicitly set with the `--env` option:
```
[jharvard@holy8a26602 ~]$ podman run -it --rm --env MY_VAR=test python:3.13-alpine python3 -c 'import os; print(os.environ["MY_VAR"])'
test
```
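If you need to pass several variables, Podman also supports `--env-file`, which reads `KEY=VALUE` pairs, one per line, from a file. A sketch (the file name `vars.env` is arbitrary):

```
[jharvard@holy8a26602 ~]$ cat vars.env
MY_VAR=test
OTHER_VAR=42
[jharvard@holy8a26602 ~]$ podman run --rm --env-file vars.env python:3.13-alpine python3 -c 'import os; print(os.environ["OTHER_VAR"])'
42
```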
Building Your Own Podman Container
You can build or import a Podman container in several different ways. Common methods include:
- Download an existing OCI container image from Docker Hub or another OCI container registry (e.g., quay.io, NVIDIA NGC Catalog, GitHub Container Registry).
- Build a Podman image from a Containerfile/Dockerfile.
Images are stored by default at `/tmp/containers-user-<uid>/storage`. You can find out more about the specific paths by running the `podman info` command.
Since the default path is in `/tmp`, containers will only exist for the duration of the job, after which the system will clean up the space. If you want to maintain images for longer, you will need to override the default configuration. You can do this by putting configuration settings in `$HOME/.config/containers/storage.conf`. Note that due to subuids you will need to select a storage location that your subuids can access. Documentation for storage.conf can be found here.
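As a sketch of such an override (the `graphroot` path below is a hypothetical example; pick a directory your subuids can access, per the rootless containers document):

```shell
# Create a user-level storage.conf that moves image storage out of /tmp.
# The graphroot path is a hypothetical example; substitute a location
# that your subuids can access.
mkdir -p "$HOME/.config/containers"
cat > "$HOME/.config/containers/storage.conf" <<'EOF'
[storage]
driver = "overlay"
graphroot = "/n/home01/jharvard/containers/storage"
EOF
```

After this, `podman info` should report the new storage location.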
Downloading OCI Container Image From Registry
To download an OCI container image from a registry, simply use the pull command:
```
[jharvard@holy8a26602 ~]$ podman pull docker://godlovedc/lolcow
Trying to pull docker.io/godlovedc/lolcow:latest...
Getting image source signatures
Copying blob 8e860504ff1e done   |
Copying blob 9fb6c798fa41 done   |
Copying blob 3b61febd4aef done   |
Copying blob 9d99b9777eb0 done   |
Copying blob d010c8cf75d7 done   |
Copying blob 7fac07fb303e done   |
Copying config 577c1fe8e6 done   |
Writing manifest to image destination
577c1fe8e6d84360932b51767b65567550141af0801ff6d24ad10963e40472c5
WARN[0006] Failed to add pause process to systemd sandbox cgroup: dbus: couldn't determine address of session bus
[jharvard@holy8a26602 ~]$ podman image ls
REPOSITORY                  TAG     IMAGE ID      CREATED      SIZE
docker.io/godlovedc/lolcow  latest  577c1fe8e6d8  7 years ago  248 MB
```
Build Podman Image From Containerfile
Podman can use both Containerfiles and Dockerfiles to build images. To build, first write your Containerfile:
```
FROM ubuntu:22.04

RUN apt-get -y update \
    && apt-get -y install cowsay lolcat

ENV LC_ALL=C PATH=/usr/games:$PATH

ENTRYPOINT ["/bin/sh", "-c", "date | cowsay | lolcat"]
```
Then run the build command:
```
[jharvard@holy8a26602 ~]$ podman build -f Containerfile
STEP 1/4: FROM ubuntu:22.04
Resolved "ubuntu" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/ubuntu:22.04...
Getting image source signatures
Copying blob 6414378b6477 done   |
Copying config 97271d29cb done   |
Writing manifest to image destination
STEP 2/4: RUN apt-get -y update     && apt-get -y install cowsay lolcat
... output omitted ...
Running hooks in /etc/ca-certificates/update.d...
done.
--> a41765f5337a
STEP 3/4: ENV LC_ALL=C PATH=/usr/games:$PATH
--> e9eead916e20
STEP 4/4: ENTRYPOINT ["/bin/sh", "-c", "date | cowsay | lolcat"]
COMMIT
--> 51e919dd571f
51e919dd571f1c8a760ef54c746dcb190659bdd353cbdaa1d261ba8d50694d24
```
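Once built, the image can be uploaded to a registry with the push command. A sketch, using the image ID from the build above; the Docker Hub account name `jharvard` is a hypothetical example, so substitute your own registry account:

```
[jharvard@holy8a26602 ~]$ podman tag 51e919dd571f docker.io/jharvard/lolcow:latest
[jharvard@holy8a26602 ~]$ podman login docker.io
[jharvard@holy8a26602 ~]$ podman push docker.io/jharvard/lolcow:latest
```

`podman login` will prompt for your registry credentials before the push is allowed.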