Singularity
Note: This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.
Singularity is a popular container solution for HPC systems. It natively supports GPUs and high-performance networks in containers and is compatible with Docker images. Grid'5000 supports Singularity containers: it is available through module and does not require root privileges. More info at: https://sylabs.io/docs/.
Basic usage
Load the singularity module, then just run the singularity command to use it:
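For example, a minimal session might look as follows (the lolcow test image is the same one used in the OAR example later on this page):

```shell
# Load the Singularity module (use `module av singularity` to list available versions)
module load singularity

# Check that the command is available
singularity --version

# Run a public test container from the Sylabs library
singularity run library://sylabsed/examples/lolcow
```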
The Singularity user documentation is available at https://sylabs.io/guides/latest/user-guide. It describes the various ways to run programs inside a container and how to build your own container image.
Building a singularity image
Recent versions of Singularity allow building images without root access (see https://docs.sylabs.io/guides/latest/user-guide/fakeroot.html). However, this has limitations, so it is better to build images as root. This can be done on your own laptop or on a Grid'5000 node using "sudo-g5k":
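For example, assuming a definition file named my_image.def in the current directory (the file name is illustrative), the build can be run on a node as follows:

```shell
# On a Grid'5000 node, load the module first
module load singularity

# Build the image as root via sudo-g5k; `which singularity` resolves the
# module-provided binary so it is found in the root environment
sudo-g5k `which singularity` build my_image.sif my_image.def
```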
For more information about building Singularity containers, see https://docs.sylabs.io/guides/latest/user-guide/build_a_container.html
Using docker containers with Singularity
Singularity can also be used to start docker containers. For instance:
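A minimal sketch, using public images from Docker Hub (the image names are only examples):

```shell
module load singularity

# Run a container image hosted on Docker Hub; Singularity converts it on the fly
singularity run docker://hello-world

# Or open an interactive shell in an Ubuntu container
singularity shell docker://ubuntu:22.04
```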
Running singularity containers in OAR submission
Singularity containers can also be run in an OAR submission (non-interactive batch job). For instance:
frontend:
$ oarsub -l core=1 "module load singularity && singularity run library://sylabsed/examples/lolcow"
Running MPI programs in Singularity containers
MPI programs may be run in Singularity containers by leveraging both the MPI implementation available on the host, i.e. a Grid'5000 physical node (which has direct access to the high-performance network hardware, if present), and the MPI library that must be installed inside the container.
MPI programs in the Singularity container can then be started using the mpirun command on the host.
See https://sylabs.io/guides/latest/user-guide/mpi.html for more information.
For instance, to submit such an MPI job under OAR, assuming a Singularity image named my_mpi_image.sif in your home directory, use:
frontend:
$ oarsub -l nodes=2 "module load singularity && mpirun -hostfile \$OAR_NODE_FILE --mca orte_rsh_agent oarsh -- `which singularity` exec my_mpi_image.sif /opt/mpitest"
Using GPUs in Singularity containers
GPUs available in the host can be made available inside the container by using the --nv option (for Nvidia GPUs only).
For instance, to start an interactive tensorflow environment with one GPU, first submit the job reserving 1 GPU:
Then on that node:
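Putting the two steps together (the tensorflow image tag is illustrative):

```shell
# On the frontend: reserve one GPU interactively
oarsub -I -l gpu=1

# Then, on the allocated node:
module load singularity

# --nv makes the host's Nvidia driver and GPUs visible inside the container
singularity shell --nv docker://tensorflow/tensorflow:latest-gpu
```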
More info at: https://sylabs.io/guides/latest/user-guide/gpu.html
Using Apptainer (instead of Singularity)
After Sylabs forked the Singularity project without renaming their fork, the original Singularity project decided to move into the Linux Foundation and rename itself Apptainer. See the official announcement for more information.
Like Singularity, Apptainer is available through module. To use it, just load the module and execute your container:
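For example (the image is only an illustration):

```shell
# Load the Apptainer module instead of the Singularity one
module load apptainer

# The command-line interface mirrors Singularity's
apptainer run docker://hello-world
```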
Example: Using Singularity to port a software environment between HPC infrastructures
Using Singularity is a good way to port software environments between HPC infrastructures, for example, between Grid'5000 and IDRIS' Jean Zay.
The following example describes how Singularity can be used together with Conda to share your software environment across two different HPC infrastructures (it is inspired by this stackoverflow question).
- Step 1 - On Grid'5000, create a Docker container with your Conda environment
(based on the micromamba Quick Start guide)
Create a env.yaml file to describe your Conda environment:
name: base
channels:
  - conda-forge
dependencies:
  - tensorflow-gpu
Create a Dockerfile:
FROM mambaorg/micromamba:latest
COPY --chown=$MAMBA_USER:$MAMBA_USER env.yaml /tmp/env.yaml
RUN micromamba install -y -n base -f /tmp/env.yaml && \
    micromamba clean --all --yes
Create a Docker image using this environment: (see Docker)
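A minimal build command, assuming the Dockerfile and env.yaml are in the current directory (the tag my_app matches the check that follows):

```shell
# Build the Docker image from the Dockerfile in the current directory
docker build -t my_app .
```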
Check your docker image:
node:
$ docker run -it --rm my_app python3 -c "import tensorflow as tf; print('Num GPUs Available:', len(tf.config.list_physical_devices('GPU')))"
- Step 2 - On Grid'5000, convert your Docker image to a Singularity image
Export your Docker image:
Convert it to a Singularity image:
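These two steps can be sketched as follows, on a node where Docker is available (the file names follow the example above):

```shell
# Export the Docker image to a tar archive
docker save my_app -o my_app.tar

# Convert the archive to a Singularity image using the docker-archive:// URI
module load singularity
singularity build my_app.sif docker-archive://my_app.tar
```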
Test your Singularity image:
$ singularity shell --nv my_app.sif
Singularity> eval "$(micromamba shell hook --shell bash)"
Singularity> micromamba activate
(base) Singularity> python -c "import tensorflow as tf; print('Num GPUs Available:', len(tf.config.list_physical_devices('GPU')))"
Num GPUs Available: 2
(base) Singularity>
- Step 3 - Copy your Singularity image to Jean Zay (using scp/rsync) and run it there
Note that unless you added Grid'5000's external addresses to your IDRIS account as described in the FAQ, you must copy your image locally and then copy it to Jean Zay.
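A possible sketch of this two-hop copy, run from your workstation (the site name and remote paths are illustrative and must be adapted to your accounts):

```shell
# Fetch the image from your Grid'5000 home (here, the lyon site) to the local machine
rsync -avP access.grid5000.fr:lyon/my_app.sif .

# Then push it to your Jean Zay home directory
rsync -avP my_app.sif jean-zay.idris.fr:
```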
Specific information about running Singularity containers on Jean Zay is available at http://www.idris.fr/eng/jean-zay/cpu/jean-zay-utilisation-singularity-eng.html.
$ module load singularity
$ idrcontmgr cp my_app.sif
1 file copied.
$ singularity shell --nv $SINGULARITY_ALLOWED_DIR/my_app.sif
Singularity> eval "$(micromamba shell hook --shell bash)"
Singularity> micromamba activate
(base) Singularity> python -c "import tensorflow as tf; print('Num GPUs Available:', len(tf.config.list_physical_devices('GPU')))"
Num GPUs Available: 1
(base) Singularity>