Modules

Note: This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.

Grid'5000 provides a set of (mainly scientific) software through Lmod, via the module command-line tool.

These modules are available from the Grid'5000 frontends and from cluster nodes (only on the debianXX-big and debianXX-nfs environments if deployment is used).

The modules command

General usage

The module system loads software and makes it available by modifying your environment (for instance, your PATH variable).

To get started, list available software using:

$ module available
 
---------------------------------------------- /grid5000/spack/v1/share/spack/modules/linux-debian11-x86_64_v2 ----------------------------------------------
  anaconda3/2022.05_gcc-10.4.0                         mpich/4.0.2_gcc-10.4.0
  apptainer/1.0.2_gcc-10.4.0                           mumps/5.4.1_gcc-10.4.0-intelmpi
  cmake/3.23.3_gcc-10.4.0                              mvapich2/2.3.7_gcc-10.4.0
  comgr/5.2.0_gcc-10.4.0                               netlib-lapack/3.10.1_gcc-10.4.0
  cuda/11.4.0_gcc-10.4.0                               netlib-scalapack/2.2.0_gcc-10.4.0-openmpi
  cuda/11.6.2_gcc-10.4.0                               opa-psm2/11.2.230_gcc-10.4.0
  cuda/11.7.1_gcc-10.4.0                        (D)    openblas/0.3.20_gcc-10.4.0
  cudnn/8.2.4.15-11.4_gcc-10.4.0                       openjdk/1.8.0_265-b01_gcc-10.4.0
  cudnn/8.4.0.27-11.6_gcc-10.4.0                (D)    openmpi/4.1.4_gcc-10.4.0
  gcc/10.4.0_gcc-10.4.0                                petsc/3.17.4_gcc-10.4.0-intelmpi
  gcc/12.2.0_gcc-10.4.0                         (D)    pmix/4.1.2_gcc-10.4.0
  glpk/4.65_gcc-10.4.0                                 py-numpy/1.23.3_gcc-10.4.0-intelmpi-python-3.9.13
  go/1.18_gcc-10.4.0                                   py-pyopencl/2020.2.2_gcc-10.4.0-intelmpi-python-3.9.13
  hip/5.2.0_gcc-10.4.0                                 python/3.9.13_gcc-10.4.0
  hipblas/5.2.0_gcc-10.4.0                             rdma-core/41.0_gcc-10.4.0
  hsa-rocr-dev/5.2.0_gcc-10.4.0                        rocblas/5.2.0_gcc-10.4.0
  hsakmt-roct/5.2.0_gcc-10.4.0                         rocm-cmake/5.2.0_gcc-10.4.0
  intel-oneapi-advisor/2022.1.0_gcc-10.4.0             rocm-opencl/5.2.0_gcc-10.4.0
  intel-oneapi-ccl/2021.6.0_gcc-10.4.0-intelmpi        rocm-openmp-extras/5.2.0_gcc-10.4.0
  intel-oneapi-compilers/2022.1.0_gcc-10.4.0           rocm-smi-lib/5.2.3_gcc-10.4.0
  intel-oneapi-dpl/2021.7.0_gcc-10.4.0                 rocminfo/5.2.0_gcc-10.4.0
  intel-oneapi-inspector/2022.1.0_gcc-10.4.0           rocprofiler-dev/5.2.0_gcc-10.4.0
  intel-oneapi-ipp/2021.6.0_gcc-10.4.0                 rocsolver/5.2.0_gcc-10.4.0
  intel-oneapi-mkl/2022.1.0_gcc-10.4.0-intelmpi        roctracer-dev-api/5.2.0_gcc-10.4.0
  intel-oneapi-mkl/2022.1.0_gcc-10.4.0-openmpi  (D)    scalasca/2.6_gcc-10.4.0-openmpi
  intel-oneapi-mpi/2021.6.0_gcc-10.4.0                 scotch/7.0.1_gcc-10.4.0-intelmpi
  intel-oneapi-vtune/2022.3.0_gcc-10.4.0               singularity/3.8.5_gcc-10.4.0
  julia/1.8.2_gcc-10.4.0                               starpu/1.3.9_gcc-10.4.0
  libfabric/1.15.1_gcc-10.4.0                          ucx/1.13.1_gcc-10.4.0
  llvm/13.0.1_gcc-10.4.0                               valgrind/3.19.0_gcc-10.4.0-intelmpi
  llvm-amdgpu/5.2.0_gcc-10.4.0                         valgrind/3.19.0_gcc-10.4.0-openmpi                     (D)
  metis/5.1.0_gcc-10.4.0                               vtk/9.0.3_gcc-10.4.0-intelmpi
  miniconda3/4.10.3_gcc-10.4.0                         vtk/9.0.3_gcc-10.4.0-openmpi                           (D)
  miniconda3/22.11.1_gcc-10.4.0                 (D)

 Where:
  D:  Default Module

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".


Note: If you need additional software to be installed, feel free to contact the Grid'5000 support team and we can look into it.

To load something into your environment, use the load command:

$ module load gcc
$ gcc --version
gcc (Spack GCC) 12.2.0
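
Loading a module only modifies your shell environment; nothing is installed into system paths. A quick way to see the effect (a minimal check; the resolved path will typically be somewhere under the site's Spack tree rather than /usr/bin):

$ module list     # show which modules are currently loaded in this shell
$ which gcc       # gcc now resolves to the module's installation, not the system compiler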

By default, module loads the latest available version of a software package (versions are sorted in lexicographical order). You can also specify the version you need:

$ module load gcc/10.4.0_gcc-10.4.0
$ gcc --version
gcc (Spack GCC) 10.4.0

You can also get more information about a package using the whatis or show commands:

$ module whatis gcc
$ module show gcc

If you want to unload one module, or all of the currently loaded modules, you can use:

$ module unload gcc
$ module purge
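
Lmod can also save the set of currently loaded modules as a named collection and restore it later, which avoids repeating several load commands at the start of every job. A small sketch (the collection name my-toolchain is just an example):

$ module load gcc openmpi        # load the toolchain you need
$ module save my-toolchain       # save the current set of loaded modules as a collection
$ module purge                   # ...later, possibly in another shell or job
$ module restore my-toolchain    # reload the whole collection in one command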

The full documentation of the module command is available at: https://lmod.readthedocs.io/en/latest/

Using modules in jobs

The module command is not a real executable, but a shell function.

If it is not available from your shell (it might be the case if you use zsh), ensure that /etc/profile.d/lmod.sh is sourced.
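
For example, a zsh user can source it manually in the current shell, or add it to their shell startup file (a minimal sketch; adapt the startup file to your own configuration):

$ source /etc/profile.d/lmod.sh                       # make the module function available now
$ echo 'source /etc/profile.d/lmod.sh' >> ~/.zshrc    # make it available in future zsh sessions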

In addition, module must be executed in an actual shell to work: a simple oarsub "module load gcc" will fail. You must use either:

fnancy: oarsub 'bash -l -c "module load gcc; gcc --version"'

or alternatively:

fnancy: oarsub ". /etc/profile; module load gcc; gcc --version"

Modules that require a connection to a license server

Some modules are licensed and require a connection to a license server. If your institution provides a license server for such software, you can forward the connection to that license server from the node where the software will be used.

Our Matlab page, for instance, shows how to connect a Matlab process running on a node to an institutional license server.
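
As a rough illustration of the port-forwarding approach, an SSH reverse tunnel can expose your institution's license server on the node itself. The server name, port and environment variable below are placeholders, and the exact variable to set depends on the licensed product (FlexLM-based tools usually read LM_LICENSE_FILE):

laptop: ssh -R 27000:license.example.org:27000 <your_node>.g5k    # forward the license port from your network to the node
node: export LM_LICENSE_FILE=27000@localhost                      # point the licensed tool at the forwarded port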

Sharing modules between users

Modules can be shared between users by using a group storage.

Note: This documentation assumes that you already know what a group storage is and that you already have one for storing your modules. If that is not the case, check the Group Storage documentation.

Put the modules in a dedicated folder of the group storage, so that they are reachable by any user who has access to that storage.

access: rsync -a modules-dir/ /srv/my-storage@storage_fqdn/modules-dir

After adding the directory to the MODULEPATH environment variable, the modules inside this directory will be loadable by other users of the group. This can be done by using one of the two following commands.

node: module use /srv/my-storage@storage_fqdn/modules-dir

or

node: export MODULEPATH=/srv/my-storage@storage_fqdn/modules-dir:$MODULEPATH


Finally, the modules can be loaded.

node: module load my-personal-module


If you are not familiar with modules, you will find a sample setup to build and share a set of modules of your own at https://gitlab.inria.fr/grid5000/team-modules.
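
If you only need to publish a single personal module, a minimal Lmod modulefile is enough. The sketch below (the tool path and module name are hypothetical) creates a Lua modulefile in the shared directory that prepends your tool's bin directory to PATH:

$ mkdir -p /srv/my-storage@storage_fqdn/modules-dir/my-personal-module
$ cat > /srv/my-storage@storage_fqdn/modules-dir/my-personal-module/1.0.lua <<'EOF'
-- minimal Lmod modulefile (Lua): describe the module and adjust the environment
whatis("Name: my-personal-module")
prepend_path("PATH", "/srv/my-storage@storage_fqdn/my-tool/bin")
EOF
$ module use /srv/my-storage@storage_fqdn/modules-dir
$ module load my-personal-module/1.0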