Modules

Revision as of 16:17, 28 November 2024 by Amerlin
Note

This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.

Grid'5000 provides a set of (mainly scientific) software through Lmod, via the module command-line tool.

These modules are available on Grid'5000 frontends and on cluster nodes (only with the debianXX-big and debianXX-nfs environments if deployment is used).

The module command

General usage

The module system is designed to load software and make it available by modifying your environment (such as your PATH variable).

To get started, list available software using:

 $ module av
 
 ----------------------------------------- /grid5000/spack/v1/share/spack/modules/linux-debian11-x86_64_v2 -----------------------------------------------------------
  apptainer/1.0.2_gcc-10.4.0                           intel-oneapi-ipp/2021.6.0_gcc-10.4.0                 py-numpy/1.23.3_gcc-10.4.0-intelmpi-python-3.9.13
  cmake/3.23.3_gcc-10.4.0                              intel-oneapi-ipp/2021.7.0_gcc-10.4.0          (D)    py-pyopencl/2020.2.2_gcc-10.4.0-intelmpi-python-3.9.13
  comgr/5.2.0_gcc-10.4.0                               intel-oneapi-mkl/2022.1.0_gcc-10.4.0-intelmpi        python/3.9.13_gcc-10.4.0
  cube/4.6_gcc-10.4.0                                  intel-oneapi-mkl/2022.1.0_gcc-10.4.0-openmpi         rdma-core/41.0_gcc-10.4.0
  cuda/11.4.0_gcc-10.4.0                               intel-oneapi-mkl/2023.0.0_gcc-10.4.0-intelmpi (D)    rocblas/5.2.0_gcc-10.4.0
  cuda/11.6.2_gcc-10.4.0                               intel-oneapi-mpi/2021.6.0_gcc-10.4.0                 rocm-cmake/5.2.0_gcc-10.4.0
  cuda/11.7.1_gcc-10.4.0                        (D)    intel-oneapi-mpi/2021.8.0_gcc-10.4.0          (D)    rocm-opencl/5.2.0_gcc-10.4.0
  cudnn/8.2.4.15-11.4_gcc-10.4.0                       intel-oneapi-tbb/2021.8.0_gcc-10.4.0                 rocm-openmp-extras/5.2.0_gcc-10.4.0
  cudnn/8.4.0.27-11.6_gcc-10.4.0                (D)    intel-oneapi-vtune/2022.3.0_gcc-10.4.0               rocm-smi-lib/5.2.3_gcc-10.4.0
  dbus/1.13.6_gcc-10.4.0                               intel-oneapi-vtune/2023.0.0_gcc-10.4.0        (D)    rocminfo/5.2.0_gcc-10.4.0
  gcc/10.4.0_gcc-10.4.0                                julia/1.8.2_gcc-10.4.0                               rocprofiler-dev/5.2.0_gcc-10.4.0
  gcc/12.2.0_gcc-10.4.0                         (D)    libfabric/1.15.1_gcc-10.4.0                          rocsolver/5.2.0_gcc-10.4.0
  glpk/4.65_gcc-10.4.0                                 llvm/13.0.1_gcc-10.4.0                               roctracer-dev-api/5.2.0_gcc-10.4.0
  go/1.18_gcc-10.4.0                                   llvm-amdgpu/5.2.0_gcc-10.4.0                         rust/1.65.0_gcc-10.4.0
  hip/5.2.0_gcc-10.4.0                                 metis/5.1.0_gcc-10.4.0                               scalasca/2.6_gcc-10.4.0-openmpi
  hipblas/5.2.0_gcc-10.4.0                             mpich/4.0.2_gcc-10.4.0-cuda                          scorep/7.1_gcc-10.4.0-openmpi
  hsa-rocr-dev/5.2.0_gcc-10.4.0                        mpich/4.0.2_gcc-10.4.0                        (D)    scotch/7.0.1_gcc-10.4.0-intelmpi
  hsakmt-roct/5.2.0_gcc-10.4.0                         mumps/5.4.1_gcc-10.4.0-intelmpi                      singularity/3.8.5_gcc-10.4.0
  intel-oneapi-advisor/2022.1.0_gcc-10.4.0             mvapich2/2.3.7_gcc-10.4.0                            singularity/3.8.7_gcc-10.4.0                           (D)
  intel-oneapi-advisor/2023.0.0_gcc-10.4.0      (D)    netlib-lapack/3.10.1_gcc-10.4.0                      starpu/1.3.9_gcc-10.4.0
  intel-oneapi-ccl/2021.6.0_gcc-10.4.0-intelmpi        netlib-scalapack/2.2.0_gcc-10.4.0-openmpi            ucx/1.13.1_gcc-10.4.0-compat
  intel-oneapi-ccl/2021.8.0_gcc-10.4.0-intelmpi (D)    opa-psm2/11.2.230_gcc-10.4.0                         ucx/1.13.1_gcc-10.4.0                                  (D)
  intel-oneapi-compilers/2022.1.0_gcc-10.4.0           openblas/0.3.20_gcc-10.4.0                           valgrind/3.19.0_gcc-10.4.0-intelmpi
  intel-oneapi-compilers/2023.0.0_gcc-10.4.0    (D)    openjdk/1.8.0_265-b01_gcc-10.4.0                     valgrind/3.19.0_gcc-10.4.0-openmpi
  intel-oneapi-dpl/2021.7.0_gcc-10.4.0                 openmpi/4.1.4_gcc-10.4.0                             valgrind/3.20.0_gcc-10.4.0-intelmpi                    (D)
  intel-oneapi-dpl/2022.0.0_gcc-10.4.0          (D)    openmpi/4.1.5_gcc-10.4.0                      (D)    vtk/9.0.3_gcc-10.4.0-intelmpi
  intel-oneapi-inspector/2022.1.0_gcc-10.4.0           petsc/3.17.4_gcc-10.4.0-intelmpi                     vtk/9.0.3_gcc-10.4.0-openmpi                           (D)
  intel-oneapi-inspector/2023.0.0_gcc-10.4.0    (D)    pmix/4.1.2_gcc-10.4.0
 
 ----------------------------------------- /grid5000/spack/modules-others/modules --------------------------------------------------------------------------------------
  conda/23.3.1    conda/23.5.0 (D)    ddt/20.1.2    matlab/R2022a    matlab/R2022b (D)    matlab-runtime/R2022a    matlab-runtime/R2022b (D)
 
   Where:
    D:  Default Module
 
 Use "module spider" to find all possible modules.
 Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".


Note

The complete list of modules, with a quick description and the installed versions, is available on the Modules List page.
If you need additional software to be installed, feel free to contact the Grid'5000 support team and we can look into it.


To load something into your environment use the load command:

$ module load gcc
$ gcc --version
gcc (Spack GCC) 12.2.0

By default, module loads the latest available version of a package (versions are sorted in lexicographical order; the default is marked (D) in the listing above). You can also specify the version you need:

$ module load gcc/10.4.0_gcc-10.4.0
$ gcc --version
gcc (Spack GCC) 10.4.0

You can also get more information about a package using the whatis or show subcommands:

$ module whatis gcc
$ module show gcc

To unload one module, or all currently loaded modules, use:

$ module unload gcc
$ module purge

The full documentation of the module command is available at http://modules.sourceforge.net/
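In practice, it helps to pin exact module versions in a small job script rather than relying on the (D) defaults, so experiments stay reproducible. A minimal sketch (the versions are taken from the listing above; setup_env.sh is a hypothetical file name):

```shell
# Write a job script that pins exact module versions (taken from the
# `module av` listing above); the module commands run later on a node.
cat > setup_env.sh <<'EOF'
#!/bin/bash
set -e
module purge                          # start from a clean environment
module load gcc/10.4.0_gcc-10.4.0     # pin exact versions instead of
module load openmpi/4.1.4_gcc-10.4.0  # relying on the (D) defaults
module list                           # record what was loaded
EOF
chmod +x setup_env.sh
```

Sourcing such a script at the start of every job makes the loaded toolchain explicit and repeatable across reservations.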

Modules that require connection to license server

Some modules are licensed and require a connection to a license server. We currently provide two license-protected modules, MatLab and DDT/MAP, with easy access to some institutional license servers. To use them, you just need to join the appropriate group in the User Management Service.

Group                 | Software | Server                   | Usage                                                 | Allowed users
matlab-inria-sophia   | matlab   | matlab-sam.inria.fr      | export LM_LICENSE_FILE=29030@jetons.inria.fr          | MatLab access for members of teams and services of Centre Inria d'Université Côte d'Azur
matlab-inria-rennes   | matlab   | licence.irisa.fr         | export LM_LICENSE_FILE=1731@licence.irisa.fr          | MatLab access for members of teams and services of either Centre Inria de l'Université de Rennes or UMR IRISA
matlab-inria-nancy    | matlab   | flexlm1.univ-lorraine.fr | export LM_LICENSE_FILE=27000@flexlm1.univ-lorraine.fr | MatLab access for members of teams and services of Centre Inria de l’Université de Lorraine
matlab-inria-lyon     | matlab   | jetons.inria.fr          | export LM_LICENSE_FILE=29030@jetons.inria.fr          | MatLab access for members of teams and services of Centre Inria de Lyon
matlab-inria-grenoble | matlab   | jetons.inria.fr          | export LM_LICENSE_FILE=29030@jetons.inria.fr          | MatLab access for members of teams and services of Centre Inria de l'Université Grenoble Alpes
ddt-inria-all         | ddt      | jetons.inria.fr          | -                                                     | Access to DDT for members of Inria teams and services

Last generated from the Grid'5000 API on 2023-11-13

We plan to add other institutional license servers on this list.


MatLab example

  • Make sure you are a member of a MatLab group in the User Management Service. If not, ask to join the group.
  • Set up the link to the MatLab license server (example for matlab-inria-grenoble; see the table above for your group):
 (node):~$ export LM_LICENSE_FILE=29030@jetons.inria.fr
  • Load the matlab module (with the version matching your provided license), for example:
 (node):~$ module load matlab/R2022b
  • Start MatLab without graphical environment:
 (node):~$ matlab -nodisplay -nodesktop -nosplash
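The steps above can be combined into a single wrapper for non-interactive runs. A sketch, assuming the matlab-inria-grenoble license server from the table above; my_script.m and run_matlab.sh are hypothetical names:

```shell
# Write a wrapper that runs a MatLab script non-interactively on a node.
# my_script.m is a hypothetical user script; the license server is the
# one for the matlab-inria-grenoble group (see the table above).
cat > run_matlab.sh <<'EOF'
#!/bin/bash
export LM_LICENSE_FILE=29030@jetons.inria.fr
module load matlab/R2022b
matlab -nodisplay -nodesktop -nosplash -r "run('my_script.m'); exit"
EOF
chmod +x run_matlab.sh
```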
Warning

As of today, this only works when a node is entirely reserved by your job (all CPU cores). Please see the note below for an alternative.

Note

If you have access to other licenses from your workstation, you can also use an SSH port redirection. For example, if you have access to the Université de Lorraine license, you can:

  • Connect from your workstation to your previously reserved node with an SSH port redirection:
 (workstation):~$ ssh -R 27000:flexlm1.univ-lorraine.fr:27000 -R 27001:flexlm1.univ-lorraine.fr:27001 <node>.<site>.g5k
  • Set up the link to your MatLab license server through the SSH tunnel:
 (node):~$ export LM_LICENSE_FILE=27000@127.0.0.1
  • Load the MatLab module and start it (see above)

Note: No MatLab License server is needed to use MatLab Runtime on nodes.

DDT/Map example

In the following example, we use the Inria license server to run ddt because the DDT/MAP module is currently supported only for Inria members.

  • Make sure you are a member of the ddt-inria-all group in the User Management Service. If not, ask to join the group.
  • Connect from your workstation to your previously reserved node.
 (workstation):~$ ssh -YC <node>.<site>.g5k
  • Load the DDT module named ddt (with the version matching your provided license), for example:
 (node):~$ module load ddt
  • Start DDT using the GUI:
 (node):~$ ddt
Note

If you want to specify other licenses, you can place them in ~/ddt/Licence.client.ddt and ~/ddt/Licence.client.map.

Sharing modules between users

Modules can be shared between users by using a group storage.

Note

This documentation assumes that you already know what a group storage is, and that you already have one for storing your modules. If that is not the case, check the Group Storage documentation.

Put the modules in a dedicated directory of the group storage, so that they are reachable by any user belonging to the group storage.

On the access machine:
 (access):~$ rsync -a modules-dir/ /srv/my-storage@storage_fqdn/modules-dir

After adding the directory to the MODULEPATH environment variable, the modules inside it become loadable by any user of the group. This can be done with either of the following commands.

On a node:
 (node):~$ module use /srv/my-storage@storage_fqdn/modules-dir

or

On a node:
 (node):~$ export MODULEPATH=/srv/my-storage@storage_fqdn/modules-dir:$MODULEPATH


Finally, the modules can be loaded.

On a node:
 (node):~$ module load my-personal-module


If you are not familiar with modules, you will find here a sample setup to build and share a set of modules of your own.
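As a starting point, a shared module can be a single Lua modulefile placed in the shared directory (Lmod reads .lua modulefiles). A minimal sketch, where my-tool, its version, and the install prefix are hypothetical placeholders:

```shell
# Create a minimal Lmod modulefile for a hypothetical tool installed in
# the group storage; after `module use .../modules-dir`, group members
# will see my-tool/1.0 in `module av`.
mkdir -p modules-dir/my-tool
cat > modules-dir/my-tool/1.0.lua <<'EOF'
help([[my-tool 1.0 -- example module shared via a group storage]])
whatis("Name: my-tool")
-- hypothetical install prefix inside the group storage
prepend_path("PATH", "/srv/my-storage@storage_fqdn/my-tool/1.0/bin")
EOF
```

The directory layout (modules-dir/<name>/<version>.lua) is what lets Lmod show versions and pick a default per tool name.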