Grid'5000

Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud computing, HPC, Big Data, and AI.

Key features:

  • provides access to a large amount of resources: 15,000 cores and 800 compute nodes grouped in homogeneous clusters, featuring various technologies: PMEM, GPU, SSD, NVMe, 10G and 25G Ethernet, InfiniBand, Omni-Path
  • highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiment at the networking layer
  • advanced monitoring and measurement features for the collection of networking and power consumption traces, providing a deep understanding of experiments
  • designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
  • a vibrant community of 500+ users supported by a solid technical team


Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.

Grid'5000 is merging with FIT to build the SILECS Infrastructure for Large-scale Experimental Computer Science. Read an Introduction to SILECS (April 2018)



Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).


Current status (at 2025-04-02 03:14): 2 current events, 6 planned (details)


Random pick of publications

Five random publications that benefited from Grid'5000 (at least 2758 overall):

  • Lucien Ngale, Eddy Caron, Huaxi Zhang, Mélanie Fontaine. Simulation-Based Estimation of Resource Needs in Fog Robotics Infrastructures. 13th International Conference on Cloud Computing and Services Science, Apr 2023, Prague, Czech Republic. pp.100-111, 10.5220/0012031300003488. hal-04151787 view on HAL pdf
  • Zhejiayu Ma, Frédéric Giroire, Guillaume Urvoy-Keller, Soufiane Roubia. Q-learning for Waiting Time Control in CDN/V2V Live streaming. 2023 IFIP Networking Conference (IFIP Networking), Jun 2023, Barcelona, Spain. pp.1-9, 10.23919/IFIPNetworking57963.2023.10186429. hal-04309215 view on HAL pdf
  • Hugo Thomas, Guillaume Gravier, Pascale Sébillot. Recherche de relation à partir d’un seul exemple fondée sur un modèle N-way K-shot : une histoire de distracteurs. 35èmes Journées d'Études sur la Parole (JEP 2024) 31ème Conférence sur le Traitement Automatique des Langues Naturelles (TALN 2024) 26ème Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RECITAL 2024), Jul 2024, Toulouse, France. pp.157-168. hal-04623015 view on HAL pdf
  • Maxime Lanvin, Pierre-François Gimenez, Yufei Han, Frédéric Majorczyk, Ludovic Mé, et al. Towards understanding alerts raised by unsupervised network intrusion detection systems. The 26th International Symposium on Research in Attacks, Intrusions and Defenses (RAID), Oct 2023, Hong Kong, China. pp.135-150, 10.1145/3607199.3607247. hal-04172470 view on HAL pdf
  • Etienne Le Louet, Antoine Blin, Julien Sopena, Ahmed Amamou, Kamel Haddadou. Effects of secured DNS transport on resolver performance. 2023 IEEE Symposium on Computers and Communications (ISCC), Jul 2023, Gammarth, Tunisia. pp.238-244, 10.1109/ISCC58397.2023.10217887. hal-04220131 view on HAL pdf


Latest news

Change of default queue based on platform

Until now, Abaca (production) users had to specify `-q production` when reserving Abaca resources with OAR.

This is no longer necessary: your default queue is now automatically selected based on the platform your default group is associated with, as shown at https://api.grid5000.fr/explorer/selector/ and in the message displayed when connecting to a frontend.

For SLICES-FR users, there is no change since the correct queue was already selected by default.

Additionally, the "production" queue has been renamed to "abaca", although "production" will continue to work for the foreseeable future.

Please note one case where this change may affect your workflow:

When an Abaca user reserves a resource from SLICES-FR (a non-production resource), they must explicitly specify that they want to use the SLICES-FR queue, which is called "default", by adding `-q default` to the OAR command.
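For instance, an Abaca user targeting a SLICES-FR cluster could submit a reservation like the following sketch (the cluster name `parasilo` and the walltime are only illustrative; replace them with the resource you actually want):

```shell
# Reserve one node from the SLICES-FR ("default") queue, interactively,
# for two hours. Without "-q default", an Abaca user's job would now go
# to their platform's default queue instead.
oarsub -q default -p parasilo -l host=1,walltime=2:00:00 -I
```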

-- Abaca Grid'5000 Team 10:10, 31 March 2025 (CEST)

Cluster "musa" with Nvidia H100 GPUs is available in production queue

We are pleased to announce that a new cluster named "musa" is available in the production queue¹ of Abaca.

This cluster has been funded by Inria DSI as a shared computing resource.

It is accessible to all Abaca users. Users affiliated with Inria have access with the same level of priority, regardless of the research center to which they are attached.

This cluster is composed of six HPE ProLiant DL385 Gen11 nodes², each with 2 AMD EPYC 9254 24-core processors, 512 GiB of RAM, 2 x Nvidia H100 NVL GPUs (94 GiB) with NVLink, one 6 TB NVMe SSD, and a 25 Gbps Ethernet connection.

Please note that in order to share it efficiently, walltime is limited:

  • 6 hours for the first two nodes
  • 24 hours for the next two
  • 48 hours for the last two

The cluster "musa" is located at Sophia, hosted in the datacenter of the Inria Centre at Université Côte d'Azur.

    ¹: https://api.grid5000.fr/explorer/hardware/sophia/#musa

    ²: the nodes are named musa-1, musa-2, …, musa-6

-- Grid'5000 Team 13:30, 19 March 2025 (CEST)

Cluster "Hydra" is now in the testing queue in Lyon

We are pleased to announce that the hydra[1] cluster of Lyon is now available in the testing queue.

Hydra is a cluster composed of 4 NVIDIA Grace Hopper servers[2].

Each node features:

  • 1 Nvidia Grace ARM64 CPU with 72 cores (Neoverse-V2)
  • 1 Nvidia Hopper GPU
  • 512 GB LPDDR5 memory
  • 96 GB HBM memory
  • 1 x 1 TB NVMe SSD + 1 x 1.92 TB SCSI disk

Due to its bleeding-edge hardware, the usual Grid'5000 environments are not supported by default on this cluster (Hydra requires a system environment with a Linux kernel >= 6.6). The default system on the hydra nodes is based on Debian 11, but **does not provide a functional GPU**. However, users may deploy the ubuntugh2404-arm64-big environment, which is similar to the official Nvidia image provided for this machine and provides GPU support.
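As a sketch of that workflow, assuming the standard Grid'5000 deployment tooling (see the Getting Started tutorial for the authoritative commands; the walltime below is only an example), a hydra node could be reserved with the deploy job type and then reinstalled with the GPU-enabled environment:

```shell
# Reserve one hydra node for deployment (testing queue, exotic hardware),
# then deploy the GPU-enabled Ubuntu environment onto the reserved node.
oarsub -q testing -t exotic -t deploy -p hydra -l host=1,walltime=2:00:00 -I
kadeploy3 -e ubuntugh2404-arm64-big -f $OAR_NODE_FILE
```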

To submit a job on this cluster, the following command may be used:

    oarsub -q testing -t exotic -p hydra

This cluster is funded by Inria and by the Laboratoire de l'Informatique du Parallélisme, with support from ENS Lyon.

[1] Hydra is the largest of the modern constellations according to Wikipedia: https://en.wikipedia.org/wiki/Hydra_(constellation)

[2] https://developer.nvidia.com/blog/nvidia-grace-hopper-superchip-architecture-in-depth/

-- Grid'5000 Team 16:10, 11 March 2025 (CEST)

New Access Rules for Production Queue

We are introducing new access rules for clusters in the production queue. Clusters in this queue are now accessed based on priority levels that reflect their funding sources. Jobs submitted at higher priority levels are scheduled before those at lower levels and may also have longer maximum durations.

For more detailed information, please visit https://www.grid5000.fr/w/Production#Using_production_resources . Note that job submission commands that worked previously will still work after this change (but you might get a higher or lower priority depending on your resource selection).

You can check your priority level for each cluster using https://api.grid5000.fr/explorer . Currently, this tool only displays basic information; however, we plan to add more features soon.

Please be aware that these changes apply only to the production clusters, which are currently available only in Nancy and Rennes. There are no changes to the "default" queue.

If you encounter any issues or have feedback regarding this new feature, or if you believe your priority level on specific resources is not adequate, please contact us at <support-staff@lists.grid5000.fr>.

-- Grid'5000 Team 9:25, 4 June 2024 (CEST)


Read more news

Grid'5000 sites

Current funding

Since June 2008, Inria has been the main contributor to Grid'5000 funding.

INRIA

CNRS

Universities

IMT Atlantique
Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon

Regional councils

Aquitaine
Auvergne-Rhône-Alpes
Bretagne
Champagne-Ardenne
Provence Alpes Côte d'Azur
Hauts de France
Lorraine