Grid'5000

Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data, and AI.

Key features:

  • provides access to a large amount of resources: 15,000 cores and 800 compute nodes grouped in homogeneous clusters, featuring various technologies: PMEM, GPU, SSD, NVMe, 10G and 25G Ethernet, InfiniBand, Omni-Path
  • highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiment at the networking layer
  • advanced monitoring and measurement features for collecting traces of network activity and power consumption, providing a deep understanding of experiments
  • designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
  • a vibrant community of 500+ users supported by a solid technical team


Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.

Grid'5000 is merging with FIT to build the SILECS Infrastructure for Large-scale Experimental Computer Science. Read an Introduction to SILECS (April 2018).



Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).


Current status (at 2024-07-16 02:03): 2 current events, 2 planned


Random pick of publications

Five random publications that benefited from Grid'5000 (at least 2515 overall):

  • Mohamed Maouche, Brij Mohan Lal Srivastava, Nathalie Vauquier, Aurélien Bellet, Marc Tommasi, et al. Enhancing speech privacy with slicing. Interspeech 2022 - Human and Humanizing Speech Technology, Sep 2022, Incheon, South Korea. hal-03369137v2
  • Guillaume Briffoteaux, Nouredine Melab, Mohand Mezmaz, Daniel Tuyttens. Hybrid Acquisition Processes in Surrogate-based Optimization. Application to Covid-19 Contact Reduction. BIOMA 2022 - International Conference on Bioinspired Optimisation Methods and Their Applications, Nov 2022, Maribor, Slovenia. hal-03770742
  • Igor Fontana de Nardin, Patricia Stolf, Stéphane Caux. Evaluation of Heuristics to Manage a Data Center Under Power Constraints. 13th International Green and Sustainable Computing Conference (IGSC 2022), Oct 2022, Pullman, United States. 10.1109/IGSC55832.2022.9969362. hal-03841713
  • Clément Courageux-Sudan, Anne-Cécile Orgerie, Martin Quinson. A Wi-Fi Energy Model for Scalable Simulation. WoWMoM 2023 - 24th IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, Jun 2023, Boston (MA), United States. pp.1-10. hal-04055720
  • Marie Delavergne, Geo Johns Antony, Adrien Lebre. Cheops, a service to blow away Cloud applications to the Edge. Research Report RR-9486, Inria Rennes - Bretagne Atlantique. 2022, pp.1-16. hal-03770492v2


Latest news

New Access Rules for Production Queue

We are introducing new access rules for clusters in the production queue. Clusters in this queue are now accessed based on priority levels that reflect their funding sources. Jobs submitted at higher priority levels are scheduled before those at lower levels and may also have longer maximum durations.

For more detailed information, please visit https://www.grid5000.fr/w/Production#Using_production_resources. An important thing to note is that job submission commands that worked previously will continue to work after this change (but you might get a higher or lower priority depending on your resource selection).

You can check your priority level for each cluster using https://api.grid5000.fr/explorer . Currently, this tool only displays basic information; however, we plan to add more features soon.

Please be aware that these changes apply only to the production clusters, which are currently available only in Nancy and Rennes. There are no changes to the "default" queue.
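As a sketch of what this looks like in practice (the resource selection, walltime, cluster name, and script name below are illustrative, not prescriptive), submitting to the production queue uses the same oarsub syntax as before; only the scheduling behind it changes:

```shell
# Batch job in the production queue: the scheduler assigns a priority
# level based on the resources selected and your group's funding-based access.
oarsub -q production -l host=1,walltime=2:00:00 "./my_experiment.sh"

# Interactive job on a specific production cluster, selected via an OAR
# property filter (cluster name is an example from the Nancy site):
oarsub -q production -p "cluster='grvingt'" -l host=1 -I
```

Your effective priority for each cluster can then be checked in the resource explorer mentioned above.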

If you encounter any issues or have feedback regarding this new feature, or if you believe your priority level on specific resources is not adequate, please contact us at <support-staff@lists.grid5000.fr>.

-- Grid'5000 Team 9:25, June 4th 2024 (CEST)

Cluster "estats" is now in the default queue in Toulouse

We are pleased to announce that the estats cluster of Toulouse (the name refers to Pica d'Estats) is now available in the default queue.

As a reminder, estats is composed of 12 edge-class nodes powered by Nvidia AGX Xavier SoCs. Each node features:

  • 1 ARM64 CPU (Nvidia Carmel micro-arch) with 8 cores
  • 1 Nvidia GPU (Nvidia Volta micro-arch)
  • 32 GB RAM shared between CPU and GPU
  • 1 NVMe of 2TB
  • 1 Gbps NIC
Since it is not a cluster of server-class machines (unlike all other current Grid'5000 nodes), estats runs a different default system environment, but other common functionalities are the same (kadeploy etc., except kavlan, which is not supported yet).

For experiments, it is recommended to deploy Ubuntu L4T.

More information is available on the Jetson page.

The cluster was funded by a CNRS grant.

-- Grid'5000 Team 9:51, March 6th 2024 (CEST)

The big variant of Debian 12 "Bookworm" environments is ready for deployments

We are pleased to inform you that the big variant of Debian 12 (Bookworm) environments is now supported for deployments in Grid'5000. Check `kaenv3 -l debian12%` for detailed information.

Notably, the NVIDIA driver has been updated to version 535.129.03, and CUDA has been upgraded to version 12.2.2_535.104.05_linux for the amd64 architecture.

The default environment available on nodes will continue to be debian11-std for the foreseeable future.

Please refer to the updated wiki documentation¹ for guidance on Debian 12-min|nfs|big usage.

¹: https://www.grid5000.fr/w/Getting_Started#On_Grid.275000_reference_environments

-- Grid'5000 Team 14:21, Jan 22nd 2024 (CEST)
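As a hedged sketch of a typical workflow from a site frontend (the walltime is illustrative; the kaenv3 listing command is the one mentioned above):

```shell
# List the available Debian 12 environment variants (min, nfs, big):
kaenv3 -l debian12%

# Reserve a node with deployment rights, then deploy the big variant on it;
# $OAR_NODE_FILE is set by OAR inside the job, and -k copies your SSH key
# to the deployed node.
oarsub -t deploy -l host=1,walltime=1:00:00 -I
kadeploy3 -e debian12-big -f $OAR_NODE_FILE -k
```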

Cluster "montcalm" is now in the default queue in Toulouse

We are pleased to announce that the "montcalm" cluster is now available in the default queue of the Toulouse site, which makes the site full-fledged again!

This cluster consists of 10 HPE ProLiant DL360 Gen10+ nodes, each with 2 Intel Xeon Silver 4314 CPUs (16 cores per CPU), 256 GB of DDR4 RAM, and a 894 GB SSD.

Jobs submitted on the Toulouse site will run by default on this cluster.

Besides the "montcalm" cluster, the "edge-class" cluster "estats" is still available in the testing queue for now.

In order to support the SLICES-FR project, the site infrastructure has been funded by CNRS/INS2I and the "montcalm" cluster has been funded by University Paul Sabatier (UT3).

-- Grid'5000 Team 10:30, 18 Jan 2024 (CET)


Read more news

Grid'5000 sites

Current funding

Since June 2008, Inria has been the main contributor to Grid'5000 funding.

Inria

CNRS

Universities

IMT Atlantique
Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon

Regional councils

Aquitaine
Auvergne-Rhône-Alpes
Bretagne
Champagne-Ardenne
Provence Alpes Côte d'Azur
Hauts de France
Lorraine