Grid5000:Home
Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data, and AI.
Grid'5000 is merging with FIT to build the SILECS Infrastructure for Large-scale Experimental Computer Science. Read an Introduction to SILECS (April 2018)
Random pick of publications
Five random publications that benefited from Grid'5000 (at least 2758 overall):
- Ali Golmakani, Mostafa Sadeghi, Xavier Alameda-Pineda, Romain Serizel. A weighted-variance variational autoencoder model for speech enhancement. ICASSP 2024 - International Conference on Acoustics, Speech and Signal Processing, IEEE, Apr 2024, Seoul, South Korea. pp. 1-5, 10.1109/ICASSP48485.2024.10446294. hal-03833827v2
- Ismaël Tankeu, Geoffray Bonnin. Towards Characterising Induced Emotions: Exploiting Physiological Data and Investigating the Effect of Music Familiarity. MuRS 2024: 2nd Music Recommender Systems Workshop, Oct 2024, Bari, Italy. hal-04703972
- Pierre-François Gimenez, Jérôme Mengin. Learning Conditional Preference Networks: an Approach Based on the Minimum Description Length Principle. IJCAI 2024 - 33rd International Joint Conference on Artificial Intelligence, Aug 2024, Jeju, South Korea. pp. 3395-3403, 10.24963/ijcai.2024/376. hal-04572196
- Tom Hubrecht, Claude-Pierre Jeannerod, Paul Zimmermann, Laurence Rideau, Laurent Théry. Towards a correctly-rounded and fast power function in binary64 arithmetic. 2024. hal-04159652v2
- Rahma Hellali, Zaineb Chelly Dagdia, Karine Zeitouni. A Multi-Objective Multi-Agent Interactive Deep Reinforcement Learning Approach for Feature Selection. International Conference on Neural Information Processing, Dec 2024, Auckland, New Zealand. pp. 15. hal-04723314
Latest news
Cluster "musa" with Nvidia H100 GPUs is available in production queue
We are pleased to announce that a new cluster named "musa" is available in the production queue¹ of Abaca.
This cluster has been funded by Inria DSI as a shared computing resource.
It is accessible to all Abaca users. Users affiliated with Inria have access with the same level of priority, regardless of the research center to which they are attached.
This cluster is composed of six HPE ProLiant DL385 Gen11 nodes², each with two AMD EPYC 9254 24-core processors, 512 GiB of RAM, two Nvidia H100 NVL GPUs (94 GiB) with NVLink, one 6 TB NVMe SSD, and a 25 Gbps Ethernet connection.
Please note that, in order to share it efficiently, the maximum walltime on this cluster is limited.
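For instance, a job targeting this cluster could be submitted as follows (an illustrative sketch: the walltime value is hypothetical and must fit within the configured limits):

```shell
# Reserve one musa node in the production queue for two hours
# (illustrative walltime; actual limits depend on your priority level)
oarsub -q production -p musa -l walltime=2:00 "nvidia-smi"
```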
The cluster "musa" is located at Sophia, hosted in the datacenter of Inria Centre at Université Côte d’Azur.
¹: https://api.grid5000.fr/explorer/hardware/sophia/#musa
²: the nodes are named musa-1 to musa-6
-- Grid'5000 Team 13:30, 19 March 2025 (CEST)
Cluster "Hydra" is now in the testing queue in Lyon
We are pleased to announce that the hydra[1] cluster of Lyon is now available in the testing queue.
Hydra is a cluster composed of 4 NVIDIA Grace-Hopper servers[2].
Due to its bleeding-edge hardware, the usual Grid'5000 environments are not supported by default on this cluster (Hydra requires system environments featuring a Linux kernel >= 6.6). The default system on the hydra nodes is based on Debian 11, but **does not provide functional GPU support**. However, users may deploy the ubuntugh2404-arm64-big environment, which is similar to the official Nvidia image provided for this machine and does provide GPU support.
To submit a job on this cluster, the following command may be used:
oarsub -q testing -t exotic -p hydra
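To get working GPUs, the deployment step described above can be sketched as follows (a hypothetical workflow; exact kadeploy3 options may vary with the installed version):

```shell
# Reserve a hydra node in deploy mode (interactive job)
oarsub -q testing -t exotic -t deploy -p hydra -I
# From the job's shell, deploy the GPU-enabled environment on the
# reserved node(s) listed in $OAR_NODE_FILE
kadeploy3 -e ubuntugh2404-arm64-big -f $OAR_NODE_FILE -k
```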
This cluster is funded by Inria and by the Laboratoire de l'Informatique du Parallélisme, with support from ENS Lyon.
[1] Hydra is the largest of the modern constellations according to Wikipedia: https://en.wikipedia.org/wiki/Hydra_(constellation)
[2] https://developer.nvidia.com/blog/nvidia-grace-hopper-superchip-architecture-in-depth/
-- Grid'5000 Team 16:10, 11 March 2025 (CEST)
New Access Rules for Production Queue
We are introducing new access rules for clusters in the production queue. Clusters in this queue are now accessed based on priority levels that reflect their funding sources. Jobs submitted at higher priority levels are scheduled before those at lower levels and may also have longer maximum durations.
For more detailed information, please visit https://www.grid5000.fr/w/Production#Using_production_resources . Note that job submission commands that worked previously will continue to work after this change (but you might get a higher or lower priority depending on your resource selection).
You can check your priority level for each cluster using https://api.grid5000.fr/explorer . Currently, this tool only displays basic information; however, we plan to add more features soon.
Please be aware that these changes apply only to the production clusters, which are currently available only in Nancy and Rennes. There are no changes to the "default" queue.
If you encounter any issues or have feedback regarding this new feature, or if you believe your priority level on specific resources is not adequate, please contact us at <support-staff@lists.grid5000.fr>.
-- Grid'5000 Team 09:25, 4 June 2024 (CEST)
Cluster "estats" is now in the default queue in Toulouse
We are pleased to announce that the estats cluster of Toulouse (the name refers to Pica d'Estats) is now available in the default queue.
As a reminder, estats is composed of 12 edge-class nodes powered by Nvidia AGX Xavier SoCs.
Since it is not a cluster of server-class machines (unlike all other current Grid'5000 clusters), estats runs a different default system environment, but other common functionalities are the same (kadeploy, etc.), except for kavlan, which is not supported yet.
For experiments, it is recommended to deploy Ubuntu L4T.
More information is available on the Jetson page.
The cluster was funded by a CNRS grant.
-- Grid'5000 Team 09:51, 6 March 2024 (CEST)
Grid'5000 sites
Current funding
Since June 2008, Inria has been the main contributor to Grid'5000 funding.
Inria
CNRS
Universities: IMT Atlantique
Regional councils: Aquitaine