Grid5000:Home
__NOTOC__ __NOEDITSECTION__
{|width="95%"
|- valign="top"
|bgcolor="#888888" style="border:1px solid #cccccc;padding:2em;padding-top:1em;"|
[[File:Slices-ri-white-color.png|260px|left]]
<b>Grid'5000 is a precursor infrastructure of [http://www.slices-ri.eu SLICES-RI], the Scientific Large-Scale Infrastructure for Computing/Communication Experimental Studies.</b>
<br/>
Content on this website is partly outdated; the technical information remains relevant.
|}
{|width="95%"
|- valign="top"
|bgcolor="#f5fff5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[[Image:g5k-backbone.png|thumbnail|260px|right|Grid'5000]]
'''Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data and AI.'''

Key features:
<br>
Read more about our [[Team|teams]], our [[Publications|publications]], and the [[Grid5000:UsagePolicy|usage policy]] of the testbed. Then [[Grid5000:Get_an_account|get an account]], and learn how to use the testbed with our [[Getting_Started|Getting Started tutorial]] and the rest of our [[:Category:Portal:User|Users portal]].
<br>
Published documents and presentations:
* [[Media:Grid5000.pdf|Presentation of Grid'5000]] (April 2019)
* [https://www.grid5000.fr/mediawiki/images/Grid5000_science-advisory-board_report_2018.pdf Report from the Grid'5000 Science Advisory Board (2018)]
* [[Lille:Home|Lille]]
* [[Luxembourg:Home|Luxembourg]]
* [[Louvain:Home|Louvain]]
|width="33%" bgcolor="#f5f5f5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
* [[Lyon:Home|Lyon]]
* [[Nancy:Home|Nancy]]
* [[Nantes:Home|Nantes]]
* [[Rennes:Home|Rennes]]
|width="33%" bgcolor="#f5f5f5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
* [[Sophia:Home|Sophia-Antipolis]]
* [[Strasbourg:Home|Strasbourg]]
* [[Toulouse:Home|Toulouse]]
|-
== Current funding ==
{|width="100%" cellspacing="3"
|-
Revision as of 00:37, 7 June 2025
Older documents:
Random pick of publications
Five random publications that benefited from Grid'5000 (at least 2777 overall):
- Jean-Baptiste Decourcelle, Tu Dinh Ngoc, Boris Teabe, Daniel Hagimont. Fast VM Replication on Heterogeneous Hypervisors for Robust Fault Tolerance. Middleware '23: 24th International Middleware Conference, Dec 2023, Bologna, Italy. 14 p. doi:10.1145/3590140.3592849. hal-04095419.
- Alexandre Sabbadin, Abdel Kader Chabi Sika Boni, Hassan Hassan, Khalil Drira. Optimizing network slice placement using Deep Reinforcement Learning (DRL) on a real platform operated by Open Source MANO (OSM). Tunisian-Algerian Conference on Applied Computing (TACC 2023), Nov 2023, Sousse, Tunisia. hal-04265140.
- Josefine Umlauft, Christopher W. Johnson, Philippe Roux, Daniel Taylor Trugman, Albanne Lecointre, et al. Mapping Glacier Basal Sliding Applying Machine Learning. Journal of Geophysical Research: Earth Surface, 2023, 128 (11). doi:10.1029/2023JF007280. insu-04604354.
- Nicolas Hubert, Pierre Monnin, Armelle Brun, Davy Monticolo. Sem@K: Is my knowledge graph embedding model semantic-aware?. Semantic Web – Interoperability, Usability, Applicability, 2023, 14 (6), pp. 1273-1309. doi:10.3233/SW-233508. hal-04344975.
- Jean-Eudes Ayilo, Mostafa Sadeghi, Romain Serizel. Diffusion-based speech enhancement with a weighted generative-supervised learning loss. International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, Apr 2024, Seoul, South Korea. doi:10.48550/arXiv.2309.10457. hal-04210729v2.
Latest news
Cluster "vianden" is now in the default queue in Luxembourg
We are pleased to announce that the vianden[1] cluster of Luxembourg is now available in the default queue.
Vianden is a single-node cluster with 8 AMD MI300X GPUs.
The node features:
The AMD MI300X GPUs are not supported by the default Grid'5000 system environment (Debian 11). However, full GPU functionality can be unlocked by deploying the ubuntu2404-rocm environment:
fluxembourg$ oarsub -t exotic -t deploy -p vianden -I
fluxembourg$ kadeploy3 -m vianden-1 ubuntu2404-rocm
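Once the deployment completes, GPU visibility can be checked from the node itself. This is a sketch assuming the ubuntu2404-rocm image ships the standard ROCm tooling, which includes the rocm-smi utility:

vianden-1$ rocm-smi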
More information can be found on the Exotic page.
This cluster was funded by the University of Luxembourg.
[1] https://www.grid5000.fr/w/Luxembourg:Hardware#vianden
-- Grid'5000 Team 11:30, 27 June 2025 (CEST)
Cluster "hydra" is now in the default queue in Lyon
We are pleased to announce that the hydra[1] cluster of Lyon is now available in the default queue.
As a reminder, Hydra is a cluster composed of 4 NVIDIA Grace-Hopper servers[2].
Each node features:
Due to its bleeding-edge hardware, the usual Grid'5000 environments are not supported by default on this cluster (Hydra requires system environments featuring a Linux kernel >= 6.6). The default system on the hydra nodes is based on Debian 11, but does not provide functional GPUs. However, users may deploy the ubuntugh2404-arm64-big environment, which is similar to the official NVIDIA image provided for this machine and provides GPU support.
To submit a job on this cluster, the following command may be used:
oarsub -t exotic -p hydra
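To get working GPUs, the reservation is typically combined with a deployment of the environment mentioned above. The following sketch mirrors the pattern used on other exotic clusters; the frontend prompt and the node name hydra-1 are illustrative, not prescribed:

flyon$ oarsub -t exotic -t deploy -p hydra -I
flyon$ kadeploy3 -m hydra-1 ubuntugh2404-arm64-big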
This cluster is funded by INRIA and by Laboratoire de l'Informatique du Parallélisme with ENS Lyon support.
[1] Hydra is the largest of the modern constellations according to Wikipedia: https://en.wikipedia.org/wiki/Hydra_(constellation)
[2] https://developer.nvidia.com/blog/nvidia-grace-hopper-superchip-architecture-in-depth/
-- Grid'5000 Team 16:42, 12 June 2025 (CEST)
Cluster "estats" (Jetson nodes in Toulouse) is now kavlan capable
The network topology of the estats Jetson nodes can now be configured, just like for other clusters.
More info in the Network reconfiguration tutorial.
-- Grid'5000 Team 18:25, 21 May 2025 (CEST)
Cluster "chirop" is now in the default queue of Lille with energy monitoring.
Dear users,
We are pleased to announce that the Chirop[1] cluster of Lille is now available in the default queue.
This cluster consists of 5 HPE DL360 Gen10+ nodes with:
Energy monitoring[2] is also available for this cluster[3], provided by newly installed Wattmetres (similar to those already available at Lyon).
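As a sketch of how these measurements can be retrieved through the Grid'5000 API (the node name chirop-1 and the metric name wattmetre_power_watt are assumptions; check [3] for the metrics actually exposed):

flille$ curl 'https://api.grid5000.fr/stable/sites/lille/metrics?nodes=chirop-1&metrics=wattmetre_power_watt'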
This cluster was funded by CPER CornelIA.
[1] https://www.grid5000.fr/w/Lille:Hardware#chirop
[2] https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial
[3] https://www.grid5000.fr/w/Monitoring_Using_Kwollect#Metrics_available_in_Grid.275000
-- Grid'5000 Team 16:25, 05 May 2025 (CEST)
Current funding:
* INRIA
* CNRS
* Universities: IMT Atlantique
* Regional councils: Aquitaine