Hemera
Overview
Héméra is an INRIA large-scale initiative, started in 2010, that aims to demonstrate ambitious up-scaling techniques for large-scale distributed computing by carrying out several dimensioning experiments on the Grid'5000 infrastructure, to animate the scientific community around Grid'5000, and to enlarge the Grid'5000 community by helping newcomers make use of Grid'5000. It is not restricted to INRIA teams.
Publications around Grid'5000 are available as a HAL collection.
News
- 2014, December 3: V. Villebonnet (Hemera PhD student between the IRIT and Avalon teams) and L. Lefevre present "Towards Generalizing "Big.Little" for Energy Proportional HPC and Cloud Infrastructures" at the [http://www.swinflow.org/confs/susc2014/ SustainCom 2014 conference]
- 2014, October 8: B. Cornea (Hemera postdoc between the Myriads and Avalon teams) presents "Studying the energy consumption of data transfers in Clouds: the Ecofen approach" at the IEEE CloudNet 2014 conference (Luxembourg)
- 2014, July 1-2: Green Days@Rennes - Join and participate!
- 2014, May 5-7: Joint activities on Ecofen: end-to-end network energy profiling (between Avalon and Myriads), presented during the GreenTouch meeting, Milan, Italy
- 2013, November 28-29 : GreenDays@Lille
- 2013, June 25: Overview of Hemera given at the Inria Scientific Days
- 2013, March 12-15: Workshop on tools for experiments held at Nancy (slides available)
- 2013, February 11: Mid-term project evaluation
- 2013, January 28-29: Green Days@Luxembourg: Energy efficiency: what else / what next?
- 2012, November 3: Short presentation of Hemera at the Grid'5000 Winter School 2012.
- 2012, May 9-11: Ghislain Landry Tsafack (Hemera PhD student between INRIA RESO (Lyon) and IRIT (Toulouse)) presents his research on "DNA-inspired Scheme for Building the Energy Profile of HPC Systems" at the 1st International Workshop on Energy-Efficient Data Centres (E2DC)
- 2012, January 19-20: Green Days@Lyon (Slides of the presentation available)
- 2011, November 18: Support for Experimental Computer Science Workshop, SC'11, Seattle.
- 2011, November 14-18: Ghislain Landry Tsafack (Hemera PhD student between INRIA RESO (Lyon) and IRIT (Toulouse)) has received a student volunteer grant and will attend the Supercomputing 2011 event in Seattle, USA
- 2011, October 7: Presentation of Héméra at the INRA-INRIA day on large scale data management and high performance computing
- 2011, July 7: Zenith joins Hemera.
- 2011, May 31-June 1: Green Days@Paris.
- 2011, April 20: BOF of the WG Methodology at the Grid'5000 school.
- 2011, April 19: Hemera presentation at the Grid'5000 school.
- 2011, April 18-21: Hemera co-organized the Grid'5000 School 2011 at Reims.
- 2011, April 6th: Presentation of Hemera at the INRIA Research Grid Day.
- 2011, April 1st: Creation of a new scientific challenge: Orchestrating Experiments on the gLite Production Grid Middleware.
- 2010, October 5: The kickoff meeting was held in Paris.
Scientific Challenges
Scientific challenges concentrate the efforts of some partners on well-defined issues, such as a particular experiment on Grid'5000. Scientific challenges are expected to be bound in time. However, a scientific challenge may contain several sub-challenges, typically representing various steps toward the ultimate challenge.
Active scientific challenges
- System
- Energy Profiling of Large Scale Applications -- Laurent Lefèvre (AVALON), Jean-Marc Pierson (IRIT), Jean-Marc Menaud (ASCOLA)
- Robustness of Large Systems in Presence of High Churn -- Pierre Sens (REGAL), Jean-Marc Vincent (MESCAL)
- Programming Paradigm
- Large Scale Computing for Combinatorial Optimization Problems -- Bilel Derbel (DOLPHIN), Nouredine Melab (DOLPHIN)
- Scalable Distributed Processing Using the MapReduce Paradigm -- Gabriel Antoniu (KERDATA), Gilles Fedak (GRAAL)
- Domain Specific
- Thinking GRID for Electromagnetic Simulation of Oversized Structures -- Hervé Aubert (MINC-LAAS), Thierry Monteil (MRS-LAAS), Patricia Stolf (ASTRE-IRIT)
- Virtualization
- Large Scale Management of Virtual Machines -- Adrien Lèbre (ASCOLA)
Completed challenges
- Network
- Traffic Awareness -- Paulo Gonçalves (RESO), K. Avrachenkov (MAESTRO)
- System
- Orchestrating Experiments on the gLite Production Grid Middleware -- Lucas Nussbaum (ALGORILLE), Frédéric Suter (CC IN2P3).
- Programming Paradigm
- Low Level Component Model Enabling Performance Portability of HPC Applications -- Christian Perez (Avalon)
- Domain Specific
- Multi-parametric Intensive Stochastic Simulations for Hydrogeology -- Jocelyne Erhel (SAGE)
Working Groups
Working groups aim to animate part of the community around scientific themes.
The current list of working groups with their leaders is:
- Transparent, safe and efficient large scale computing -- Stéphane Genaud (ICPS), Fabrice Huet (OASIS)
- Energy Efficient Large Scale Experimental Distributed Systems -- Laurent Lefèvre (AVALON), Jean-Marc Menaud (ASCOLA)
- Bring Grids Power to Internet-Users thanks to Virtualization Technologies -- Adrien Lèbre (ASCOLA), Yvon Jégou (MYRIADS)
- Efficient exploitation of highly heterogeneous and hierarchical large-scale systems -- Olivier Beaumont (CEPAGE), Frédéric Vivien (GRAAL)
- Efficient management of very large volumes of information for data-intensive applications -- Gabriel Antoniu (KERDATA), Jean-Marc Pierson (ASTRE)
- Completing challenging experiments on Grid’5000 -- Lucas Nussbaum (ALGORILLE), Olivier Richard (MESCAL)
- Modeling Large Scale Systems and Validating their Simulators -- Martin Quinson (ALGORILLE), Arnaud Legrand (MESCAL)
- Network metrology and traffic characterization -- Paulo Gonçalves (RESO), Konstantin Avrachenkov (MAESTRO)
Matrix of participation in working groups.
Members
The current list of teams participating in Héméra is:
- ACADIE - Assistance à la Certification d’Applications DIstribuées et Embarquées
- ALGORILLE - Algorithms for the Grid
- APO - Algorithmes Parallèles et Optimisation
- ASAP - As Scalable As Possible: foundations of large scale dynamic distributed systems
- ASCOLA - Aspect and composition languages
- AVALON - Algorithms and Software Architectures for Distributed and HPC Platforms
- CC-IN2P3 - Equipe de recherche du Centre de Calcul de l'IN2P3
- CEPAGE - Chercher et Essaimer dans les Plates-formes A Grande Echelle
- DOLPHIN - Parallel Cooperative Multi-criteria Optimization
- GRAAL - Algorithms and Scheduling for Distributed Heterogeneous Platforms.
- GRAND-LARGE - Global parallel and distributed computing
- ICPS - Scientific Parallel Computing and Imaging
- KERDATA - Scalable Storage for Clouds and Beyond
- OASIS - Active objects, semantics, Internet and security.
- MAESTRO - Models for the performance analysis and the control of networks
- MESCAL - Middleware efficiently scalable
- MINC - MIcro et Nanosystèmes pour les Communications sans fils
- MRS - Modélisation et contrôle des Réseaux et Signaux
- MYRIADS - Design and Implementation of Autonomous Distributed Systems
- REGAL - Large-Scale Distributed Systems and Applications
- RESO - Protocols and Software for Very High-Performance Network
- RUNTIME - Efficient runtime systems for parallel architectures
- SAGE - Simulations and Algorithms on Grids for Environment
- SEPIA - Distributed Systems, from Architecture to Middleware (formerly ASTRE team)
- ZENITH - Scientific Data Management
Tools
Here we present the tools that have been developed by Hemera members to ease large-scale experiments on Grid'5000.
Experimental engines
Experiment planning
A new module has been added to Execo to directly access the OAR database and retrieve the planning of the various resources available on the Grid'5000 sites. This module can be used in a Python experiment script (see the [http://execo.gforge.inria.fr/doc/latest-stable/userguide.html#check-cpu-performance-settings-of-each-grid5000-clusters Execo planning example]).
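As an illustration, here is a minimal Python sketch of the intended workflow. The planning function names (get_planning, compute_slots, find_free_slot, get_jobs_specs) follow the Execo user guide linked above but should be checked against the installed version; the cluster and site names, walltime and job name are placeholder values.

 # Minimal sketch: query the OAR databases of the Grid'5000 sites and
 # look for a time slot offering the requested resources.
 # (Assumed API from execo_g5k.planning; placeholder resource names.)
 from execo.time_utils import format_date
 from execo_g5k.planning import (get_planning, compute_slots,
                                 find_free_slot, get_jobs_specs)
 
 # Resources wanted: 4 nodes on the 'griffon' cluster and 2 nodes
 # anywhere on the 'rennes' site, for a 2-hour experiment.
 resources_wanted = {'griffon': 4, 'rennes': 2}
 walltime = "2:00:00"
 
 # Retrieve the planning of the involved elements from the OAR databases
 planning = get_planning(elements=list(resources_wanted))
 
 # Split the planning into time slots annotated with the free resources
 slots = compute_slots(planning, walltime)
 
 # Pick the first slot where all the wanted resources are available
 start_date, end_date, resources = find_free_slot(slots, resources_wanted)
 print("Next free slot starts at " + format_date(start_date))
 
 # Turn the chosen resources into OAR job specifications, which can then
 # be submitted with the Execo OAR submission functions
 jobs_specs = get_jobs_specs(resources, name="hemera_experiment")

In a real experiment script, the returned job specifications would then be submitted through Execo's OAR interface and the experiment itself run once the reservation starts.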