Production

{{Portal|User}}
{{Note|text=2025-01-30 - '''A specific documentation Web site for Abaca will go live shortly. In the meantime, specific pages for “Production” use are hosted in the Grid'5000 documentation.'''}}


= Introduction =
{{Note|text='''Abaca is the name of Inria's national computing infrastructure dedicated to production applications.'''
Abaca clusters are hosted on Inria sites alongside clusters dedicated to the Grid'5000 platform. Abaca and Grid'5000 use the same technical management tools, and the Abaca and Grid'5000 support teams work together to administer both platforms.<br>
'''In the remainder of this document, “Production” refers to the use of the Abaca platform.'''}}
 
The Abaca usage rules differ from the rest of Grid'5000.


= Using Production resources =


== Getting an account ==
Users from the '''Inria''' [https://www.inria.fr/en/inria-research-centres research centres] who want access for production usage must use the '''[[Special:G5KRequestAccountUMS|request form]]''' to open an account, like regular Grid'5000 users.


* The following fields must be filled as follows:
** ''Group Granting Access'' (GGA): the group '''named after the research team'''
** ''Laboratory'': the name of your Inria research centre, or LORIA or IRISA
** ''Team'': the name of your research team.


Other users from Nancy (not belonging to the Loria laboratory) can ask to join using the '''<code>nancy-misc</code>''' Group Granting Access, while other users from Rennes (not belonging to the Irisa laboratory) can ask to join using the '''<code>rennes-misc</code>''' Group Granting Access.
* Users are automatically subscribed to the Grid'5000 users mailing list: [mailto:users@lists.grid5000.fr users@lists.grid5000.fr]. This list is the user-to-user and user-to-admin channel for help/support requests on Grid'5000. The technical team can be reached at [mailto:support-staff@lists.grid5000.fr support-staff@lists.grid5000.fr].


== Visualizing resources ==


{{Note|text='''As of 2025-02-01, only the Nancy, Rennes, Grenoble and Sophia sites host clusters for Production use (Abaca).'''}}
 
See [https://api.grid5000.fr/explorer/hardware/ Hardware] to learn about each site's resources and your priority access to them.
 
== Using resources ==
 
The Production usage rules differ from the rest of Grid'5000:
* Advance reservations (<code>oarsub -r</code>) are not allowed (to avoid fragmentation). Only submissions (and reservations that start immediately) are allowed.
* All Grid'5000 users can use Production nodes (provided they meet the conditions stated in [[Grid5000:UsagePolicy]]), but they are expected to use their local Production resources first, and to reserve these resources mostly for tasks that require Grid'5000 features.
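The rules above can be illustrated with a minimal submission sketch. This is an assumption-laden example, not site policy: the queue name comes from this page, but <code>./my_job.sh</code> and the resource values are placeholders.

```shell
# Sketch: compose an immediate batch submission for the production queue.
# './my_job.sh', nodes=1 and walltime=2 are illustrative placeholders.
QUEUE=production
RESOURCES="nodes=1,walltime=2"
CMD="oarsub -q $QUEUE -l $RESOURCES ./my_job.sh"
echo "$CMD"
# An advance reservation such as 'oarsub -r "2025-03-01 09:00:00"' would be
# refused in the production queue; only immediate submissions are accepted.
```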


To access production resources, you need to submit jobs to the production queue using the <code>-q production</code> option. Job submissions in the production queue are prioritized based on who funded the material. There are four levels of priority, each with a maximum job duration:
* '''p1''' -- 168h (one week)
* '''p2''' -- 96h (four days)
* '''p3''' -- 48h (two days)
* '''p4''' -- 24h (one day)
* You may also have access to the clusters on [[Production#Can_I_use_besteffort_jobs_in_production_?|besteffort]].




{{Note|text=Moreover, with '''p1''' priority, users can submit advance reservations. More information is available on the [[Advanced_OAR#Batch_jobs_vs._advance_reservation_jobs|Advanced OAR page]]. For example, to reserve one week from now: {{Term|location=fnancy|cmd=<code class="command">oarsub</code> <code>-q p1</code> <code>-r "$(date +'%F %T' --date='+1 week')"</code>}}
The '''p1''' priority level also allows extending the duration of a job. The extension is only applied 24h before the end of the job and cannot exceed 168h. More information about this feature can be found on the [[Advanced_OAR|Advanced OAR page]].
}}
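As a sketch of how the reservation timestamp in the example above is built (GNU <code>date</code>, as found on the frontends, is assumed):

```shell
# OAR's -r option expects a "YYYY-MM-DD HH:MM:SS" timestamp.
# GNU date computes one relative to now; '+1 week' mirrors the note above.
START="$(date +'%F %T' --date='+1 week')"
echo "$START"
# With p1 priority, this timestamp could feed an advance reservation:
#   oarsub -q p1 -r "$START"
```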


{{Warning|text=These limits '''DO NOT''' replace the [[Production:FAQ#I_submitted_a_job,_there_are_free_resources,_but_my_job_doesn't_start_as_expected!|maximum walltime per node]], which is still in effect.}}
 
You can check your priority level for any cluster using https://api.grid5000.fr/explorer.


{{Note|text=As of today, the resources explorer only shows basic information. Additional information will be added in the near future.}}
When submitting a job, by default, you will be placed at the highest priority level that allows you to maximize resources:
{{Term|location=fnancy|cmd=<code class="command">oarsub</code> <code>-q production</code> <code>-I</code>}}
''Using the command above will generally place your job at the lowest priority to allow usage of all clusters, even those where your priority is '''p4'''.''


When you specify a cluster, your job will be set to your highest priority level for that cluster:
{{Term|location=fnancy|cmd=<code class="command">oarsub</code> <code>-q production</code> <code class="replace">-p grele</code> <code>-I</code>}}


You can also limit a job submission to a cluster at a specific priority level using <code>-q</code> <code class="replace">PRIORITY_LEVEL</code>:
{{Term|location=fnancy|cmd=<code class="command">oarsub</code> <code>-q p2</code> <code>-l nodes=2,walltime=90</code> <code>'./yourScript.py'</code>}}
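A note on the walltime syntax: OAR accepts an explicit <code>HH:MM:SS</code> value, and a bare number such as <code>walltime=90</code> is, to our understanding, interpreted as whole hours. A small helper sketch (the function name is ours, not an OAR tool):

```shell
# Convert whole hours into OAR's explicit HH:MM:SS walltime form,
# e.g. walltime=90 becomes walltime=90:00:00.
hours_to_walltime() {
  printf '%d:00:00' "$1"
}
hours_to_walltime 90    # prints "90:00:00"
```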
== Dashboards and status pages ==


* [https://www.grid5000.fr/status/ planned and ongoing maintenances, events and issues on Abaca or Grid'5000]
 
== Resources reservations (OAR) status ==
 


{|
|bgcolor="#aaaaaa" colspan="10"|
'''Drawgantt''' ''(past, current and future OAR jobs scheduling)''
|-
|bgcolor="#ffffff" valign="top" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[https://intranet.grid5000.fr/oar/Grenoble/drawgantt-svg-prod/ '''Grenoble nodes (production)''']<br>
|bgcolor="#ffffff" valign="top" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[https://intranet.grid5000.fr/oar/Nancy/drawgantt-svg-prod/ '''Nancy nodes (production)''']<br>
|bgcolor="#ffffff" valign="top" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[https://intranet.grid5000.fr/oar/Rennes/drawgantt-svg-prod/ '''Rennes nodes (production)''']<br>
|bgcolor="#ffffff" valign="top" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[https://intranet.grid5000.fr/oar/Sophia/drawgantt-svg-prod/ '''Sophia nodes (production)''']<br>
|-
|bgcolor="#aaaaaa" colspan="10"|
'''Monika''' ''(current placement and queued jobs status)''
|-
|bgcolor="#ffffff" valign="top" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[https://intranet.grid5000.fr/oar/Grenoble/monika-prod.cgi '''Grenoble (production)''']
|bgcolor="#ffffff" valign="top" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[https://intranet.grid5000.fr/oar/Nancy/monika-prod.cgi '''Nancy (production)''']
|bgcolor="#ffffff" valign="top" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[https://intranet.grid5000.fr/oar/Rennes/monika-prod.cgi '''Rennes (production)''']
|bgcolor="#ffffff" valign="top" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[https://intranet.grid5000.fr/oar/Sophia/monika-prod.cgi '''Sophia (production)''']
|}


== Learning to use Production ==

Refer to the [[Production:Getting Started]] Production tutorial (derived from the [[Getting Started]] Grid'5000 tutorial).


= Information and support =

{{Note|text=For the time being, access to support is common to both Abaca and Grid'5000.}}


Before asking for support, you're advised to verify that your issue is not already documented on the Grid'5000 website. In particular, you should check:
* the [[Production:Getting_Started]] page, for general usage
* the [[Production:FAQ]]
* the [https://www.grid5000.fr/status/ events status page], for ongoing maintenances or incidents

You may contact the support staff directly by sending an e-mail to [mailto:support-staff@lists.grid5000.fr support-staff@lists.grid5000.fr].

''Latest revision as of 18:29, 5 February 2025''