Disk reservation
Revision as of 08:59, 19 November 2018
Note: This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.
Disk reservation consists in reserving nodes' hard disks, in order to store large datasets locally between your reservations and avoid having to move data to the nodes at the beginning of each node reservation. Typically, you can reserve disks for several days, store your data on those disks during your first node reservation, and then access the data during subsequent node reservations.
The table below shows the cluster compatibility with disk reservation.
Site | Cluster | Number of nodes | Number of reservable disks per node |
---|---|---|---|
Grenoble | yeti | 4 | 3 |
Lille | chiclet-8 | 1 | 1 |
Lille | chiclet-[1-7] | 7 | 2 |
Lille | chifflot | 8 | 5 |
Lyon | gemini | 2 | 4 |
Nancy | gros | 124 | 1 |
Nancy | grouille | 2 | 1 |
Rennes | parasilo | 27 | 5 |
Last generated from the Grid'5000 Reference API on 2024-11-05 (commit 359267a37d)
How it works
Disk reservation works with jobs of type deploy and jobs of type noop. You can reserve disks over a long period with a job of type noop. Then, you submit a job of type deploy on the same nodes as those where you reserved your disks.
When the job of type deploy starts, the disks you reserved are enabled by the RAID card of the node, and the other disks are disabled. Reserved disks can only be accessed by the user who reserved them (see also Security issues).
Usage
The main commands to reserve disks are given below.
The maximum duration of a disk reservation is defined in the Usage Policy.
Reserve disks and nodes at the same time
- How to reserve a node with only the main disk (none of the additional disks), on the grimoire cluster (this is unchanged from the way a node was reserved before the disk reservation mechanism existed):
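For instance, a sketch of such a classic reservation (the cluster name and resource request are illustrative; since no disk resources are requested, only the main disk is enabled):

```shell
# Classic reservation: no "type='disk'" resources are requested,
# so only the node's main (system) disk is enabled.
fnancy$ oarsub -I -t deploy -p "cluster='grimoire'" -l host=1
```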
- How to reserve a node with all its disks
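For instance, following the resource-request pattern used elsewhere on this page, a sketch (disk=ALL requests all the reservable disks of the host; the exact form of the request is an assumption based on the other examples):

```shell
# Reserve one grimoire node together with ALL of its reservable disks;
# the "+" combines the node (default resources) with its disk resources.
fnancy$ oarsub -I -t deploy -p "cluster='grimoire'" -l /host=1+{"type='disk'"}/host=1/disk=ALL
```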
- How to reserve nodes grimoire-1 and grimoire-2 with one reservable disk per node
fnancy$ oarsub -I -t deploy -p "host in ('grimoire-1.nancy.grid5000.fr','grimoire-2.nancy.grid5000.fr')" -l /host=2+{"type='disk'"}/host=2/disk=1
Note: the syntax of this last oarsub command is a bit awkward, so please be careful to reproduce it exactly.
Reserve disks and nodes separately
You may, for example, decide to reserve some disks for one week, but reserve the nodes hosting those disks only when you actually want to carry out an experiment.
First: reserve the disks
Since we first want to reserve only disks, we use the noop job type: with a noop job, OAR will not try to execute anything on the job's resources (which is what we want, since disk resources are not capable of executing programs).
(Please mind that jobs of type noop cannot be interactive: oarsub -I -t noop ... is not supported.)
Three examples:
Reserve two disks on grimoire-1 for one week, starting on 2018-01-01:
fnancy$ oarsub -r "2018-01-01 00:00:00" -t noop -l {"type='disk' and host='grimoire-1.nancy.grid5000.fr'"}/host=1/disk=2,walltime=168
Or reserve the first two disks on grimoire-2:
fnancy$ oarsub -r "2018-01-01 00:00:00" -t noop -l {"type='disk' and host='grimoire-2.nancy.grid5000.fr' and disk in ('sdb.grimoire-2', 'sdc.grimoire-2')"}/host=1/disk=2,walltime=168
Or reserve all disks on two nodes:
fnancy$ oarsub -r "2018-01-01 00:00:00" -t noop -l {"type='disk' and cluster='grimoire'"}/host=2/disk=ALL,walltime=168
Second: reserve the nodes
You can then reserve nodes grimoire-1 and grimoire-2 for 3 hours, in the usual way:
fnancy$ oarsub -I -t deploy -l {"host in ('grimoire-1.nancy.grid5000.fr', 'grimoire-2.nancy.grid5000.fr')"}/host=2,walltime=3
You must respect this order: reserve the disks first, then reserve the nodes. Otherwise, the disks you reserved will not be available on your nodes.
Show and use my reserved disks
Gantt diagrams with disk reservations
Reservations of both nodes (processors) and disks are displayed on the following Gantt diagrams:
- Grenoble: https://intranet.grid5000.fr/oar/Grenoble/drawgantt-svg-disks/
- Lille: https://intranet.grid5000.fr/oar/Lille/drawgantt-svg-disks/
Getting information about disk reservations from OAR and G5K APIs
- The OAR API shows the properties of each resource of a job. You can retrieve the properties of your reserved disks, such as disk or diskpath:
fnancy$ curl https://api.grid5000.fr/stable/sites/<site>/internal/oarapi/jobs/<job_id>/resources.json
(or resources.yaml)
- The Grid'5000 API in version sid also provides some details about disk reservations, under the "disks" key of the status and jobs APIs.
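As an illustration, the JSON returned by the resources endpoint can be filtered client-side to list only the disk resources. The payload shape below (resource objects under an "items" key, each with type, disk, and diskpath fields) is an assumption for this sketch; check the actual response of your OAR API version.

```python
# Sketch: extract reserved-disk properties from an OAR resources.json
# payload. The "items" structure and field names are assumptions.
def reserved_disks(payload):
    """Return (disk, diskpath) pairs for the resources of type 'disk'."""
    return [
        (r.get("disk"), r.get("diskpath"))
        for r in payload.get("items", [])
        if r.get("type") == "disk"
    ]

# Hypothetical sample payload for a job with one node and one disk:
sample = {"items": [
    {"type": "default", "host": "grimoire-1.nancy.grid5000.fr"},
    {"type": "disk", "host": "grimoire-1.nancy.grid5000.fr",
     "disk": "sdb.grimoire-1",
     "diskpath": "/dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:1:0"},
]}
print(reserved_disks(sample))
# [('sdb.grimoire-1', '/dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:1:0')]
```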
Show disks once connected on the machines
Once connected to a node in a deploy job with one or more disks reserved locally on the node, several tools can be used to manage the disk(s):
- lsblk will show you the block devices of your disks: sdb, sdc, ... (be careful: sda is the system disk);
- commands like fdisk or parted can be used to partition the disk, if needed;
- mkfs can be used to format the disk.

Mind that the platform provides access to the block devices, but does not manage partitioning or formatting.
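For example, a sketch of formatting and mounting a reserved disk (this assumes /dev/sdb is one of your reserved disks; check with lsblk first, and mind that mkfs erases the disk's contents):

```shell
# List block devices; sda is the system disk, the others are your
# reserved disks (assumed /dev/sdb below).
lsblk
# Create an ext4 filesystem on the reserved disk (erases its contents).
mkfs.ext4 /dev/sdb
# Mount it on a directory of your choice.
mkdir -p /mnt/mydata
mount /dev/sdb /mnt/mydata
```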
Security issues
The mechanism used to enable/disable disks is designed to prevent mistakes by other users. However, a malicious user could take control of the RAID card, enable any disk, and access or erase your data. Please take care to secure your data:
- keep a copy (backup) in a safe place, if relevant for your data;
- if your data is sensitive, consider using cryptographic mechanisms to protect it.
Also, the data on reserved disks is not automatically erased at the end of your job. If you don't want the next user to access it, you have to erase it yourself.
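One way to do so, sketched here under the assumption that /dev/sdb is one of your reserved disks, is to overwrite the device before your job ends:

```shell
# Overwrite the whole device once with zeros (destroys all data on it);
# /dev/sdb is a hypothetical reserved disk, check yours with lsblk.
shred -n 0 -z /dev/sdb
```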
Finally, note that no backup of the data stored on the reserved disks is made.