Disk reservation

Note.png Note

This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.

Disk reservation consists in reserving additional hard disks on nodes; these disks are otherwise not usable.

The table below shows the Grid'5000 clusters with such additional hard disks available for reservation.

Site       Cluster     Number of nodes   Number of reservable disks per node
Grenoble   yeti          4                 3
Lille      chiclet       8                 2
Lille      chifflot      8                 5
Lyon       gemini        2                 4
Nancy      grappe       16                 1
Nancy      gros        124                 1
Nancy      grouille      2                 1
Rennes     parasilo     27                 5

Last generated from the Grid'5000 Reference API on 2024-02-26 (commit 8c6cab022b)

How it works

Two use cases of disk reservation are possible:

Long-run reservations of disks only (jobs reserving no host, i.e. no processing power)
Disk-only reservations do not have to fit in the day vs. night-and-weekend host reservation policy, and can last up to many days (see Grid5000:UsagePolicy). The reserved disks can then be used by regular host jobs during the period of the disk reservation. In this use case, the goal is to get more persistence for the local storage of nodes, e.g. to avoid having to reformat disks and reimport datasets in each regular host job. Those long-run jobs must use the noop OAR job type.
Regular jobs reserving both hosts and disks
In this use case, the goal is to get access to the reservable disks within the experiment, just as if the disks did not have to be reserved separately.

In both cases, making use of the reserved disks requires gaining root privileges, since the disks are provided as bare-metal hardware to be partitioned, formatted, mounted and filled without restriction by the experimenter. As a result, the experimenter can use the reserved disks:

  • either in a non-deploy job, in the standard environment, after enabling sudo with the sudo-g5k command;
  • or in a deploy job, in a kadeployed environment (use the deploy OAR job type, then kadeploy).

Technically speaking, when a deploy job starts, or whenever sudo-g5k is called in a non-deploy job, the reserved disks stay available (as shown by lsblk) while the reservable disks you did not reserve are disabled and disappear.

Warning.png Warning

Mind that some disks may show up in lsblk, while not being reserved:

  • sda is the system disk and hosts the partitions of the running system.
  • non-reservable disks also show up every time, for any user: their access cannot be protected by a reservation since they are not reservable. Have a look at the hardware description of the cluster (in the site's hardware pages, e.g. Nancy:Hardware for the clusters of Nancy) to find out which disks are reservable.

Reserved disks can only be accessed by the user who reserved them.

Please note that reserved disks are not cleaned up at the end of the reservation. As a result:

  • Data left on the disks can be accessed by users in later reservations.
  • Reserved disks may first need to be cleaned up before use (remove previous formatting and partitioning).
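
Once connected on a node (see the sections below), you can check for such leftovers without modifying anything: wipefs, when run without options, only lists the signatures it detects on a device. The device name below is just an example:

# wipefs /dev/sdb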

See also Security issues.

Usage

The main commands to reserve disks are given below.

The maximum duration of a disk reservation is defined in the Usage Policy.

Note.png Note

In the following examples, add -t deploy to the oarsub command if you plan to deploy your own environment for your experiment.

Reserve disks and nodes at the same time

How to reserve a node with only the main disk (none of the additional disks), on the grimoire cluster
Terminal.png fnancy:
oarsub -I -p "cluster='grimoire'" -l /host=1

(This is no different from the way a node was reserved before the disk reservation mechanism existed.)

How to reserve a node with all its disks
Terminal.png fnancy:
oarsub -I -l {"(type='disk' or type='default') and cluster='grimoire'"}/host=1
How to reserve nodes grimoire-1 and grimoire-2 with one reservable disk per node
Terminal.png fnancy:
oarsub -I -p "host in ('grimoire-1.nancy.grid5000.fr','grimoire-2.nancy.grid5000.fr')" -l /host=2+{"type='disk'"}/host=2/disk=1
Note.png Note

Yes, the syntax of the last oarsub command is a bit awkward, so please be careful and make sure that:

  • the -p option explicitly sets the hosts you want (using "cluster='grimoire'" instead could not ensure that you get the disks on the same nodes you reserve);
  • both host= values in the -l option (2 in the example) exactly match the count of hosts in the list you provide in the -p option (grimoire-1.nancy.grid5000.fr and grimoire-2.nancy.grid5000.fr in the example);
  • you do not need to explicitly write "{type='default'}" in the -l option (before the /host=2+), because default is implied when the type is not set.
See Advanced OAR for more explanation of the oarsub syntax.

Reserve disks and nodes separately

You may, for example, decide to reserve some disks for one week, but reserve the nodes where your disks are located only when you want to carry out an experiment.

First: reserve the disks

Since we first want to reserve the disks only, we use the noop job type: with this job type, OAR will not try to execute anything on the job resources (which is what we want, since disk resources are not capable of executing programs).

(Please mind that jobs of type noop cannot be interactive: oarsub -I -t noop ... is not supported.)

Three examples:

Reserve two disks on grimoire-1 for one week, starting on 2018-01-01:

Terminal.png fnancy:
oarsub -r "2018-01-01 00:00:00" -t noop -l {"type='disk' and host='grimoire-1.nancy.grid5000.fr'"}/host=1/disk=2,walltime=168

Or reserve the first two disks on grimoire-2:

Terminal.png fnancy:
oarsub -r "2018-01-01 00:00:00" -t noop -l {"type='disk' and host='grimoire-2.nancy.grid5000.fr' and disk in ('sdb.grimoire-2', 'sdc.grimoire-2')"}/host=1/disk=2,walltime=168

Or reserve all disks on two nodes:

Terminal.png fnancy:
oarsub -r "2018-01-01 00:00:00" -t noop -l {"type='disk' and cluster='grimoire'"}/host=2/disk=ALL,walltime=168

Second: reserve the nodes

You can then reserve nodes grimoire-1 and grimoire-2 for 3 hours, in the usual way:

Terminal.png fnancy:
oarsub -I -l {"host in ('grimoire-1.nancy.grid5000.fr', 'grimoire-2.nancy.grid5000.fr')"}/host=2,walltime=3

You must respect this order: reserve the disks first, then reserve the nodes. Otherwise, the disks you reserved will not be available on your nodes.

Checking the state of reserved disks

Gantt diagrams with disk reservations

Reservations of both nodes (processors) and disks are displayed on the following Gantt diagrams:

Grenoble

Lille

Lyon

Nancy

Rennes

Getting information about disk reservations from OAR and G5K APIs

  • The OAR API shows the properties of each resource of a job. You can retrieve the properties of your reserved disks, such as disk or diskpath:
Terminal.png fnancy:
curl https://api.grid5000.fr/3.0/sites/site/internal/oarapi/jobs/job_id/resources.json (or resources.yaml)
  • The Grid'5000 API also provides some details about disk reservations under the "disks" key of the status and jobs APIs:
Terminal.png fnancy:
curl https://api.grid5000.fr/3.0/sites/site/status | json_pp
Terminal.png fnancy:
curl https://api.grid5000.fr/3.0/sites/site/jobs/job_id | json_pp
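
For instance, to quickly spot the disk-related information in these outputs, you can filter the pretty-printed JSON, or use jq if it is available on the frontend. The exact layout of the responses may vary, so have a look at the full output first:

Terminal.png fnancy:
curl -s https://api.grid5000.fr/3.0/sites/site/internal/oarapi/jobs/job_id/resources.json | json_pp | grep -E '"(disk|diskpath)"'
Terminal.png fnancy:
curl -s https://api.grid5000.fr/3.0/sites/site/status | jq '.disks'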

Using local disks once connected on the nodes

Log in as root on a node where you reserved one or more disks:

  • either use sudo-g5k -i from the standard environment to become root,
  • or log in with SSH as root to an environment you deployed.

All examples below assume that you are already logged in as root on the node.

Discovering available disks

The lsblk command lists all block devices. For instance, on a yeti machine in Grenoble, this might show:

# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 447.1G  0 disk 
├─sda1        8:1    0   3.7G  0 part [SWAP]
├─sda2        8:2    0  19.6G  0 part /
├─sda3        8:3    0  22.4G  0 part 
├─sda4        8:4    0     1K  0 part 
└─sda5        8:5    0 401.5G  0 part /tmp
sdc           8:32   0   1.8T  0 disk 
nvme0n1     259:0    0   1.5T  0 disk
nvme1n1     259:1    0   1.5T  0 disk

In this case:

  • sda is the system disk, so it is always available
  • sdc has been reserved explicitly and is visible
  • sdb and sdd do not show up: indeed, they have not been reserved for this example
  • nvme0n1 and nvme1n1 are NVMe SSDs and are always available (not reservable)

You can compare the output with the reference data shown in Grenoble:Hardware#yeti.
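
To make the comparison easier, lsblk can also print the size, model and serial number of each disk (the -d option hides the partitions):

# lsblk -d -o NAME,SIZE,MODEL,SERIAL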

Warning.png Warning

The actual sdX names might change depending on disk initialization order during boot. Thus, sdc might be an entirely different disk if you reboot the machine. In the following, we demonstrate the use of PCI paths to make sure we unambiguously identify the right disks.

It is possible to display disks with their PCI path, which is guaranteed to always be the same (unless the hardware is physically modified):

# ls -l /dev/disk/by-path/
total 0
lrwxrwxrwx 1 root root  9 Oct  7 20:11 pci-0000:18:00.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct  7 20:11 pci-0000:18:00.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct  7 20:11 pci-0000:18:00.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct  7 20:12 pci-0000:18:00.0-scsi-0:0:0:0-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Oct  7 20:11 pci-0000:18:00.0-scsi-0:0:0:0-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 Oct  7 20:11 pci-0000:18:00.0-scsi-0:0:0:0-part5 -> ../../sda5
lrwxrwxrwx 1 root root  9 Oct  7 20:11 pci-0000:18:00.0-scsi-0:0:2:0 -> ../../sdc
lrwxrwxrwx 1 root root 13 Oct  7 20:11 pci-0000:59:00.0-nvme-1 -> ../../nvme0n1
lrwxrwxrwx 1 root root 13 Oct  7 20:11 pci-0000:6d:00.0-nvme-1 -> ../../nvme1n1

Here, we see that sdc has the PCI path pci-0000:18:00.0-scsi-0:0:2:0, which matches the second reservable disk listed on Grenoble:Hardware#yeti.
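
If you just need to know which sdX device a given PCI path currently points to, you can resolve the symlink, for example:

# readlink -f /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0
/dev/sdc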

Partitioning a disk

To start using the disk, you will likely need to partition it. Several tools exist to do this: fdisk, sfdisk, cfdisk, parted...

For example, to partition the second 2 TB disk of a yeti machine interactively:

Terminal.png yeti-1:
cfdisk /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0

Use the interactive prompt to create a single partition of type "Linux filesystem", possibly by deleting existing partitions first.
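
If you prefer a scripted, non-interactive approach, here is a minimal sketch with parted that creates a GPT label and a single partition spanning the whole disk (this wipes the existing partition table, so double-check the device path first):

# parted --script /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0 mklabel gpt mkpart primary ext4 0% 100%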

As an advanced usage, you could use LVM to create logical volumes that may span several disks, or mdadm to create software RAID volumes.

Note.png Note

We use the full PCI path to the disk instead of the more usual /dev/sdc. This is recommended because the sdX aliases may not always point to the same disk, depending on disk initialization order during boot.

Creating a filesystem

Continuing the previous example, let's create an ext4 filesystem on the first partition of the same disk:

Terminal.png yeti-1:
mkfs.ext4 -m 0 /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0-part1

Mount it and check that it appears:

Terminal.png yeti-1:
mkdir -p /mnt/mylocaldisk
Terminal.png yeti-1:
mount /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0-part1 /mnt/mylocaldisk
Terminal.png yeti-1:
df -h
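
To quickly check that the new filesystem is writable, you can for instance create a test file on it (the file name is arbitrary):

# dd if=/dev/zero of=/mnt/mylocaldisk/testfile bs=1M count=100
# ls -lh /mnt/mylocaldisk/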

As an advanced usage, you may use any filesystem: Btrfs, HDFS, Ceph, ZFS, Beegfs, etc. Refer to the documentation for each of these systems for guidance.

Troubleshooting

When partitioning or formatting local disks, you might encounter an error such as:

Error: Partition(s) on /dev/sdb are being used

This may be because the disks already contained partitions of a certain type (LVM, software RAID...) from a previous job, and your system automatically started using them. To solve this, you have several options:

  • use a tool such as wipefs or pvremove to remove previous information from the disk.
  • use a low-level tool such as dd to completely erase the beginning of the disk, and reboot. Use with care as it can destroy your data.
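
As an illustration, assuming the problematic disk is the reserved yeti disk used in the previous examples, the two options could look like this (both irreversibly remove the existing metadata, so make sure you target the right device):

# wipefs --all /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0
# dd if=/dev/zero of=/dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0 bs=1M count=10 && reboot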

For instance, here is an example script that cleans up disks automatically: https://github.com/pmorillon/terraform-provider-grid5000/blob/master/examples/ceph/modules/rook_ceph/files/disk-format.sh.tmpl

Security issues

The mechanism used to enable/disable disks is designed to prevent mistakes by other users. However, a malicious user could take control of the RAID card, enable any disk, and access or erase your data. Please notify the Grid'5000 tech-team in case of such an event, but first of all take care to secure your data:

  • Keep a copy (backup) in a safe place if relevant for your data;
  • If your data is sensitive, consider using cryptographic mechanisms to secure it.

Also, the data on reserved disks is not automatically erased at the end of your job. If you don't want the next user to access it, you have to erase it yourself.
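
A simple (but slow on large disks) way to do this is to overwrite the whole disk before the end of your job, for example with shred (the PCI path below is the yeti disk used in the earlier examples):

# shred -n 1 -v /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0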

Finally, no backup of data stored on the reserved disks is made.