Armored Node for Sensitive Data

Note

This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.

This page documents how to secure a Grid'5000 node, making it suitable to host and process sensitive data. The process is based on a tool (g5k-armor-node.py) that runs on top of the debian11-x64-big Grid'5000 environment.

Important limitations about this solution

  • The solution does not protect against user errors during the setup of the secure environment. Please ensure that you follow this documentation with extreme care. Failing to do so could result in an insecure environment.
  • The solution does not protect against user errors that could result in transferring sensitive data outside the secure environment (the Internet is reachable from the secure environment). Please ensure that you use this environment with care.
  • The solution does not protect the rest of Grid'5000 against your node. Before using this solution to work on software that might attack other Grid'5000 machines (for example malware), please consult with the Grid'5000 technical staff.

Informing the technical team

Before starting to use Grid'5000 to process sensitive data, inform the technical team that you are going to do so. Email support-staff@lists.grid5000.fr with the following information:

  • your name
  • your affiliation
  • the general description of your planned work and the kind of data that you are going to process (do not include sensitive information here)
  • the description of the resources that you are going to reserve
  • the expected duration of your work

Node reservation, deployment, and securisation

Identify your requirements

  • Select a cluster that suits your needs (for example using the Hardware page).
  • Estimate how long you will need the resources. If the duration exceeds what is allowed for the default queue in the Usage Policy, the production queue may match your needs. If it also exceeds what is allowed by the production queue (more than one week), you can follow the procedure explained on the Usage Policy page to request an exception.
  • Take into consideration that all data (including data you produced) stored locally on the machine will be destroyed at the end of the reservation.
  • Reserve a node and a VLAN, then deploy the node with the debian11-x64-big environment inside the VLAN (see detailed steps below).

Reserve and setup your node

The process can be done manually, as described in Option 2: Manually. It is highly recommended to do it manually the first time, to understand the step-by-step process for reserving a node and VLAN, deploying with kadeploy and securing the node with the g5k-armor-node.py script. If you are already familiar with all these tools and steps, please refer directly to Option 1: Automated with grd, which automates all the steps explained in Option 2 with a single command.

Option 1: Automated with grd

grd is a command-line tool that automates Grid'5000 workflows. It can handle the steps from Option 2 automatically.

For example, to reserve and configure a node in the production queue, from the cluster CLUSTER and for a duration of WALLTIME, grd can be used as follows (from a frontend, or locally after installing ruby-cute, which provides grd):

frontend:grd bs -s SITE -q production -l {CLUSTER}/nodes=1+{"type='kavlan'"}/vlan=1 -w WALLTIME --armor

Note that, for the moment, CLUSTER can refer to any cluster of the default queue, any cluster in Nancy's production queue, or, in Rennes's production queue, only the clusters abacus9, abacus12, abacus14, abacus16, roazhon3, roazhon7, roazhon8 and roazhon9.
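
For instance, a filled-in command could look like this (the site, cluster and walltime are only illustrative examples; the cluster is selected with a "cluster='...'" property filter):

frontend:grd bs -s nancy -q production -l {"cluster='grvingt'"}/nodes=1+{"type='kavlan'"}/vlan=1 -w 14:00:00 --armor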

Wait for the script to finish: it must end by displaying the Setup completed successfully! message.

  • If the Setup completed successfully! message appears, no further action is needed for the setup.

You can disconnect from the node, and try to connect to the Armored Node, as mentioned below.

You should receive an error message from SSH, indicating that the node's host key has changed. This is expected, as the script has replaced the node's SSH host key with a newly generated one. Follow the instructions provided by SSH to remove the old key.
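
For example, the stale entry can usually be removed from your local known_hosts with ssh-keygen (using the node's hostname in the VLAN):

your machine:ssh-keygen -R node-X-kavlan-Y.site.grid5000.fr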


  • If you receive the error message ERROR: A reboot is needed to complete the upgrade., please reboot the node using the reboot command, reconnect to the node (as described in Option 2: Manually), and run the g5k-armor-node.py script again after the reboot, repeating until you see the Setup completed successfully! message.

Option 2: Manually

Make a reservation

Reserve the node and the VLAN. Below is an example for a reservation in the production queue for one node of cluster CLUSTER for a duration of WALLTIME:

frontend:oarsub -q production -t deploy -t destructive -l {"type='kavlan'"}/vlan=1+{CLUSTER}/nodes=1,walltime=WALLTIME "sleep infinity" 

Note that, for the moment, CLUSTER can refer to any cluster of the default queue, any cluster in Nancy's production queue, or, in Rennes's production queue, only the clusters abacus9, abacus12, abacus14, abacus16, roazhon3, roazhon7, roazhon8 and roazhon9.

Note that additional disks available on the node (that may need an extra reservation) will be used as additional secured storage space, but data will always be destroyed at the end of the node reservation.
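
As an illustration, a filled-in reservation could look like this (the cluster and walltime are only examples, with the cluster selected through a "cluster='...'" property filter):

frontend:oarsub -q production -t deploy -t destructive -l {"type='kavlan'"}/vlan=1+{"cluster='grvingt'"}/nodes=1,walltime=14:00:00 "sleep infinity"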

Once the job has started, connect inside the job:

frontend:oarsub -C JOB_ID

Note that since it is a deploy job, the job shell opens on the frontend again.

Take note of the hostname of the reserved node, for instance with oarprint:

frontend:oarprint host

Take note of the assigned VLAN number:

frontend:kavlan -V

Deploy the debian11-x64-big environment

Deploy the node with the debian11-x64-big environment, inside the VLAN:

frontend:kadeploy3 -e debian11-x64-big --vlan `kavlan -V`

Now wait for the deployment to complete.

Securing the node with g5k-armor-node.py

Connect to the node from the outside of Grid'5000, using the node's hostname in the VLAN (hostname with the Kavlan suffix for the reserved VLAN, because the node was deployed inside the kavlan VLAN). After securing the node, this will be the only allowed way to connect to the node, as SSH will only be authorized from Grid'5000 access machines:

your machine:ssh -J YOUR_G5K_LOGIN@access.grid5000.fr root@node-X-kavlan-Y.site.grid5000.fr

Note: Here, node-X refers to the name of your node (e.g., paravance-6), and Y refers to the number of your kavlan (e.g., 5). The complete command will then look like this:

ssh -J YOUR_G5K_LOGIN@access.grid5000.fr root@paravance-6-kavlan-5.rennes.grid5000.fr


On the node, download g5k-armor-node.py, for example with:

node:wget https://gitlab.inria.fr/grid5000/g5k-armor/-/raw/master/g5k-armor-node.py

Run it:

node:chmod a+rx g5k-armor-node.py
node:./g5k-armor-node.py

If the Setup completed successfully! message appears, you can disconnect from the node, and try to connect to the Armored Node, as mentioned below.

As described above, you might get an error message from SSH, because the node's host key changed. This is expected: the script replaced the node's SSH host key with a newly generated one. Follow the instructions from SSH to remove the old key.

Using the secured node

Connect the secured node

You must connect to the node using your Grid'5000 login directly from your workstation:

your machine:ssh -J YOUR_G5K_LOGIN@access.grid5000.fr YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr

The node can access the Internet and you can use the sudo command on the node to install additional software if needed.

Please remember that:

  • Only your home directory on the secured node is encrypted (/home/<username>). You must not store sensitive data outside of it (or on other Grid'5000 machines).
  • You must only use secured protocols to transfer data to/from the node as described below.
  • If you reboot the node or if the node is shut down for some reason, you will no longer be able to access your data. However, if you made a copy of the encryption key when it was displayed at the end of the script's output, you can restore the encrypted storage from the node with:
echo '<paste key content here>' > /run/user/1000/key
sudo cryptsetup luksOpen --key-file /run/user/1000/key /dev/mapper/vg-data encrypted
sudo mount /dev/mapper/encrypted $HOME
exit

Then reconnect to the node.

If you prefer to avoid keeping a copy of the encryption key, it is a good idea to make intermediary backups of the processed data (outside of Grid'5000), in case the secured node becomes unreachable during the processing.

Transferring data to/from the node

You must transfer data directly between an external secure storage, and your Grid'5000 node. You must not use other Grid'5000 storage spaces (such as NFS spaces) in the process.

It is recommended to use rsync. Using rsync, you can specify access.grid5000.fr as an SSH jump host using the -e option. Alternatively, you can customize your SSH configuration as described in the Getting Started tutorial.
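
As an illustration, an entry like the following in your ~/.ssh/config (the armored-node alias is just an example name) lets you refer to the secured node with a short name in ssh and rsync commands:

Host armored-node
    HostName node-X-kavlan-Y.site.grid5000.fr
    User YOUR_G5K_LOGIN
    ProxyJump YOUR_G5K_LOGIN@access.grid5000.fr

With such an entry, the transfers below can also be written as, e.g., rsync <local path> armored-node:<remote path>.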

  • To transfer files to the node:
rsync -e "ssh -J YOUR_G5K_LOGIN@access.grid5000.fr" <local path> YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr:<remote path>
  • To fetch files from the node:
rsync -e "ssh -J YOUR_G5K_LOGIN@access.grid5000.fr" YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr:<remote path> <local path>

Data management

Several solutions are possible to manage the sensitive data you need to use on the node.

Solution A: Storing Data Outside Grid'5000

You could store the data in a secure storage space outside Grid'5000, and copy it to/from the node, as described above, when needed.

Main limitation of this solution: it is not suitable if the data volume is large (because of the transfer time).

Solution B: Storing Data In An Encrypted Archive Inside Grid'5000

Assuming you have previously provisioned an Armored Node as outlined in the guide, and have transferred your sensitive data within an AES-encrypted archive, as described in the data transfer section, please follow these steps:

  • On the Armored Node, install 7z by running the following command:
node:sudo apt install 7zip
  • Once in the encrypted home directory on the Armored Node, decompress your encrypted archive using your predefined password with the following command:
node:7zz x sensitive_data.7z
  • Before storing your derived sensitive data within Grid'5000 but outside the secured node, make sure to compress and encrypt it with your password, as follows:
node:7zz a -p -mhe=on -mx=9 -m0=lzma2 -mtc=on -mtm=on -mta=on sensitive_data_derived.7z sensitive_data/

With the .7z format, files are encrypted with AES-256 by default. Please take note of the crucial encryption options used in the command above, and distinguish them from the other, compression-related options:

  • Encryption-specific options (Highly important):

-p: This option prompts for a password when creating the archive; the same password is then required for extraction. It is what protects the encrypted data.

-mhe=on: This option enables encryption of the archive header, so that no one can see the file names in the archive before entering the password. It enhances data privacy.

  • Other options related to compression:

-mx=9: Compress to the highest level (9). This reduces the size of the encrypted data.

-m0=lzma2: Compress using the "LZMA2" method, a lossless data compression algorithm. This optimizes disk space usage.

-mtc=on -mtm=on -mta=on: Store the files' creation, modification and access timestamps in the archive, so that this metadata is preserved.

For more details on the 7zip file archiver, you can refer to the man page on Debian and this compression manual.
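
Before moving the archive out of the secured node, you may also want to check it; for instance, 7zz can list or test the archive (it will prompt for the password, since the header is encrypted):

node:7zz l sensitive_data_derived.7z
node:7zz t sensitive_data_derived.7z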


Main limitation of this solution: it is not very practical, because the data must frequently be decrypted and decompressed, then compressed and encrypted again.

Solution C: Using a Remote Secured Volume (CompuVault)

Initial Setup

  1. User Requests Storage Space Creation: The user requests the creation of a CompuVault storage space from the Grid5000 technical team (mailto:support-staff@lists.grid5000.fr), specifying the project name, the required volume and its expiration date.
  2. Technical Team Sets Up Storage: Following this guide, the technical team creates the storage space and sets up an iSCSI export protected by a login/password pair. The parameters (server address, export name, project name and iSCSI login/password) are communicated to the user within a PROJECT_NAME_cv_config.json file, in a confidential manner.
  3. User Configures Armored Node: Please refer to the guide above to provision an Armored Node.
  4. User Initializes Encrypted Storage: The user initializes the storage space on the node via iSCSI and encrypts it with LUKS. The user retains the passphrase used for LUKS encryption.

Please note that the encryption passphrase is completely different from the iSCSI password. It must not be identical to, or derived from, the iSCSI password. The encryption passphrase is known only by the user and should be chosen following strong security practices to ensure data protection.


Warning

Since you will be dealing with decrypted sensitive data, please follow these steps carefully and ensure you keep backups of the processed data in a secure way, in case the secured node accidentally becomes unreachable.

  • Please make sure to transfer the PROJECT_NAME_cv_config.json file received from the technical team to your home directory on the Armored Node:

your machine:scp -J YOUR_G5K_LOGIN@access.grid5000.fr <local path>/PROJECT_NAME_cv_config.json YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr:/home/YOUR_G5K_LOGIN

  • Connect to the Armored Node, as mentioned above.
  • On the Armored Node, download the compuVault.py script, for example with:
node:wget https://gitlab.inria.fr/grid5000/g5k-armor/-/raw/master/compuVault.py

Run it:

node:chmod a+rx compuVault.py
node:./compuVault.py init

Wait for the script to finish executing; it should display the Initialization of CompuVault completed successfully! message.

The initial setup should only be done the first time. Please refer to the Usages section below for instructions on how to mount the encrypted storage.

Usages

  1. User Configures Armored Node: Please refer to the guide above to provision an Armored Node.
  2. User Mounts and Decrypts Storage: The user mounts the storage space on the node via iSCSI and decrypts it with the passphrase entered for LUKS encryption.
  • Please make sure to transfer the PROJECT_NAME_cv_config.json file received from the technical team to your home directory on the Armored Node:

your machine:scp -J YOUR_G5K_LOGIN@access.grid5000.fr <local path>/PROJECT_NAME_cv_config.json YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr:/home/YOUR_G5K_LOGIN

  • Connect to the Armored Node, as mentioned above.
  • On the Armored Node, download the compuVault.py script, for example with:
node:wget https://gitlab.inria.fr/grid5000/g5k-armor/-/raw/master/compuVault.py

Run it:

node:chmod a+rx compuVault.py
node:./compuVault.py mount


Wait for the script to finish executing; it should display the Mounting of CompuVault completed successfully! message. Afterward, you will find the mounted encrypted storage in the /mnt/cv-PROJECT_NAME folder.
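
For example, you can quickly check that the encrypted volume is mounted (the path uses the project name from your configuration file):

node:df -h /mnt/cv-PROJECT_NAME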

You can transfer your sensitive data to the encrypted volume following the guide above and conduct your experiment; your encrypted data will be stored in the remote secured volume.


Main limitations of this solution

  • The secured data storage is only available from one node at a time. If you mount your secured storage on several nodes at the same time, you will corrupt your data in subtle ways. If you require multiple Armored Nodes simultaneously, please discuss your use case with the technical team.
  • The secured data storage space is not backed up. Please ensure that you have a copy of the data elsewhere in case of catastrophic failure of the storage server.

Troubleshooting

Warning

If you experience any issue during the securing procedure, do not continue your experiment any further. The node might not be correctly secured, and thus your data might not be well protected.

Rerun the securing procedure from the beginning

You can try to rerun the whole procedure from the beginning, except that you do not need to execute the oarsub command (if the job is still running).

Connect to the frontend (the one you previously used) and connect inside the job:

frontend:oarsub -C JOB_ID

Format the node and deploy the debian11-x64-big environment on it:

frontend:kadeploy3 -e debian11-x64-big --vlan `kavlan -V`

Finally, download and execute the python script as described in the "Securing the node with g5k-armor-node.py" section.

Have more output information on each step for debugging

If you still experience an issue during the procedure, you might want to display more output information for debugging and understanding the issue.

To do so, do the following:

  • for the "deploying debian11-x64-big environment" step, add --verbose-level 5
frontend:kadeploy3 -e debian11-x64-big --vlan `kavlan -V` --verbose-level 5
  • for the "securing script" step or the "CompuVault solution", set the environment variable GAN_DEBUG to 1 :
node:GAN_DEBUG=1 ./g5k-armor-node.py
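The same variable can be used with the CompuVault scripts as well; for example (the mount sub-command is the one from the Usages section above):
node:GAN_DEBUG=1 ./compuVault.py mount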

Contact the technical team

If the script is not working properly or you have any question about the procedure, do not hesitate to contact the technical team BEFORE running any experiment with sensitive data: support-staff@lists.grid5000.fr. Please include all relevant details that could help the technical team understand your problem (do NOT send any sensitive data by mail).

For instance, if the script g5k-armor-node.py is not working properly, please run it in debug mode (see the previous section) and copy/paste any error messages into the email you send to the technical team.

Extending node reservation beyond normal limits

A limitation of this solution is the frequent need for setting up the node and importing the required data.

A way to mitigate this problem is to extend the reservations beyond what is normally allowed by Grid5000 policies (7 days max). However:

  • This adds constraints on maintenance operations for the Grid5000 technical team
  • It is generally considered a bad practice to reserve resources (which prevents other users from using them) and then not use them

If really needed, this possibility should be discussed with the user's security correspondent and with the Grid5000 technical team. A prerequisite for this discussion is that the user clarifies the hardware that could match their needs, using for example the Hardware page.