Armored Node for Sensitive Data
**Note:** This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.
This page documents how to secure a Grid'5000 node, making it suitable to host and process sensitive data. The process is based on a tool (g5k-armor-node.py) that runs on top of the debian11-x64-big Grid'5000 environment.
Important limitations about this solution
- The solution does not protect against user errors during the setup of the secure environment. Please ensure that you follow this documentation with extreme care. Failing to do so could result in an insecure environment.
- The solution does not protect against user errors that could result in transferring sensitive data outside the secure environment (the Internet is reachable from the secure environment). Please ensure that you use this environment with care.
- The solution does not protect the rest of Grid'5000 against your node. Before using this solution to work on software that might attack other Grid'5000 machines (for example malware), please consult with the Grid'5000 technical staff.
Informing the technical team
Before starting to use Grid'5000 to process sensitive data, inform the technical team that you are going to do so. Email support-staff@lists.grid5000.fr with the following information:
- your name
- your affiliation
- the general description of your planned work and the kind of data that you are going to process (do not include sensitive information here)
- the description of the resources that you are going to reserve
- the expected duration of your work
Node reservation, deployment, and securing
Identify your requirements
- Select a cluster that suits your needs (for example using the Hardware page).
- Estimate for how long you will need the resources. If this exceeds what is allowed for the default queue in the Usage Policy, the production queue may match your needs. If the duration also exceeds what is allowed by the production queue (more than one week), you can follow the procedure explained on the Usage Policy page to request an exception.
- Take into consideration that all data (including data you produced) stored locally on the machine will be destroyed at the end of the reservation.
- Reserve a node and a VLAN, then deploy the node with the debian11-x64-big environment inside the VLAN (see detailed steps below).
Reserve and setup your node (option 1: manually)
- Make a reservation
Reserve the node and the VLAN. Below is an example of a reservation in the production queue, for one node of cluster CLUSTER and a duration of WALLTIME:
frontend: oarsub -q production -t deploy -t destructive -l {"type='kavlan'"}/vlan=1+{CLUSTER}/nodes=1,walltime=WALLTIME "sleep infinity"
Note that CLUSTER can refer to any cluster of the default queue, any cluster in Nancy's production queue, and, for the moment, only the clusters abacus9, abacus12, abacus14, abacus16, roazhon3, roazhon7, roazhon8 and roazhon9 in Rennes's production queue.
Note that additional disks available on the node (which may require an extra reservation) will be used as additional secured storage space, but their data will also be destroyed at the end of the node reservation.
Once the job has started, connect inside the job:
frontend: oarsub -C JOB_ID
Note that since it is a deploy job, the job shell opens on the frontend again.
Take note of the hostname of the reserved node, for instance with oarprint:
frontend: oarprint host
Take note of the assigned VLAN number:
frontend: kavlan -V
- Deploy the debian11-x64-big environment
Deploy the node with the debian11-x64-big environment, inside the VLAN:
frontend: kadeploy3 -e debian11-x64-big --vlan `kavlan -V`
Now wait for the deployment to complete.
- Securing the node with g5k-armor-node.py
Connect to the node from outside Grid'5000, using the node's hostname in the VLAN (the hostname with the Kavlan suffix for the reserved VLAN, since the node was deployed inside the kavlan VLAN). After securing the node, this will be the only allowed way to connect to it, as SSH will only be authorized from Grid'5000 access machines:
your machine: ssh -J YOUR_G5K_LOGIN@access.grid5000.fr root@node-X-kavlan-Y.site.grid5000.fr
On the node, download g5k-armor-node.py, for example with:
node: wget https://gitlab.inria.fr/grid5000/g5k-armor/-/raw/master/g5k-armor-node.py
Make it executable and run it:
node: chmod a+rx g5k-armor-node.py
node: ./g5k-armor-node.py
Wait for the script to finish: it must display the "Setup completed successfully!" message. If you instead receive the error message "ERROR: A reboot is needed to complete the upgrade.", reboot the node using the reboot command, then run the g5k-armor-node.py script again after the reboot, repeating until you see the "Setup completed successfully!" message.
Disconnect from the node, and try to connect again using SSH. You should get an error message from SSH, because the node's host key changed. This is expected: the script replaced the node's SSH host key with a newly generated one. Follow the instructions from SSH to remove the old key.
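For reference, the stale entry can be removed with ssh-keygen. A minimal sketch, assuming a placeholder hostname (substitute your node's actual name) and the default known_hosts location:

```shell
# Placeholder hostname for illustration; replace with your node's real name.
NODE="node-1-kavlan-4.nancy.grid5000.fr"
# Default location of the known hosts file; ssh-keygen -R rewrites it in
# place, dropping any entry recorded for NODE (a .old backup is kept).
KNOWN_HOSTS="${KNOWN_HOSTS:-$HOME/.ssh/known_hosts}"
mkdir -p "$(dirname "$KNOWN_HOSTS")" && touch "$KNOWN_HOSTS"
ssh-keygen -R "$NODE" -f "$KNOWN_HOSTS"
```

Once the old key is removed, the next connection will prompt you to accept the node's newly generated host key.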
Reserve and setup your node (option 2: automated with grd)
grd is a command-line tool to automate Grid'5000 workflows. It can perform the steps above automatically.
For example, to reserve and configure a node in the production queue, from cluster CLUSTER and for a duration of WALLTIME, grd can be used as follows (from a frontend, or locally after installing ruby-cute, which contains grd):
frontend: grd bs -s SITE -q production -l {CLUSTER}/nodes=1+{"type='kavlan'"}/vlan=1 -w WALLTIME --armor
Note that CLUSTER can refer to any cluster of the default queue, any cluster in Nancy's production queue, and, for the moment, only the clusters abacus9, abacus12, abacus14, abacus16, roazhon3, roazhon7, roazhon8 and roazhon9 in Rennes's production queue.
As described above, you might get an error message from SSH, because the node's host key changed. This is expected: the script replaced the node's SSH host key with a newly generated one. Follow the instructions from SSH to remove the old key.
Using the secured node
Connect to the secured node
You must connect to the node using your Grid'5000 login, directly from your workstation:
your machine: ssh -J YOUR_G5K_LOGIN@access.grid5000.fr YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr
The node can access the Internet and you can use the sudo command on the node to install additional software if needed.
Please remember that:
- Only your home directory on the secured node is encrypted (/home/<username>). You must not store sensitive data outside of it (or on other Grid'5000 machines).
- You must only use secure protocols to transfer data to/from the node, as described below.
- If you reboot the node or if the node is shut down for some reason, you will no longer be able to access your data. However, if you made a copy of the encryption key when it was displayed at the end of the script's output, you can restore the encrypted storage from the node with:
echo '<paste key content here>' > /run/user/1000/key
sudo cryptsetup luksOpen --key-file /run/user/1000/key /dev/mapper/vg-data encrypted
sudo mount /dev/mapper/encrypted $HOME
exit
Then reconnect to the node.
If you prefer not to keep a copy of the encryption key, it is a good idea to make intermediate backups of the processed data (outside of Grid'5000), in case the secured node becomes unreachable during processing.
Transferring data to/from the node
You must transfer data directly between an external secure storage and your Grid'5000 node. You must not use other Grid'5000 storage spaces (such as NFS spaces) in the process.
It is recommended to use rsync. With rsync, you can specify access.grid5000.fr as an SSH jump host using the -e option. Alternatively, you can customize your SSH configuration as described in the Getting Started tutorial.
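As an illustration, such an SSH configuration could look like the following sketch (the host patterns and YOUR_G5K_LOGIN are placeholders to adapt to your own setup):

```
# Hypothetical ~/.ssh/config entries; adjust patterns and login as needed.
Host access.grid5000.fr
  User YOUR_G5K_LOGIN

# Reach any other Grid'5000 host through the access machine.
Host *.grid5000.fr !access.grid5000.fr
  User YOUR_G5K_LOGIN
  ProxyJump access.grid5000.fr
```

With such a configuration in place, rsync and ssh can target node-X-kavlan-Y.site.grid5000.fr directly, without repeating the jump host on each command line.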
- To transfer files to the node:
rsync -e "ssh -J YOUR_G5K_LOGIN@access.grid5000.fr" <local path> YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr:<remote path>
- To fetch files from the node:
rsync -e "ssh -J YOUR_G5K_LOGIN@access.grid5000.fr" YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr:<remote path> <local path>
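Although rsync already verifies data in transit, an explicit end-to-end check can catch mistakes such as transferring the wrong file. A minimal sketch using sha256sum (the file name below is a stand-in for your real archive):

```shell
# Demo file standing in for your real data archive.
printf 'example payload\n' > sensitive_data.7z
# On the sending side, record a checksum next to the file:
sha256sum sensitive_data.7z > sensitive_data.7z.sha256
# After transferring both files, verify on the receiving side:
sha256sum -c sensitive_data.7z.sha256
```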
Data management
Several solutions are possible to manage the sensitive data you need to use on the node.
Solution A: Storing Data Outside Grid'5000
You can store the data in a secure storage space outside Grid'5000, and copy it to/from the node, as described above, when needed.
Main limitation of this solution: it is not suitable for large data volumes, because of the transfer time.
Solution B: Storing Data In An Encrypted Archive Inside Grid'5000
Assuming you have previously provisioned an Armored Node as outlined in this guide, and have transferred your sensitive data in an AES-encrypted archive as described in the data transfer section, follow these steps:
- On the Armored Node, install 7zip by running the following command:
node: sudo apt install 7zip
- Once in the encrypted home directory on the Armored Node, decompress your encrypted archive, using your predefined password, with the following command:
node: 7zz x sensitive_data.7z
- Before storing derived sensitive data within Grid'5000 but outside the secured node, make sure to encrypt and compress it with your password, as follows:
node: 7zz a -p -mhe=on -mx=9 -m0=lzma2 -mtc=on -mtm=on -mta=on sensitive_data_derived.7z sensitive_data/
With the .7z format, files are encrypted with AES-256 by default. Please take note of the crucial encryption options used in the command above, and distinguish them from the other options:
- Encryption-specific options (highly important):
-p: prompts for a password when creating the archive; the same password is then required for extraction. It secures the encrypted data with a password.
-mhe=on: enables encryption of the archive header, so that nobody can see the file names in the archive before entering the password. This enhances data privacy.
- Other options related to compression:
-mx=9: compress at the highest level (9). This reduces the size of the encrypted data.
-m0=lzma2: compress using the LZMA2 method, a lossless data compression algorithm. This optimizes disk space usage.
-mtc=on -mtm=on -mta=on: store the files' creation, modification and last-access timestamps in the archive, so that they are preserved on extraction.
For more details on the 7zip file archiver, you can refer to its man page on Debian and to this compression manual.
Main limitation of this solution: it is not very practical, because of the frequent need to decrypt and decompress, then compress and encrypt, the data.
Solution C: Using a Remote Secured Volume (CompuVault)
Initial Setup
- User Requests Storage Space Creation: The user requests the creation of a CompuVault storage space from the Grid'5000 technical team (mailto:support-staff@lists.grid5000.fr), specifying the required volume and its expiration date.
- Technical Team Sets Up Storage: Following this guide, the technical team creates the storage space and sets up an iSCSI export protected by a login/password pair. The parameters (server address, export name, project name and iSCSI login/password) are communicated to the user, in a confidential manner, within a PROJECT_NAME_cv_config.json file.
- User Configures Armored Node: Please refer to the guide above to provision an Armored Node.
- User Mounts and Encrypts Storage: The user mounts the storage space on the node via iSCSI and encrypts it with LUKS. The user retains the passphrase used for LUKS encryption.
Please note that the encryption passphrase is completely different from the iSCSI password. It must not be identical to, or derived from, the iSCSI password. The encryption passphrase is known only by the user and should be chosen following strong security practices to ensure data protection.
- Please make sure to transfer the PROJECT_NAME_cv_config.json file received from the technical team into your home directory on the Armored Node:
your machine: scp -J YOUR_G5K_LOGIN@access.grid5000.fr <local path>/PROJECT_NAME_cv_config.json YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr:/home/YOUR_G5K_LOGIN
- Connect to the Armored Node, as mentioned above.
- On the Armored Node, download compuVault.py, for example with:
node: wget https://gitlab.inria.fr/grid5000/g5k-armor/-/raw/master/compuVault.py
Make it executable and run it:
node: chmod a+rx compuVault.py
node: ./compuVault.py init
Wait for the script to finish executing; it should display the "Initialization of CompuVault completed successfully!" message.
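Regarding the LUKS passphrase chosen during the initial setup: one possible way to generate a strong passphrase that is independent from the iSCSI password (an illustration, not a requirement of this guide; openssl is assumed to be available) is to draw it from the system's random generator:

```shell
# Draw 32 random bytes from the system CSPRNG and base64-encode them,
# yielding a 44-character passphrase.
PASSPHRASE="$(openssl rand -base64 32)"
# Store it in a safe place (e.g. a password manager): it is needed to
# unlock the LUKS volume on every subsequent mount.
echo "Generated a passphrase of length ${#PASSPHRASE}"
```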
Subsequent Usages
- User Configures Armored Node: Please refer to the guide above to provision an Armored Node.
- User Mounts and Decrypts Storage: The user mounts the storage space on the node via iSCSI and unlocks it with the passphrase chosen for the LUKS encryption.
- Please make sure to transfer the PROJECT_NAME_cv_config.json file received from the technical team into your home directory on the Armored Node:
your machine: scp -J YOUR_G5K_LOGIN@access.grid5000.fr <local path>/PROJECT_NAME_cv_config.json YOUR_G5K_LOGIN@node-X-kavlan-Y.site.grid5000.fr:/home/YOUR_G5K_LOGIN
- Connect to the Armored Node, as mentioned above.
- On the Armored Node, download compuVault.py, for example with:
node: wget https://gitlab.inria.fr/grid5000/g5k-armor/-/raw/master/compuVault.py
Make it executable and run it:
node: chmod a+rx compuVault.py
node: ./compuVault.py mount
Wait for the script to finish executing; it should display the "Mounting of CompuVault completed successfully!" message. Afterward, you will find the encrypted storage mounted at /mnt/cv-PROJECT_NAME.
You can transfer your sensitive data to the encrypted volume following the guide above and conduct your experiment; your encrypted data will be stored in the remote secured volume.
Main limitations of this solution
- The secured data storage is only available from one node at a time. If you mount your secured storage on several nodes at the same time, you will corrupt your data in subtle ways. If you require multiple Armored Nodes simultaneously, please discuss your use case with the technical team.
- The secured data storage space is not backed up. Please ensure that you have a copy of the data elsewhere in case of catastrophic failure of the storage server.
Troubleshooting
**Warning:** If you experience any issue during the securing procedure, do not continue your experiment. The node might not be correctly secured, and thus your data not well protected.
Rerun the securing procedure from the beginning
You can try to rerun the whole procedure from the beginning, except that you do not need to execute the oarsub command again (if the job is still running).
Connect to the frontend (the one you previously used) and connect inside the job:
frontend: oarsub -C JOB_ID
Format the node and deploy the debian11-x64-big environment on it:
frontend: kadeploy3 -e debian11-x64-big --vlan `kavlan -V`
Finally, download and execute the python script as described in the "Securing the node with g5k-armor-node.py" section.
Get more output at each step for debugging
If you still experience an issue during the procedure, you may want to display more output to debug and understand the issue.
To do so, do the following:
- for the "deploying debian11-x64-big environment" step, add --verbose-level 5:
frontend: kadeploy3 -e debian11-x64-big --vlan `kavlan -V` --verbose-level 5
- for the "securing script" step or the "CompuVault solution", set the environment variable GAN_DEBUG to 1:
node: GAN_DEBUG=1 ./g5k-armor-node.py
Contact the technical team
If the script is not working properly or you have any question on the procedure, do not hesitate to contact the technical team BEFORE running any experiment with sensitive data: support-staff@lists.grid5000.fr. Please include all relevant details that could help the technical team to understand your problem (do NOT send any sensitive data by mail).
For instance, if the g5k-armor-node.py script is not working properly, please run it in debug mode (see the previous section) and copy/paste any error messages into the email you send to the technical team.
Extending node reservation beyond normal limits
A limitation of this solution is the frequent need to set up the node and import the required data.
A way to mitigate this problem is to extend reservations beyond what is normally allowed by Grid'5000 policies (7 days maximum). However:
- This adds constraints on maintenance operations for the Grid'5000 technical team.
- It is generally considered bad practice to reserve resources (which prevents other users from using them) and then not use them.
If really needed, this possibility should be discussed with the user's security correspondent and with the Grid'5000 technical team. A prerequisite for this discussion is that the user clarify the hardware that could match their needs, using for example the Hardware page.