Environment creation
<!-- TODO
* update https://www.grid5000.fr/w/Getting_Started#Deploying_nodes_with_Kadeploy to use Template:Reference environments
* use Template:Reference environments in https://www.grid5000.fr/w/Advanced_Kadeploy#Search_an_environment
* rewrite https://www.grid5000.fr/w/Advanced_Kadeploy#Create_a_new_environment_from_a_customized_environment with just a link to here
-->
{{Portal|User}}
{{Portal|Tutorial}}
{{TutorialHeader}}
This page presents in detail how to create a Grid'5000 environment. An environment is an operating system '''image''' that can be deployed on hardware nodes (bare metal) using <code class="command">kadeploy3</code>.

Grid'5000 provides bare-metal as a service for experimenting on distributed computers, thanks to the kadeploy service. While kadeploy handles the efficient deployment of a user's customized system (also named environment in the Grid'5000 terminology) on many nodes, companion tools allow building such custom system environments. Hence, this page describes the Grid'5000 environment creation processes, with the several methods for doing it, and helpful tools.


__TOC__
= Introduction: the several ways for preparing a custom system environment =
There are different ways to prepare a system environment for experiments:
* '''<code class="replace">(1)</code>''' A first way consists in deploying a provided environment, for instance <code class="env">debian11-big</code>, and adding software and custom configurations to it after the initial deployment. This has to be done every time a new experiment is run (new ''deploy'' job). While it can be relevant, it has an obvious bias: the post-deployment setup is not factorized and must be redone every time and on all nodes.
* '''<code class="replace">(2)</code>''' A second way consists in building a ''master'' system environment with all the wanted customizations, then deploying that pre-built environment on the experiment nodes. This way, the environment preparation is only done once for all times and all nodes: it is factorized.


Once again, building such a customized master system environment can be achieved in different ways:
* '''<code class="replace">(2-a)</code>''' A first way consists in deploying an already provided environment (such as one of the Grid'5000 supported reference environments) on one node, doing some customizations on that node, then finally saving the operating system of the node as a master environment image. Then, deploy it on all the nodes of an experiment. This usually involves the <code class="command">tgz-g5k</code> command to create the master environment image (tarball).
* '''<code class="replace">(2-b)</code>''' A second way consists in building the master environment image from a recipe, without first deploying and modifying a node, typically using the <code class="command">kameleon</code> tool presented later in this page.


{{Note|text=Even though it is somewhat out of the scope of this page, we can also mention the use of the '''<code class="command">sudo-g5k</code>''' command, which allows a user to gain root privileges right away whenever needed in the production environment (available on machines by default), hence without requiring a ''deploy'' job and an actual environment deployment with <code class="command">kadeploy</code> beforehand. In the context of the creation of a custom system environment, using '''<code class="command">sudo-g5k</code>''':
* can simplify '''<code class="replace">(1)</code>''', because it avoids requiring an initial deployment. The standard environment is just used.
* can also simplify '''<code class="replace">(2-a)</code>''': after using '''<code class="command">sudo-g5k</code>''' to modify the standard environment as root, <code class="command">tgz-g5k</code> can be used to export an environment image from the modified standard environment.


In both cases, one must understand that this however has some drawbacks: it limits you to using the <code class="env">debian11</code> standard environment as the base system on the nodes of the experiment, which may include some unnecessary complexity or limitations.}}




In the remainder of this page, '''<code class="replace">(1)</code>''' will not be detailed: it is left to the user to choose a tool to deploy software and configurations on ''running'' systems, such as <code class="command">clush</code>, <code class="command">taktuk</code>, <code class="command">ansible</code>, etc.


The next two sections give detailed technical howtos for customizing existing Grid'5000 environments, first following way '''<code class="replace">(2-a)</code>''', then way '''<code class="replace">(2-b)</code>'''.
 
=== About Grid'5000 supported environments ===
While building a system environment from scratch (only taking a generic OS installation media as a base) may be doable, it is technically extremely difficult (see the last section of this page). Most Grid'5000 users should rather create customized environments on top of existing works, already done for Grid'5000.


The Grid'5000 technical team provides several '''reference environments''', which a user's customized environment can be built on top of. They are maintained in a [https://github.com/grid5000/environments-recipes/tree/master/steps/data/setup/puppet/modules/env/manifests/min git repository that includes both the kameleon recipes and the puppet recipes] (Kameleon invokes puppet for most of the environment's configuration). The list of packages installed in each environment is managed in the [https://gitlab.inria.fr/grid5000/g5k-meta-packages g5k-meta-packages repository].


More information on Grid'5000 reference environments can be found on the [[Getting_Started#On_Grid.275000_reference_environments|Getting started page]].


Of course, a user can also build a new customized environment on top of another user's customized environment.


=== About environment postinstalls ===
This page concentrates on the generation of the image and description parts of environments. Another important part of an environment is the postinstall. Most Grid'5000 environments use the same postinstall, named <code class=command>g5k-postinstall</code>. Users may however write their own postinstall, either to replace it or to add it as an additional postinstall.
 
Postinstalls are documented in [[Advanced_Kadeploy#Customizing_the_postinstalls]].


= Creating an environment image using '''tgz-g5k''' =
In this section, following the '''<code class="replace">(2-a)</code>''' way described above, we explain how to extend an existing Grid'5000 environment by first deploying it on a machine with <code class="command">kadeploy3</code>, then bringing customization to that machine, and finally archiving the operating system of the machine with '''<code class="command">tgz-g5k</code>''' to create a new environment image.


== Deploy the existing environment on a machine ==
First, we have to create the ''deploy'' job, to reserve a machine on which we will deploy the existing environment of our choice, which our customized environment will be based on.
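For instance, a one-node interactive ''deploy'' job can be obtained with a command of this kind (the resources and walltime are only an illustration):
{{Term|location=frontend|cmd=<code class=command>oarsub</code> -I -t deploy -l nodes=1,walltime=2}}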


The interactive job opens a new shell on the frontend (careful: the job ends when exiting that shell).  


The hostname of the reserved machine is stored in the <code class="env">$OAR_FILE_NODES</code> file, which is used by default by Kadeploy. So we can deploy the reference environment of our choice (or another user's environment that we would like to extend) with <code class="command">kadeploy3</code>:
{{Term|location=frontend|cmd=<code class="command">kadeploy3</code> <code class="replace">debian11-base</code>}}
(if the chosen environment is not registered in <code class="command">kaenv3</code>, see the <code class="command">-a</code> option of <code class="command">kadeploy3</code> to point to an environment description file).


== Customize the environment ==


Once the deployment has run successfully, we can connect to the machine using <code class="command">ssh</code> as root without password, and do any customization using shell commands.
Note: When you are done with the customization, mind clearing temporary files or caches to save disk space.
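For example, a minimal customization session could look like this (the installed package is only an illustration):
 # connect from the frontend to the deployed node, as root
 ssh root@$(head -1 $OAR_FILE_NODES)
 # on the node: install additional software
 apt-get update && apt-get install -y htop
 # clean up before archiving, to keep the image small
 apt-get clean && rm -rf /var/lib/apt/lists/*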


== Archive the environment image ==


We can now archive the customized environment, using <code class=command>tgz-g5k</code> to create a Grid'5000 environment image from the filesystem of the machine. The environment image is a tarball of the filesystem of the OS with some adaptations.


{{Term|location=frontend|cmd=<code class=command>tgz-g5k -m </code><code class=replace>hostname</code><code class=command> -f </code><code class=replace>~/environment_image.tar.zst</code>}}


This will create a file named <code class=replace>environment_image.tar.zst</code> in your home directory on the <code class=host>frontend</code>.


{{Note|text=About <code class=command>tgz-g5k</code>:
* If you want to create an image of a machine that runs the Grid'5000 default environment (i.e. not in a ''deploy'' job) and that you modified after gaining root privileges with <code class=command>sudo-g5k</code>, the <code class=command>-o</code> option of <code class=command>tgz-g5k</code> must be used so that the connection to the machine is done using <code class=command>oarsh</code>/<code class=command>oarcp</code> instead of <code class=command>ssh</code>/<code class=command>scp</code>.
* If you want <code class=command>tgz-g5k</code> to access the machine with your user id, use the <code class=command>-u </code> option (default is root).
* More information on <code class=command>tgz-g5k</code> is available via <code class=command>tgz-g5k -h</code> or <code class=command>man tgz-g5k</code>.}}
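For the <code class=command>sudo-g5k</code> case mentioned above, the invocation from the frontend would for instance look like:
{{Term|location=frontend|cmd=<code class=command>tgz-g5k</code> -o -m <code class=replace>hostname</code> -f <code class=replace>~/environment_image.tar.zst</code>}}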


== Create the environment description file ==


The new environment image cannot be deployed directly: the image is only one part of an environment. An environment is described by a YAML document. To use the new image, it must be referenced by an environment description, so that deploying that environment uses the new image. Note that the environment also includes other information, such as the postinstall script and the kernel command line, which can be changed independently from the environment image.
The easiest way to create a description for your new environment is to modify the description of the environment it is based on.  


Since we used the <code>debian11-base</code> reference environment, we can retrieve its description using the <code class="command">kaenv3</code> command and save it to a file. Then we'll use it as a base for the description of our customized environment.
{{Term|location=frontend|cmd=<code class=command>kaenv3</code> -p debian11-base -u deploy > <code class=replace>my-custom-environment.yaml</code> }}


{{Note|text=About the ''debian std'' environments: The ''debian std'' (e.g. <code class=replace>debian11-std</code>) environments are the environments used on nodes by default, providing services such as oar-node as well as custom settings that are necessary for the default system but are useless for user-deployed nodes. Users should rather deploy a ''debian big'' environment. However, if it happens that you customized the ''debian std'' environment (which may be the case if you made your customizations without deploying, just using <code class=command>sudo-g5k</code>), it is advised to take as a model of environment description that of the ''debian big'' environment rather than that of the ''debian std'' one:
{{Term|location=frontend|cmd=<code class=command>kaenv3</code> -p debian11-big -u deploy > <code class=replace>my-custom-environment.yaml</code>}}
This is especially important with regard to the <code class=command>g5k-postinstall</code> command, which must not include <code class=command>--restrict-user std</code> in your environment's description.
}}

We now edit the file to change the environment <code class='replace'>name</code>, <code class='replace'>version</code>, <code class='replace'>description</code>, <code class='replace'>author</code>, and so on. The <code class='replace'>image file</code> entry must of course be changed to point to our new environment image tarball file. Since it is stored locally in our home directory, the path can be a simple absolute path (remove the <code class="file">server://</code> prefix). If the image is placed in your <code class=file>~/public</code> directory, an HTTP URL can alternatively be used (e.g. <code class=file>http://public.SITE.grid5000.fr/~jdoe/environment_image.tar.zst</code>, replace SITE by the actual site). Finally, the <code class="replace">visibility</code> line should be removed or its value changed to <code>shared</code> or <code>private</code>.
<syntaxhighlight lang="yaml" line='line' highlight='2,3,5-7,11'>
---
name: my-debian
version: 1
arch: x86_64
description: my customized environment based on debian 11 (bullseye) - base
author: john@doe.org
os: linux
image:
   file: /home/jdoe/environment_image.tar.zst
   kind: tar
   compression: zstd
postinstalls:
- archive: server:///grid5000/postinstalls/g5k-postinstall.tgz
multipart: false
</syntaxhighlight>
{{Warning|text=A local path for the tarball (no leading <code>server://</code>) will not work if you are [[API_tutorial#Deploy|deploying your environment through the API]]. If you want to use the Kadeploy API, you may want to put your tarball in the <code>public</code> directory of your home and specify the path with HTTP (e.g. <code>http://public.</code><code class=replace>site</code><code>.grid5000.fr/~</code><code class=replace>username</code>/<code class=replace>environment_image.tar.zst</code>)}}


Once this is done, our customized environment is ready to be deployed (in a ''deploy'' job) using:
{{Term|location=frontend|cmd=<code class=command>kadeploy3</code> -a <code class=replace>my-custom-environment.yaml</code> }}
(This kind of deployment is called ''anonymous deployment'' because the description is not yet in the Kadeploy3 environment registry. It is particularly useful when iterating on the environment, thus having to recreate the environment image several times. Otherwise, since registered environments are checksummed, changing the image file requires updating the registration every time with <code class="command">kaenv3</code>.)
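When the image is considered stable, the environment description can be registered in the Kadeploy environment registry; the same <code class=command>kaenv3 -a</code> command as used later in this page does the job:
{{Term|location=frontend|cmd=<code class=command>kaenv3 -a</code> <code class=replace>my-custom-environment.yaml</code>}}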




Assuming you set the name field in the environment description to "my-debian", it will then be deployable using the following command:
{{Term|location=frontend|cmd=<code class=command>kadeploy3</code> <code class=replace>my-debian</code>}}
If the <code class="replace">visibility</code> is set to <code>shared</code>, your environment will show up in the list of available registered environments for any user, using <code class=command>kaenv3 -l -u </code><code class="replace">your_username</code>.
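Other users can then deploy it by specifying the environment's owner; this is typically done with the <code class=command>-u</code> option of <code class=command>kadeploy3</code> (check <code class=command>kadeploy3 --help</code> for the exact syntax of the installed Kadeploy version), for instance:
{{Term|location=frontend|cmd=<code class=command>kadeploy3</code> <code class=replace>my-debian</code> -u <code class=replace>your_username</code>}}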


Kameleon provides features such as ''context isolation'' and ''interactive breakpoints''. ''Context isolation'' means that kameleon can run a build process without altering the operating system from where the tool is called itself (kameleon typically uses qemu VMs for the build). Kameleon does not need to run as root.


See the [http://kameleon.imag.fr/ Kameleon] website for more information on the tool.


== Preparing the workspace to use kameleon ==
You can then list the available recipe templates with:
{{Term|location=node|cmd=<code class=command>kameleon template</code> list}}
== Create the recipe of the new environment ==
The <code class=command>kameleon template list</code> command shows all templates available in the Grid'5000 environment recipes repository.
* We see here the templates for Debian stable (9, 10, and 11) with their different variants.
* We also see the templates for other distributions with only the min variant.  
* Finally, we see the template for a recipe that builds from an existing Grid'5000 environment → '''In this section, we use that one'''.


As explained above, we extend the <code class=file>grid5000/from_grid5000_environment/base</code> recipe. To do so, we run the following command:


{{Term|location=node ~/my_recipes|cmd=<code class=command>kameleon new</code> <code class=replace>my_custom_environment</code> <code class=file>grid5000/from_grid5000_environment/base</code>}}


  ## Environment to build from
  grid5000_environment_import_name: "<code class=replace>debiantesting-min</code>"
  #grid5000_environment_import_user: "deploy"
  #grid5000_environment_import_version: ""


{{Note|text=About the ''debian std'' environment: please note that customizing the ''debian std'' (e.g. <code class=replace>debian11-std</code>) is mostly not relevant since it includes services and settings that are only necessary for the default system on nodes (when not deployed). It is preferable to use a ''debian big'' environment, which provides all the useful functionalities of ''debian std'' (see above the description of the reference environments).}}
Once done, the important part is to bring our customization steps for the setup of our environment in the '''setup''' section (''bootstrap'' and ''export'' should not require any changes).


See the [http://kameleon.imag.fr/ Kameleon] website for more information, notably a description of the recipe syntax (language) used in the YAML files.
 
{{Warning|text=Grid'5000 uses its own recipes that build on recipes provided by the Kameleon developers (by extending them). Please beware that the [http://kameleon.imag.fr/ Kameleon] website does not have an up-to-date description of how Kameleon is used in Grid'5000.}}
 
{{Note|text=About the execution contexts of the Kameleon commands:
The Kameleon commands execute in one of the '''local''', '''out''' or '''in''' contexts (e.g. ''exec_local'', ''exec_in'', ...). 
* '''local''' is the operating system from where we call the <code class=command>kameleon</code> executable, e.g. the workstation system.
* '''out''' is usually an intermediary operating system (VM) from where the target operating system being built is prepared, providing tools that may not be available in the '''local''' context (e.g. debootstrap). In the Grid'5000 recipes, the '''out''' context is usually '''not used''' (or technically, it is identical to the '''in''' context).
* '''in''' is the operating system that is being built. For Grid'5000 recipes, it is run by Kameleon in a Qemu VM.
}}
 
Let's show some examples.
=== First example: install the ''ffmpeg'' package ===
Let's assume we want to install the ''ffmpeg'' package in our environment.


We add a new step to the recipe, which is just a sequence of actions to execute. This basically gives a ''setup'' section in our recipe as follows:
       - exec_in : apt-get update && apt-get install -y ffmpeg


* <code class=command>exec_in</code> means that the command will be executed with bash '''in''' the VM of the build process. See the [http://kameleon.imag.fr/ kameleon documentation] for other commands.
* <code class=command>install_more_packages</code> is a macrostep, it can group one or several microsteps
* <code class=command>install_ffmpeg</code> is a microstep
(no <code class=replace>:</code> after <code class=replace>install_more_packages</code>, since the macrostep is defined in a separate file).
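For illustration, here is what the two forms can look like (the file name <code class=file>steps/setup/install_more_packages.yaml</code> is hypothetical; the same pattern is used in the examples below):
 # macrostep and microstep written inline in the recipe (my_custom_environment.yaml):
 setup:
   - install_more_packages:
     - install_ffmpeg:
       - exec_in: apt-get update && apt-get install -y ffmpeg
 
 # or, with the macrostep moved to its own step file steps/setup/install_more_packages.yaml,
 # containing only the microsteps:
 - install_ffmpeg:
   - exec_in: apt-get update && apt-get install -y ffmpeg
 # and called from the recipe's setup section without a colon:
 setup:
   - install_more_packages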


=== Second example: Install the NAS Benchmarks ===
 
The NAS benchmarks are commonly used to benchmark HPC applications using MPI or OpenMP.
In this example, we will download the NAS package, configure it, and build several of the MPI benchmarks (IS, LU and FT).
 
To do so, we will create a step file that will be called from the recipe, in <code class=file>~/my_recipes/steps/setup/NAS_benchmark.yaml</code>.
You can notice that a Kameleon variable is used to define ''NAS_home''.
 
- NAS_home: /tmp
- install_NAS_bench:
  # install dependencies
  - exec_in: apt-get -y install openmpi-bin libopenmpi-dev make gfortran gcc
  - download_file_in:
    - https://www.nas.nasa.gov/assets/npb/NPB3.3.1.tar.gz
    - $$NAS_home/NPB3.3.1.tar.gz
  - exec_in: cd $$NAS_home && tar xf NPB3.3.1.tar.gz
- configure_make_def:
  - exec_in: |
      cd $$NAS_home/NPB3.3.1/NPB3.3-MPI/
      cp config/make.def{.template,}
      sed -i 's/^MPIF77.*/MPIF77 = mpif77/' config/make.def
      sed -i 's/^MPICC.*/MPICC = mpicc/' config/make.def
      sed -i 's/^FFLAGS.*/FFLAGS  = -O -mcmodel=medium/' config/make.def
- compile_different_MPI_bench:
  - exec_in: |
      cd $$NAS_home/NPB3.3.1/NPB3.3-MPI/
      for nbproc in 1 2 4 8 16 32
      do
        for class in B C D
        do
          for bench in is lu ft
          do
            # Not all IS bench are compiling but we get 48 working
            make -j 4 $bench NPROCS=$nbproc CLASS=$class || true
          done
        done
      done


Finally, as in the previous example, we add the NAS_benchmark macrostep to the ''setup'' section of the recipe, this time overriding the ''NAS_home'' variable:
setup:
  - NAS_benchmark:
    - NAS_home: /root
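Once the environment is built and deployed, a quick sanity check of the compiled benchmarks could look like the following (single node, 4 MPI processes; the exact MPI options depend on the installed OpenMPI version, and multi-node runs additionally need a machine file):
{{Term|location=node|cmd=<code class=command>mpirun</code> --allow-run-as-root -np 4 /root/NPB3.3.1/NPB3.3-MPI/bin/ft.B.4}}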
=== Third example: Add a file ===
Let's add a file to your image. You can access the <code class='file'>steps/data</code>
folder inside Kameleon recipes using the '''$$kameleon_data_dir''' variable.
In this example, we will add a script that clears logs in the image.
First, write a step that copies a script and executes it.
This step must be located at <code class='file'>steps/clean_logs.yaml</code>:
- script_path: /usr/local/sbin
- import_script:
  - local2in:
    - $$kameleon_data_dir/$$script_file_name
    - $$script_path/$$script_file_name
  - exec_in: chmod u+x $$script_path/$$script_file_name
- run_script:
  - exec_in: $$script_path/$$script_file_name
{{Note|text=In this step we are using the ''alias'' command ''local2in'' provided by Kameleon.
See documentation of [http://kameleon.imag.fr/commands.html commands] and [http://kameleon.imag.fr/aliases.html alias] for more details.}}
Here is an example of a cleaning script, to be copied to <code class='file'>steps/data/debian_log_cleaner.sh</code>:
#!/bin/sh
# This is my cleaning script 'cause I don't trust G5K
systemctl stop rsyslog
rm -rf /var/log/*.log*
rm -f /root/.bash_history
{{Note|text=The script content does not really matter, it is just an example. Of course, you could also run these commands directly inside the recipe.}}
Finally, we call that step by modifying the ''setup'' section of the recipe.
We set the variable ''script_file_name'' to select the script in the data folder.
  - clean_logs:
    - script_file_name: debian_log_cleaner.sh
=== Other examples ===
For more complex examples, you may look at the following tutorials:
* [[User:Pneyron/PMEM-environment|PMEM-environment]]
* [[User:Pneyron/ARM64-custom-environment|ARM64-custom-environment]]
== Inspecting the recipe ==
To inspect our environment before launching the build:
* We can look at the information about the environment with <code class=command>kameleon info</code>:
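For instance, with the recipe created above (see <code class=command>kameleon help</code> for the full list of sub-commands):
{{Term|location=node ~/my_recipes|cmd=<code class=command>kameleon info</code> <code class=replace>my_custom_environment</code>}}
* We can also display a ''dry run'' of all the steps that the build would execute with <code class=command>kameleon dryrun</code> <code class=replace>my_custom_environment</code>.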


If any error is raised with those commands, it probably comes from a bad syntax in the recipe (e.g. bad YAML formatting).


== Build and test ==
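The build itself is launched from the recipe directory, with the same command as in the ''Build and test'' section further below:
{{Term|location=node ~/my_recipes|cmd=<code class=command>kameleon build</code> <code class=replace>my_custom_environment</code>}}
Once the build succeeds, the resulting environment description (the <code class=file>.dsc</code> file) can be registered in Kadeploy with: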
{{Term|location=frontend|cmd=<code class=command>kaenv3 -a</code> <code class=replace>~/my_recipe/build/my_custom_environment/my_custom_environment.dsc</code>}}
; File <code class=file>build/my_custom_environment/my_custom_environment.tar.zst</code>
* This is the tarball of our new environment, referred to in the environment description


The recipe also takes care of copying the environment files to your public directory. As a result, it can also be deployed using an HTTP URL (replace SITE by the actual Grid'5000 site):
{{Note|text=The environment tarball can also be used directly, for instance with <code class=command>docker import</code>
After installing docker on a reserved node with <code class=command>g5k-setup-docker</code>, run:
{{Term|location=node|cmd=<code class=command>zstdcat</code> <code class=file>~/my_recipe/build/my_custom_environment/my_custom_environment.tar.zst</code> &#124; <code class=command>docker import</code> <code class=file>-</code> debian11-min}}
Then run the docker container, for instance:
{{term|location=node|cmd=<code class=command>docker run</code> -ti debian11-min bash}}
Please note that kameleon provides interactive debugging of the recipe in case of errors or when breakpoints are inserted in the recipe. See the comments in the recipes, which give the syntax to add a breakpoint.


== About the recipes of the Grid'5000 reference environments ==
Contrary to the recipe presented before, which reuses the tarball of an existing environment, the recipes of the Grid'5000 reference environments are built from scratch using the target system installer, and use Puppet for the Debian stable environments. This section shows how to take advantage of those recipes in case the previous method does not suit your needs.


{{Note|text=Here is a summary of the ''pros & cons'' of the 2 types of recipes.
; Recipe building from an existing environment tarball (previous section)
* Pros:
** Simpler recipes: hides the complexity of the construction of the tarball of the existing environment.
** Quicker build: does not need to build from scratch, does not involve Puppet (even for Debian based environments).
** The setup section of the recipe is left to your customizations (it is empty in the extended template recipe). Nothing is done behind your back.
* Cons:
** May hide too much complexity.
** Understanding the overall environment construction requires looking both at the recipe of the existing environment the tarball is taken from and at the new environment recipe.
** Does not generate qcow2 VM images.
; Recipe extending a Grid'5000 reference environment recipe (sections below)
* Pros:
** Enables using Puppet with the Debian stable recipes.
** Builds both the environment for use with kadeploy and the qcow2 VM image.
** Builds from scratch: the recipe describes everything.
* Cons:
** Longer build, because it installs from scratch (has to run the distribution installer) and uses Puppet (for Debian stable recipes).
** More complex: exposes all the steps needed to build from scratch.
** The setup section of the recipe must include some necessary steps from the extended template recipe (they come with the ''@base'' macrostep, see ''kameleon dryrun'' to inspect what is actually done) that will change the environment: package installation, clean-up, running Puppet. You may have to take this into account in the customizations you bring.
}}
 
We detail below how to work with the Grid'5000 reference environment recipes, first exploiting Puppet (for Debian stable recipes only), second without Puppet.
=== Working with a Grid'5000 reference environment recipe that uses Puppet ===
We present here how to extend a Grid'5000 recipe and '''use Puppet''' to bring some customizations in a traceable way.


As a reminder, Puppet is only used in the Debian stable environment recipes of Grid'5000. We will extend one of those.


The names of the Debian stable (Debian 10 and 11) recipe templates end with a word after a dash: that's the variant name. Variants are <code class="replace">min</code>, <code class="replace">base</code>, <code class="replace">nfs</code>, <code class="replace">xen</code>, <code class="replace">big</code> (see above in this page for more details). Puppet is in charge of configuring the environment operating system with regard to the chosen '''variant'''. All variants are defined as Puppet classes that include each other in the following order:
  min &sub; base &sub; nfs &sub; big
Additionally, the <code class="replace">xen</code> class just includes the <code class="replace">base</code> class, so that the <code class="replace">xen</code> environment is just the <code class="replace">base</code> environment with the Xen hypervisor and tools added.


This means that all changes made in the <code class="replace">min</code> class will affect all other variants. Changes made in the <code class="replace">base</code> class will affect the builds of the <code class="replace">base</code>, <code class="replace">nfs</code>, <code class="replace">big</code> and <code class="replace">xen</code> variants.


; A first simple example, installing the ffmpeg package:
In this example we will extend the <code class="replace">min</code> environment recipe of Debian 11. To do so, we use the <code class=command>kameleon new</code> command as follows:


{{Term|location=node ~/my_recipes|cmd=<code class=command>kameleon new</code> debian11_custom grid5000/debian11-<code class=replace>min</code>.yaml}}


This creates the <code class="file">~/my_recipes/debian11_custom.yaml</code> file, which is our new recipe. Besides, kameleon took care of importing into the directory all the files the new recipe depends on.
Puppet covers a lot of needs that we cannot describe in this documentation. To know more, please refer to the [https://docs.puppetlabs.com/ Puppet documentation].


; Second example, creating a new environment variant:
For bigger changes, one may create a new environment variant. Having our own variant allows keeping our set of customizations separated from the Grid'5000 recipes, which eases maintenance (for example when the Grid'5000 recipes are updated).

In this example, we want to install ''apache2'' in the image. We have to create a user (www-data), add an apache2 configuration file, add the web application (here a simple html file), and ensure the apache2 service is running and enabled (starts at boot time). Therefore, we will extend the '''base''' variant with the modifications listed above.


First, we create a new Kameleon recipe named ''debian11-webserv'', based on ''debian11-common'':
{{Term|location=localhost|cmd=<code class=command>kameleon new</code> debian11-webserv grid5000/debian11-common.yaml}}


Then we create a new Puppet module ''apache2'':


{{Term|location=localhost|cmd=<code class=command>mkdir</code> grid5000/steps/data/setup/puppet/modules/<code class=replace>apache2</code>}}
{{Term|location=localhost|cmd=<code class=command>mkdir</code> grid5000/steps/data/setup/puppet/modules/<code class=replace>apache2</code>/manifests}}
{{Term|location=localhost|cmd=<code class=command>mkdir</code> grid5000/steps/data/setup/puppet/modules/<code class=replace>apache2</code>/files}}


Here is an example of content for <code class=file>grid5000/steps/data/setup/puppet/modules/apache2/manifests/init.pp</code>:
# Module apache2
class apache2 ( ) {
  package {
    "apache2":
      ensure  => installed;
  }
  user {
    "www-data":
      ensure  => present;
  }
  file {
    "/var/www/my_application":
      ensure  => directory,
      owner    => www-data,
      group    => www-data,
      mode    => '0644';
    "/var/www/my_application/index.html":
      ensure  => file,
      owner    => www-data,
      group    => www-data,
      mode    => '0644',
      source  => 'puppet:///modules/apache2/index.html',
      require  => File['/var/www/my_application'];
    "/etc/apache2/sites-available/my_application.conf":
      ensure  => file,
      owner    => root,
      group    => root,
      mode    => '0644',
      source  => 'puppet:///modules/apache2/my_application.conf',
      require  => Package['apache2'];
    "/etc/apache2/sites-enabled/my_application.conf":
      ensure  => link,
      target  => '../sites-available/my_application.conf',
      require  => Package['apache2'],
      notify  => Service['apache2'];
  }
  service {
    "apache2":
      ensure  => running,
      enable  => true,
      require  => Package['apache2'];
  }
}
 
Files <code class=file>my_application.conf</code> and <code class=file>index.html</code> must be stored in <code class=file>grid5000/steps/data/setup/puppet/modules/apache2/files/</code>
 
<code class=file>grid5000/steps/data/setup/puppet/modules/apache2/files/my_application.conf</code>:
<VirtualHost *:80>
    ServerName my_application
    DocumentRoot /var/www/my_application
    ErrorLog /var/log/apache2/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog /var/log/apache2/access.log combined
</VirtualHost>
 
<code class=file>grid5000/steps/data/setup/puppet/modules/apache2/files/index.html</code>:
<html>
    <head>
        <title>Hello World!</title>
    </head>
    <body>
        <P>I &lt;3 Grid'5000!</P>
    </body>
</html>
 
 
We will now integrate this module in a new variant called ''webserv'' that extends the ''base'' variant.
 
First we must create a file <code class=file>grid5000/steps/data/setup/puppet/modules/env/manifests/webserv.pp</code>:
# This file defines the ''webserv'' variant: a user environment based on the ''base'' variant, with apache2 added.
class env::webserv ( ) {
  class { "env::base": } # we include the ''base'' variant here without overloading any of its default parameters
  class { "apache2": }
}
 
To have it included by the actual Puppet setup, we must also create <code class=file>grid5000/steps/data/setup/puppet/manifests/webserv.pp</code>:
# User env containing apache2
# All recipes are stored in ''env'' module. Here called with ''webserv'' variant parameter.
class { 'env':
  given_variant    => 'webserv';
}
 
And finally, modify <code class=file>grid5000/steps/data/setup/puppet/modules/env/manifests/init.pp</code> to include your variant:
  case $variant {
    'min' :  { include env::min }
    'base':  { include env::base }
    'webserv': { include env::webserv }
    'nfs' :  { include env::nfs }
    'prod':  { include env::prod }
    'big' :  { include env::big }
    'xen' :  { include env::xen }
    default: { notify {"flavor $variant is not implemented":}}
  }
 
Then, we instruct the ''debian11-webserv'' recipe to build our ''webserv'' variant, by setting the ''variant'' variable in the ''global'' section of the recipe.
 
We edit <code class=file>debian11-webserv.yaml</code> as follows:
---
extend: grid5000/debian11-common.yaml
global:
    # You can see the base template `grid5000/debian11-common.yaml` to know the
    # variables that you can override
  variant: webserv
bootstrap:
  - "@base"
setup:
  - "@base"
export:
  - "@base"
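The new variant can then be built like any other recipe (same workflow as in the ''Build and test'' section below):
{{Term|location=localhost|cmd=<code class=command>kameleon build</code> debian11-webserv}}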
 
=== Working with a Grid'5000 reference environment recipe without using Puppet ===
 
All Grid'5000 reference environments can also be extended to add modifications using only the kameleon language (not Puppet).
 
Let's say we want to extend <code class=replace>grid5000/debiantesting-min</code>. We run the following command:
 
{{Term|location=node ~/my_recipes|cmd=<code class=command>kameleon new</code> <code class=replace>my_custom_environment</code> <code class=file>grid5000/debiantesting-min</code>}}
 
This creates the <code class="replace">~/my_recipes/my_custom_environment.yaml</code> file, which this time directly describes that it extends <code class=file>grid5000/debiantesting-min</code>.  


We can now edit the recipe file: our customizations of the environment operating system are to be written in the ''setup'' section (''bootstrap'' and ''export'' should not require any changes as long as we work on customizing the environment for Grid'5000).
       - exec_in : apt-get update && apt-get install -y ffmpeg


* <code class=command>"@base"</code> means that the steps from the environment we extend should be executed among our new steps (a bit like when using ''super()'' in a constructor of a class in Java or Ruby to call the constructor of the inherited class). Mind that some operations will be performed behind your back (mind inspecting what the recipe actually does, e.g. with ''kameleon dryrun'').
* <code class=command>exec_in</code> means that the command will be executed with bash '''in''' the VM of the build process. See the [http://kameleon.imag.fr/ kameleon documentation] for other commands.
* <code class=command>install_more_packages</code> is a macrostep, <code class=command>install_ffmpeg</code> is a microstep: it is mandatory to define the 2 levels of steps and to respect the format of a correct YAML document.
If any error is raised with those commands, it probably comes from a bad syntax in the recipe (e.g. bad YAML formatting).


=== Build and test ===
Once the recipe is written, we can launch the build. To do so, we just have to run the following command:
{{Term|location=node ~/my_recipes|cmd=<code class=command>kameleon build</code> <code class=replace>my_custom_environment</code>}}


{{Term|location=frontend|cmd=<code class=command>kaenv3 -a</code> <code class=replace>~/my_recipe/build/my_custom_environment/my_custom_environment.dsc</code>}}
; File <code class=file>build/my_custom_environment/my_custom_environment.tar.zst</code>
* This is the tarball of our new environment, referred to in the environment description
; File <code class=file>build/my_custom_environment/my_custom_environment.qcow2</code>
* It is a qcow2 version of the environment for use with <code class=command>qemu</code> (as seen earlier, it is not built when extending <code class=file>grid5000/from_grid5000_environment/base</code>).
{{Term|location=node|cmd=<code class=command>qemu-system-x86_64</code> -enable-kvm -m 2048 -cpu host <code class=file>~/my_recipe/build/my_custom_environment/my_custom_environment.qcow2</code>}}


= Creating an environment for an unsupported Operating System =
== Creating an environment for an unsupported Operating System ==
{{Note|text=This requires strong system administrator skills and a very good understanding of how the Grid'5000 bare metal deployment functions.}}
If an Operating System is not provided as a Grid'5000 environment already, it should be doable to write a kameleon recipe to build it, assuming it can:
 
If an Operating System is not provided as a Grid'5000 environment already, please mind that kameleon recipe allows installation from scratch of any Linux systems:
* boot the OS installer in a VM and do an unattended installation
* boot the OS installer in a VM and do an unattended installation
* run some additional setup
* run some additional setup
* finally export the built system ready to be consumed by <code class="command">kadeploy3</code>.
* finally export the built system ready to be consumed by <code class="command">kadeploy3</code>.
Therefore please mind using kameleon. Please get in touch with the Grid'50000 technical team to explain your motivation and ideas and possibly to get some help.
Please get in touch with the Grid'50000 technical team to explain your motivation and ideas and possibly to get some help.
{{Warning|text=This task requires strong system administrator skills and a very good understanding of how the Grid'5000 bare metal deployment functions.}}

Revision as of 12:31, 1 June 2022

Note.png Note

This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.

This page presents in detail how to create a Grid'5000 environment. An environment is an operating system image that can be deployed on hardware nodes (bare metal) using kadeploy3.

Grid'5000 provides bare-metal as a service for experimenting on distributed computers, thanks to the kadeploy service. While kadeploy handles the efficient deployment of a user's customized system (also named environment in the Grid'5000 terminology) on many nodes, companion tools allow building such custom system environments. Hence, this page describes the Grid'5000 environment creation process, the several methods for doing it, and helpful tools.

Introduction: the several ways for preparing a custom system environment

There are different ways to prepare a system environment for experiments:

  • (1) A first way consists in deploying a provided environment, for instance debian11-big, and adding software and custom configurations to it after the initial deployment. This has to be done every time a new experiment is run (each new deploy job). While it can be relevant, it has an obvious drawback: the post-deployment setup is not factorized and must be redone every time and on all nodes.
  • (2) A second way consists in building a master system environment with all the wanted customizations, then deploying that pre-built environment on the experiment nodes. This way, the environment preparation is only done once for all times and all nodes: it is factorized.

Once again, building such a customized master system environment can be achieved in different ways:

  • (2-a) A first way consists in deploying an already provided environment (such as one of the Grid'5000 supported reference environments) on one node, doing some customizations on that node, then finally saving the operating system of the node as a master environment image. Then, deploy it on all the nodes of an experiment. This usually involves the tgz-g5k command to create the master environment image (tarball).
  • (2-b) A second way consists in building the master environment to deploy on all the nodes of an experiment from a recipe which describes the whole system environment construction process. This obviously allows for the reconstructibility and sharing of the environment, hence it helps the reproducibility of the experiment. This involves the same build process that is used to produce the Grid'5000 reference environments, using the kameleon tool.
Note.png Note

Even though it is somehow out of context in this page, we can also mention the use of the sudo-g5k command, which allows a user to gain right away the root privileges whenever needed in the production environment (available on machines by default), hence without requiring a deploy job and to actually deploy an environment with kadeploy beforehand. In the context of the creation of custom system environment, using sudo-g5k:

  • can simplify (1), because it avoids requiring an initial deployment. The standard environment is just used.
  • can also simplify (2-a): after using sudo-g5k to modify the standard environment as root, tgz-g5k can be used to export an environment image from the modified standard environment.
In both cases, one must understand that this has some drawbacks: it limits the experiment to using the debian11 standard environment as the base system on the nodes, which may include some unnecessary complexity or limitations.


In the remainder of this page, (1) will not be detailed: it is left to the user to choose a tool to deploy software and configurations on running systems, such as clush, taktuk, ansible, etc.

The next two paragraphs give detailed technical howtos for customizing existing Grid'5000 environments, first following way (2-a), then way (2-b).

About Grid'5000 supported environments

While building a system environment from scratch (only taking a generic OS installation media as a base) may be doable, it is technically extremely difficult (see the last section of this page). Most Grid'5000 users should rather create customized environments on top of existing works, already done for Grid'5000.

The Grid'5000 technical team provides several reference environments, which a user's customized environment can be built on top of. They are maintained in a git repository that includes both the kameleon recipes and the puppet recipes (Kameleon invokes puppet for most of the environment's configuration). The list of packages installed in each environment is managed in the g5k-meta-packages repository.

More information on Grid'5000 reference environments can be found on the Getting started page.

Of course, a user can also build a new customized environment on top of another user's customized environment.

About environment postinstalls

This page concentrates on the generation of the image and description parts of environments. Another important part of an environment is the postinstall. Most Grid'5000 environments use the same postinstall, named g5k-postinstall. Users may however write their own postinstall to replace it or add it as an additional postinstall.

Postinstalls are documented in Advanced_Kadeploy#Customizing_the_postinstalls.

Creating an environment image using tgz-g5k

In this section, following the (2-a) way described above, we explain how to extend an existing Grid'5000 environment by first deploying it on a machine with kadeploy3, then bringing customization to that machine, and finally archiving the operating system of the machine with tgz-g5k to create a new environment image.

Deploy the existing environment on a machine

First, we have to create the deploy job, to reserve a machine on which we will deploy the existing environment of our choice, which our customized environment will be based on.

Note.png Note

At this stage, it is wise to choose a Grid'5000 site and cluster that is not too loaded; furthermore, rather old hardware is of special interest, because newer hardware usually has a significantly longer boot time → see the Hardware page.

We do an interactive job (-I), of the deploy type (-t deploy), on only one machine (-l host=1). We will give ourselves 3 hours with -l walltime=3.

Terminal.png frontend:
oarsub -I -t deploy -l host=1,walltime=3

The interactive job opens a new shell on the frontend (careful: the job ends when exiting that shell).

The hostname of the reserved machine is stored in the file given by $OAR_FILE_NODES, which is used by default by Kadeploy. So we can deploy the reference environment of our choice (or another user's environment that we would like to extend) with kadeploy3:

Terminal.png frontend:
kadeploy3 debian11-base

(if the chosen environment is not registered in kaenv3, see the -a option of kadeploy3 to point to an environment description file).

Customize the environment

Once the deployment has run successfully, we can connect to the machine using ssh as root without password, and do any customization using shell commands.

Terminal.png frontend:
ssh root@hostname

You can therefore update your environment (add any missing library you need, remove any package that you don't need in order to reduce the image size and possibly speed up the deployment process, etc.)

Note: When you are done with the customization, mind clearing temporary files or caches to save disk space.
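For instance, a typical customization and cleanup session on the deployed node could look like the following (an illustrative sketch; the packages shown are only examples, adapt them to your experiment):

apt-get update
apt-get install -y htop tmux   # add the tools and libraries you need (examples)
apt-get clean                  # clear the APT cache to keep the image small
rm -rf /tmp/* /var/tmp/*       # clear temporary files before archiving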

Archive the environment image

We can now archive the customized environment, using tgz-g5k to create a Grid'5000 environment image from the filesystem of the machine. The environment image is a tarball of the filesystem of the OS with some adaptations.

Terminal.png frontend:
tgz-g5k -m hostname -f ~/environment_image.tar.zst

This will create a file named environment_image.tar.zst in your home directory on the frontend.

Note.png Note

About tgz-g5k:

  • If you want to create an image of a machine that runs the Grid'5000 default environment (i.e. not in a deploy job) and that you modified after gaining the root privileges with sudo-g5k, the -o option of tgz-g5k must be used so that the connection to the machine is done using oarsh/oarcp instead of ssh/scp (see the example below).
  • If you want tgz-g5k to access the machine with your user id, use the -u option (default is root).
  • More information on tgz-g5k in tgz-g5k -h or man tgz-g5k.
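For instance, to archive the filesystem of a node running the standard environment that was customized with sudo-g5k (a sketch; replace hostname with the actual node name), one would combine the -o option with the usual ones:

Terminal.png frontend:
tgz-g5k -o -m hostname -f ~/environment_image.tar.zst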

Create the environment description file

The new environment image cannot be deployed directly: the image is only one part of an environment. An environment is described by a YAML document. To use the new image, it must be referred to by an environment description, so that deploying that environment uses the new image. Note that the environment also includes other information, such as the postinstall script and the kernel command line, which can be changed independently from the environment image.

The easiest way to create a description for your new environment is to modify the description of the environment it is based on.

Since we used the debian11-base reference environment, we can retrieve its description using the kaenv3 command and save it to a file. Then we'll use it as a base for the description of our customized environment.

Terminal.png frontend:
kaenv3 -p debian11-base -u deploy > my-custom-environment.yaml
Note.png Note

About the debian std environments: The debian std (e.g. debian11-std) environments are the environments used on nodes by default, providing services such as oar-node as well as custom settings that are necessary for the default system but are useless for user-deployed nodes. Users should rather deploy a debian big environment. However, if it happens that you customized the debian std environment (which may be the case if you made your customizations without deploying, just using sudo-g5k), it is advised to use the description of the debian big environment as a model, rather than that of the debian std one:

Terminal.png frontend:
kaenv3 -p debian11-big -u deploy > my-custom-environment.yaml
This is especially important with regard to the g5k-postinstall command, which must not include --restrict-user std in your environment's description.

We now edit the file to change the environment name, version, description, author, and so on. The image file entry must of course be changed to point to our new environment image tarball file. Since it is stored locally in our home directory, the path can be a simple absolute path (remove the server:// prefix). If the image is placed in your ~/public directory, an HTTP URL can alternatively be used (e.g. http://public.SITE.grid5000.fr/~jdoe/environment_image.tar.zst, replace SITE by the actual site). Finally, the visibility line should be removed or its value changed to shared or private.

---
name: my-debian
version: 1
arch: x86_64
description: my customized environment based on debian 11 (bullseye) - base
author: john@doe.org
visibility: shared
destructive: false
os: linux
image:
  file: /home/jdoe/environment_image.tar.zst
  kind: tar
  compression: zstd
postinstalls:
- archive: server:///grid5000/postinstalls/g5k-postinstall.tgz
  compression: gzip
  script: g5k-postinstall --net debian
boot:
  kernel: "/vmlinuz"
  initrd: "/initrd.img"
  kernel_params:
filesystem: ext4
partition_type: 131
multipart: false
Warning.png Warning

A local path for the tarball (no leading server://) will not work if you are deploying your environment through the API. If you want to use the Kadeploy API, you may want to put your tarball in the public directory of your home and specify the path with HTTP (e.g. http://public.site.grid5000.fr/~username/environment_image.tar.zst)
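For instance, assuming the tarball has been copied to your ~/public directory (jdoe and SITE are placeholders), the image section of the description would become:

image:
  file: http://public.SITE.grid5000.fr/~jdoe/environment_image.tar.zst
  kind: tar
  compression: zstd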

Once this is done, our customized environment is ready to be deployed (in a deploy job) using:

Terminal.png frontend:
kadeploy3 -a my-custom-environment.yaml

(This kind of deployment is called anonymous deployment because the description is not yet in the Kadeploy3 environment registry. It is particularly useful when iterating on the environment, which requires recreating the environment image several times: since registered environments are checksummed, changing the image file of a registered environment would require updating the registration with kaenv3 every time.)

Once your customized environment is ready, it's optionally the time to add it to the Kadeploy3 environment registry:

Terminal.png frontend:
kaenv3 -a my-custom-environment.yaml

Assuming you set the name field in the environment description to "my-debian", it will then be deployable using the following command:

Terminal.png frontend:
kadeploy3 my-debian

If the visibility is set to shared, your environment will show up in the list of available registered environments for any user, using kaenv3 -l -u your_username.

Warning.png Warning

The registration of the environment does not make a copy of the environment image and postinstall files! Do not remove them or the environment will be broken. Also, environments registered by users are not automatically replicated on all sites (there is one kadeploy registry per site).

Creating an environment from a recipe using kameleon

In this section, following the (2-b) way described above, we explain how to build environments from recipes describing the whole creation process, rather than doing interactive modifications on the command line and then using tgz-g5k to export the system to an image. With this method, all steps required to build the environment are written down, which helps traceability, reconstructibility, and sharing.

There are actually different ways of writing recipes. The recipes of the Grid'5000 reference environments, for instance, describe the whole process of installation of the operating system from scratch using the installer of the target Linux distribution. Also, the Debian stable environments, which are provided in several variants, use Puppet for most of the configuration of the system. However, while building from scratch and using Puppet may look very nice, it is also more complex and longer to execute.

As a result, we present a method in this documentation that is simpler and closer to what is actually done when using tgz-g5k:

  1. The recipe will first retrieve the operating system image (tarball) of an existing Grid'5000 environment and run it in a VM: that is the bootstrap stage.
  2. Then, the recipe will allow any customizations of the VM's operating system in the setup stage.
  3. Finally, the export stage will create a new environment from the customized operating system, ready to be consumed by kadeploy.

Recipes are written for and processed by a tool named kameleon, which does the actual build of the environment. Kameleon is a powerful utility to generate operating system images (environments in the Grid'5000 context) from recipes.

A kameleon recipe is composed of a main YAML file: the recipe. That recipe possibly depends on other YAML files: some recipes it extends and some macrosteps. It may also depend on various other files: data, helper scripts, etc.
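Schematically, a recipe following the method presented in this section has the shape below (an illustrative sketch only; the template generated by kameleon new in the next sections is the authoritative starting point, and the names and packages shown here are examples):

---
extend: grid5000/from_grid5000_environment/base.yaml

global:
  # the existing Grid'5000 environment used as a base (example value)
  grid5000_environment_import_name: "debian11-min"

bootstrap:
  - "@base"   # steps of the extended recipe: fetch the tarball and boot it in a VM

setup:
  # our own customization macrosteps go here
  - install_more_packages:
    - install_tools:
      - exec_in: apt-get update && apt-get install -y htop

export:
  - "@base"   # steps of the extended recipe: repack the customized system for kadeploy3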

Kameleon provides features such as context isolation and interactive breakpoints. Context isolation means that kameleon can run a build process without altering the operating system from where the tool is called itself (kameleon typically uses qemu VMs for the build). Kameleon does not need to run as root.

See the Kameleon website for more information on the tool.

Preparing the workspace to use kameleon

To work with kameleon, we suggest creating a fresh directory that will contain our recipes and all their dependencies. Optionally, that directory can of course be versioned in a new git project, in order to keep track of any changes made.

Terminal.png node:
mkdir ~/my_recipes && cd ~/my_recipes
Warning.png Warning

In the remainder of this document, kameleon commands are always run on a Grid'5000 machine in a regular job (not of type deploy), not on the personal workstation, and never on Grid'5000 frontends. The root privilege is not required to build an environment with kameleon (except on PPC64 machines/the drac cluster, where sudo-g5k ppc64_cpu --smt=off must be run to deactivate hyperthreading before running kameleon, because qemu on PPC64 does not support hyperthreading).

First we create an interactive job to reserve a node where to run the kameleon commands:

Terminal.png frontend:
oarsub -I

Kameleon is preinstalled on the nodes.

We then install the repository of Grid'5000 recipes:
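The command takes a repository name and a git URL; at the time of writing, the Grid'5000 recipes are hosted on the Inria GitLab (the URL below is given as an assumption, check the Grid'5000 documentation if it has moved):

Terminal.png node:
kameleon repository add grid5000 https://gitlab.inria.fr/grid5000/environments-recipes.git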

In case you already installed the repository previously, you may want to update it. To do so, run the following command:

Terminal.png node:
kameleon repository update grid5000

You can then list the available recipe templates with:

Terminal.png node:
kameleon template list

Create the recipe of the new environment

The kameleon template list command shows all templates available in the Grid'5000 environment recipes repository.

  • We see here the templates for Debian stable (9, 10, and 11) with their different variants.
  • We also see the templates for other distributions with only the min variant.
  • Finally, we see the template for a recipe that builds from an existing Grid'5000 environment → In this section, we use that one.

So we extend the grid5000/from_grid5000_environment/base recipe by running the following command:

Terminal.png node ~/my_recipes:
kameleon new my_custom_environment grid5000/from_grid5000_environment/base

We can now edit the new recipe file: my_custom_environment.yaml, and look at the global section. A lot of comments are provided to help adapt the recipe to our needs. The most important information to provide in the recipe is what existing environment we want to base our recipe on. This information must be provided in the grid5000_environment_import_name global variable. It must be set to one of the environments that are shown when running kaenv3 -l on a frontend. For instance, we may choose to use debiantesting-min. Most other global variables are commented out (lines beginning with a #) because default values may be just fine. However, we may want to change some of those variables, for instance to specify the user and version of a specific environment.

## Environment to build from
grid5000_environment_import_name: "debiantesting-min"
#grid5000_environment_import_user: "deploy"
#grid5000_environment_import_version: ""
Note.png Note

About the debian std environment: please note that customizing the debian std environments (e.g. debian11-std) is mostly not relevant, since they include services and settings that are only necessary for the default system on nodes (when not deployed). It is preferable to use a debian big environment, which provides all the useful functionalities of debian std (see above the description of the reference environments).

Once done, the important part is to bring our customization steps for the setup of our environment in the setup section (bootstrap and export should not require any changes).

See the Kameleon website for more information, notably a description of the recipe syntax (language) used in the YAML files.

Warning.png Warning

Grid'5000 uses its own recipes that build on recipes provided by the Kameleon developers (by extending them). Please beware that the Kameleon website does not have an up-to-date description of the usage of Kameleon in Grid'5000.

Note.png Note

About the execution contexts of the Kameleon commands:

The Kameleon commands execute in one of the local, out or in contexts (e.g. exec_local, exec_in, ...).

  • local is the operating system from where we call the kameleon executable, e.g. the workstation system.
  • out is usually an intermediary operating system (VM) from where the target operating system being built is prepared, providing tools that may not be available in the local context (e.g. debootstrap). In the Grid'5000 recipes, the out context is usually not used (or technically, it is identical to the in context).
  • in is the operating system that is being built. For Grid'5000 recipes, it is run by Kameleon in a Qemu VM.

Let's show some examples.

First example: install the ffmpeg package

Let's assume we want to install the ffmpeg package in our environment.

We add a new step to the recipe, which is just a sequence of actions to execute. This basically gives a setup section in our recipe as follows:

setup:
 - install_more_packages:
    - install_ffmpeg:
      - exec_in : apt-get update && apt-get install -y ffmpeg
  • exec_in means that the command will be executed with bash in the VM of the build process. See the kameleon documentation for other commands.
  • install_more_packages is a macrostep, it can group one or several microsteps
  • install_ffmpeg is a microstep

It is mandatory to define the 2 levels of steps (macrostep and microstep) and respect the format of a correct YAML document, to have a working recipe.

Optionally, a macrostep and its microsteps can also be defined in a separate file. For instance, we can create the ~/my_recipes/steps/setup/ directory hierarchy and the steps/setup/install_more_packages.yaml file inside, with the following content:

- install_ffmpeg:
    - exec_in : apt-get update && apt-get install -y ffmpeg

Then use it in the recipe with just:

setup:
 - install_more_packages

(no : after install_more_packages, since the macrostep is defined in a separate file).

Second example: Install the NAS Benchmarks

The NAS benchmarks are commonly used to benchmark HPC applications using MPI or OpenMP. In this example, we will download and configure the NAS package and build the MPI FT benchmark.

To do so, we will create a step file, called from the recipe, in ~/my_recipes/steps/setup/NAS_benchmark.yaml. You can notice that a Kameleon variable is used to define NAS_home.

- NAS_home: /tmp
- install_NAS_bench:
  # install dependencies
  - exec_in: apt-get -y install openmpi-bin libopenmpi-dev make gfortran gcc
  - download_file_in:
    - https://www.nas.nasa.gov/assets/npb/NPB3.3.1.tar.gz
    - $$NAS_home/NPB3.3.1.tar.gz
  - exec_in: cd $$NAS_home && tar xf NPB3.3.1.tar.gz
- configure_make_def:
  - exec_in: |
      cd $$NAS_home/NPB3.3.1/NPB3.3-MPI/
      cp config/make.def{.template,}
      sed -i 's/^MPIF77.*/MPIF77 = mpif77/' config/make.def
      sed -i 's/^MPICC.*/MPICC = mpicc/' config/make.def
      sed -i 's/^FFLAGS.*/FFLAGS  = -O -mcmodel=medium/' config/make.def
- compile_different_MPI_bench:
  - exec_in: |
      cd $$NAS_home/NPB3.3.1/NPB3.3-MPI/
      for nbproc in 1 2 4 8 16 32
      do
        for class in B C D
        do
          for bench in is lu ft
          do
            # Not all IS benchmarks compile, but we get 48 working binaries
            make -j 4 $bench NPROCS=$nbproc CLASS=$class || true
          done
        done
      done

As in the previous example, we finally add the NAS_benchmark macrostep to the setup section of the recipe, taking as parameter the NAS_home variable.

setup:
  - NAS_benchmark:
    - NAS_home: /root
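Once an environment built with this recipe is deployed, the compiled binaries end up under the NAS_home directory, in NPB3.3.1/NPB3.3-MPI/bin. For instance, with NAS_home set to /root as above, the class B FT benchmark could be run on 4 MPI processes with (a sketch; --allow-run-as-root is needed because deployed nodes are accessed as root):

Terminal.png node:
mpirun --allow-run-as-root -np 4 /root/NPB3.3.1/NPB3.3-MPI/bin/ft.B.4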

Third example: Add a file

Let's add a file to your image. You can access the steps/data folder inside Kameleon recipes using the $$kameleon_data_dir variable.

In this example, we will add a script that clears logs in the image.

First, write a step that copies a script and executes it. This step must be located at steps/clean_logs.yaml:

- script_path: /usr/local/sbin
- import_script:
  - local2in:
    - $$kameleon_data_dir/$$script_file_name
    - $$script_path/$$script_file_name
  - exec_in: chmod u+x $$script_path/$$script_file_name
- run_script:
  - exec_in: $$script_path/$$script_file_name
Note.png Note

In this step we are using the alias command local2in provided by Kameleon. See the documentation of commands and aliases for more details.

Here is an example of a cleaning script that must be copied in steps/data/debian_log_cleaner.sh.

#!/bin/sh
# This is my cleaning script 'cause I don't trust G5K
systemctl stop rsyslog
rm -rf /var/log/*.log*
rm -f /root/.bash_history
Note.png Note

The script content does not really matter; it is just an example. Of course, you can also run these commands directly inside the recipe, as shown below.
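For instance, an inline equivalent of the same cleanup (a sketch that does exactly what the script above does) could be added to the setup section as:

 - clean_logs_inline:
    - remove_logs:
      - exec_in: |
          systemctl stop rsyslog
          rm -rf /var/log/*.log*
          rm -f /root/.bash_history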

Finally, we call that step by modifying the setup section of the recipe. We set the variable script_file_name to select the script in the data folder.

  - clean_logs:
    - script_file_name: debian_log_cleaner.sh

Other examples

For more complex examples, you may look at the following tutorials:

Inspecting the recipe

To inspect our environment before launching the build:

  • We can look at the information about the environment with kameleon info:
Terminal.png node ~/my_recipes:
kameleon info my_custom_environment
  • We can look at what the build of the environment involves by running kameleon dryrun:
Terminal.png node ~/my_recipes:
kameleon dryrun my_custom_environment

Those commands are of great help to find out about the recipe's macrosteps and microsteps, files, variables, etc...

If any error is raised with those commands, it probably comes from a bad syntax in the recipe (e.g. bad YAML formatting).

Build and test

Once the recipe is written, we can launch the build. To do so, we just have to run following command:

Terminal.png node ~/my_recipes:
kameleon build my_custom_environment
Warning.png Warning

Depending on different factors, e.g. the size of the image being created (which variant) and the hardware used (SSD or HDD), the build process can last from a few minutes to much longer.

We end up with a build directory that contains the freshly built files we are interested in:

File build/my_custom_environment/my_custom_environment.dsc
  • This is the description file of the new environment (this is a YAML file, the file extension does not really matter, be it .dsc, .env or .yaml)
  • It can be used either directly with kadeploy to run the deployment without registering the environment:

After creating a new job of type deploy, we run the following kadeploy command from the frontend:

Terminal.png frontend:
kadeploy3 -a ~/my_recipes/build/my_custom_environment/my_custom_environment.dsc
  • Or to register the environment with kaenv3, for later use with kadeploy.
Terminal.png frontend:
kaenv3 -a ~/my_recipes/build/my_custom_environment/my_custom_environment.dsc
File build/my_custom_environment/my_custom_environment.tar.zst
  • This is the tarball of our new environment, referred to in the environment description

The recipe also takes care of copying the environment files into your public directory. As a result, the environment can also be deployed using an HTTP URL (replace SITE by the actual Grid'5000 site):
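For instance (an illustrative command; jdoe and SITE are placeholders, and the exact file name under your public directory may differ, check the build output):

Terminal.png frontend:
kadeploy3 -a http://public.SITE.grid5000.fr/~jdoe/my_custom_environment.dsc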

Note.png Note

The environment tarball can also be used directly, for instance with docker import

After installing docker on a reserved node with g5k-setup-docker, run:

Terminal.png node:
zstdcat ~/my_recipes/build/my_custom_environment/my_custom_environment.tar.zst | docker import - debian11-min

Then run the docker container, for instance:

Terminal.png node:
docker run -ti debian11-min bash

Of course, writing recipes, building the environment, and testing it may be a trial and error process requiring to loop over the different stages.

Please note that kameleon provides interactive debugging of the recipe in case of errors or when breakpoints are inserted in the recipe. See the comments in the recipes, which give the syntax to add a breakpoint; an example is sketched below.
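For instance, a breakpoint can be placed as an action inside a microstep, right before the point you want to inspect (a sketch; check the comments of the generated recipe for the exact syntax supported by your kameleon version):

setup:
 - install_more_packages:
    - install_ffmpeg:
      - breakpoint      # pause the build here and open an interactive kameleon shell
      - exec_in: apt-get update && apt-get install -y ffmpeg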

About the recipes of the Grid'5000 reference environments

Contrary to the recipe presented before, which reuses the tarball of an existing environment, the recipes of the Grid'5000 reference environments build from scratch using the target system installer, and use Puppet for the Debian stable environments. This section shows how to take advantage of those recipes in case the previous method does not suit your needs.

Note.png Note

Here is a summary of pros & cons of the 2 types of recipes

Recipe building from an existing environment tarball (previous paragraph)
  • Pros:
    • Simpler recipes: hide the complexity of the construction of the tarball of the existing environment.
    • Quicker build: does not need to build from scratch, does not involve Puppet (even for Debian based environments).
    • The setup section of the recipe is left to your customizations (it is empty in the extended template recipe). Nothing is done behind your back.
  • Cons:
    • May hide too much complexity.
    • Understanding the overall environment construction requires looking both at the recipe of the existing environment the tarball is taken from and at the new environment recipe.
    • Does not generate qcow2 VM images.
Recipe extending the Grid'5000 reference environment recipe (paragraphs below)
  • Pros:
    • Enables using Puppet with Debian stable recipes.
    • Builds both the environment for use with kadeploy and the qcow2 VM image.
    • Builds from scratch: the recipe describes everything.
  • Cons:
    • Longer build, because it installs from scratch (it has to run the distribution installer) and uses Puppet (for Debian stable recipes).
    • More complex: exposes all the steps needed to build from scratch.
    • The setup section of the recipe must include some necessary steps from the extended template recipe (they come with the @base macrostep, see kameleon dryrun to inspect what is actually done) that will change the environment: they install some packages, do some clean-up, and run Puppet. You may have to accommodate that in the customizations you bring.

We detail below how to work with the Grid'5000 reference environment recipes, first exploiting Puppet (for Debian stable recipes only), second without Puppet.

Working with a Grid'5000 reference environment recipe that uses Puppet

We present here how to extend a Grid'5000 recipe and use Puppet to bring some customizations in a traceable way.

As a reminder, Puppet is only used in the Debian stable environment recipes of Grid'5000. We will extend one of those.

The names of the Debian stable (Debian 10 and 11) recipe templates end with a word after a dash: that's the variant name. Variants are min, base, nfs, xen, big (see above in this page for more details). Puppet is in charge of configuring the environment operating system with regard to the chosen variant. All variants are defined as Puppet classes that include each other in the following order:

min ⊂ base ⊂ nfs ⊂ big

Additionally, the xen class just includes the base class, so that the xen environment is just the base environment with the Xen hypervisor and tools added.

This means that all changes made in the min class will affect all other variants. Changes made in the base class will affect the builds of the base, nfs, big, and xen variants.

A first simple example, installing the ffmpeg package

In this example we will extend the min environment recipe of Debian 11. To do so, we use the kameleon new command as follows:

Terminal.png node ~/my_recipes:
kameleon new debian11_custom grid5000/debian11-min.yaml

This creates the ~/my_recipes/debian11_custom.yaml file, which is our new recipe. Besides, kameleon took care of importing into the directory all the files the new recipe depends on.

You can list the recipes which are present in your workspace using the list command:

Terminal.png node ~/my_recipes:
kameleon list

You can see your new debian11_custom recipe along with the recipes that it extends directly or indirectly.

You can look at the steps involved in the build of the recipes using the kameleon dryrun command:

Terminal.png node ~/my_recipes:
kameleon dryrun debian11_custom

We see that the setup section has setup and run steps for an orchestrator: that is the part of the recipe that prepares everything needed to use Puppet and runs it.

Since we want to write our customization with the Puppet language, we do not have to modify the debian11_custom.yaml recipe file much. We may just change the environment description by editing the recipe file ~/my_recipes/debian11_custom.yaml, and changing the description field starting at line 4, for instance:

#==============================================================================
#
# DESCRIPTION: My Grid'5000 Debian Bullseye
#
#==============================================================================

Once done, we can close the file and look at the Puppet code.

Puppet is a software configuration management tool that includes its own declarative language to describe system configuration. It is a model-driven solution that requires limited programming knowledge to use.

The Puppet modules used by the Grid'5000 reference environments are located in ~/my_recipes/grid5000/steps/data/setup/puppet/modules/env/manifests/.

For our use case, we can look at the commonpackages.pp file. This is a really simple file that requests packages to be installed.

We can install ffmpeg like this:

class env::commonpackages{
  ...
  package{ 'ffmpeg':
    ensure => installed;
  }
  ...
}

This is a quite simple use case; a package like postfix, which requires more configuration, would be more complex! You may look at the existing Puppet classes to find out how it works. Puppet covers a lot of needs that we cannot describe in this documentation. To know more, please refer to the Puppet documentation.

Second example, creating a new environment variant

For bigger changes, one may create a new environment variant. Having our own variant will allow keeping our set of customizations separate from the Grid'5000 recipes, which will ease maintenance (for example, if the Grid'5000 recipes are updated).

In this example, we want to install apache2 in the image. We have to create a user (www-data), add an apache2 configuration file, add the web application (here a simple HTML file), and ensure that the apache2 service is running and enabled (starts at boot time). Therefore, we will extend the base variant of the environment with the modifications listed above.

First, we create a new Kameleon recipe named debian11-webserv, based on debian11-common:

Terminal.png localhost:
kameleon new debian11-webserv grid5000/debian11-common.yaml

Then we create a new Puppet module apache2:

Terminal.png localhost:
mkdir grid5000/steps/data/setup/puppet/modules/apache2
Terminal.png localhost:
mkdir grid5000/steps/data/setup/puppet/modules/apache2/manifests
Terminal.png localhost:
mkdir grid5000/steps/data/setup/puppet/modules/apache2/files

Here is an example of content for grid5000/steps/data/setup/puppet/modules/apache2/manifests/init.pp:

# Module apache2

class apache2 ( ) {

  package {
    "apache2":
      ensure  => installed;
  }
  user {
    "www-data":
      ensure   => present;
  }
  file {
    "/var/www/my_application":
      ensure   => directory,
      owner    => www-data,
      group    => www-data,
      mode     => '0755';
    "/var/www/my_application/index.html":
      ensure   => file,
      owner    => www-data,
      group    => www-data,
      mode     => '0644',
      source   => 'puppet:///modules/apache2/index.html',
      require  => File['/var/www/my_application'];
    "/etc/apache2/sites-available/my_application.conf":
      ensure   => file,
      owner    => root,
      group    => root,
      mode     => '0644',
      source   => 'puppet:///modules/apache2/my_application.conf',
      require  => Package['apache2'];
    "/etc/apache2/sites-enabled/my_application.conf":
      ensure   => link,
      target   => '../sites-available/my_application.conf',
      require  => Package['apache2'],
      notify   => Service['apache2'];
  }
  service {
    "apache2":
      ensure   => running,
      enable   => true,
      require  => Package['apache2'];
  }
}

Files my_application.conf and index.html must be stored in grid5000/steps/data/setup/puppet/modules/apache2/files/

grid5000/steps/data/setup/puppet/modules/apache2/files/my_application.conf:

<VirtualHost *:80>

    ServerName my_application

    DocumentRoot /var/www/my_application

    ErrorLog /var/log/apache2/error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog /var/log/apache2/access.log combined

</VirtualHost>

grid5000/steps/data/setup/puppet/modules/apache2/files/index.html:

<html>
    <head>
        <title>Hello World!</title>
    </head>
    <body>
        <P>I &#60;3 Grid'5000!</P>
    </body>
</html>


We will now integrate this module in a new variant called webserv that extends the base variant.

First we must create a file grid5000/steps/data/setup/puppet/modules/env/manifests/webserv.pp:

# This file contains the env::webserv class, used to configure a user environment based on the base variant, with apache2 added.

class env::webserv ( ) {

  class { "env::base": } # we include base variant here without overloading any of it's default parameters
  class { "apache2": }
}

To have it included by the actual Puppet setup, we must also create grid5000/steps/data/setup/puppet/manifests/webserv.pp:

# User env containing apache2
# All recipes are stored in env module. Here called with webserv variant parameter.

class { 'env':
  given_variant    => 'webserv';
}

And finally, modify grid5000/steps/data/setup/puppet/modules/env/manifests/init.pp to include your variant:

 case $variant {
   'min' :  { include env::min }
   'base':  { include env::base }
   'webserv': { include env::webserv }
   'nfs' :  { include env::nfs }
   'prod':  { include env::prod }
   'big' :  { include env::big }
   'xen' :  { include env::xen }
   default: { notify {"flavor $variant is not implemented":}}
 }

Then, we instruct the debian11-webserv recipe to build our webserv variant by modifying the variant variable in the global section of the recipe.

We edit debian11-webserv.yaml as follows:

---
extend: grid5000/debian11-common.yaml

global:
    # You can see the base template `grid5000/debian11-common.yaml` to know the
    # variables that you can override
  variant: webserv

bootstrap:
  - "@base"

setup:
  - "@base"

export:
  - "@base"

Working with a Grid'5000 reference environment recipe without using Puppet

All Grid'5000 reference environments can also be extended to add modifications using only the kameleon language (not Puppet).

Let's say we want to extend grid5000/debiantesting-min. We run the following command:

Terminal.png node ~/my_recipes:
kameleon new my_custom_environment grid5000/debiantesting-min

This creates the ~/my_recipes/my_custom_environment.yaml file, which this time directly describes that it extends grid5000/debiantesting-min.

We can now edit the recipe file; our customizations of the environment operating system are to be written in the setup section (bootstrap and export should not require any changes as long as we work on customizing the environment for Grid'5000).

For example, just like in the previous section with Puppet, we describe below how to install the ffmpeg package (but in the kameleon language this time).

We add a new step to the recipe, which is just a sequence of actions to execute. This basically gives a setup section in our recipe as follows:

setup:
 - "@base"
 - install_more_packages:
    - install_ffmpeg:
      - exec_in : apt-get update && apt-get install -y ffmpeg
  • "@base" means that the steps from the environment we extend should be executed among our new steps (a bit like when using super() in a constructor of a class in Java or Ruby to call the constructor of the inherited class). Mind that some operations will be performed in your back (mind inspecting what the recipe actually does, e.g. with kameleon dryrun)
  • exec_in means that the command will be executed with bash in the VM of the build process. See the kameleon documation for other commands.
  • install_more_packages is a macrostep, install_ffmpeg is a microstep: It is mandatory to define the 2 level of steps and respect the format of a correct YAML document.

Please notice that this is very similar to what we did when extending the grid5000/from_grid5000_environment/base recipe, but the @base is added, since we want the setup section of the extended recipe to be executed.

Again, to inspect our environment before launching the build:

  • We can look at the information about the environment with kameleon info:
Terminal.png node ~/my_recipes:
kameleon info my_custom_environment
  • We can look at what the build of the environment involves by running kameleon dryrun:
Terminal.png node ~/my_recipes:
kameleon dryrun my_custom_environment

If any error is raised with those commands, it probably comes from a bad syntax in the recipe (e.g. bad YAML formatting).

Build and test

Once the recipe is written, we can launch the build. To do so, we just have to run following command:

Terminal.png node ~/my_recipes:
kameleon build my_custom_environment

We end up with a build directory that contains the freshly built files we are interested in:

File build/my_custom_environment/my_custom_environment.dsc
  • This is the description file of the new environment (this is a YAML file, the file extension does not really matter, be it .dsc, .env or .yaml)
  • It can be used either directly with kadeploy to run the deployment without registering the environment:

After creating a new job of type deploy, we run the following kadeploy command from the frontend:

Terminal.png frontend:
kadeploy3 -a ~/my_recipes/build/my_custom_environment/my_custom_environment.dsc
  • Or to register the environment with kaenv3, for later use with kadeploy.
Terminal.png frontend:
kaenv3 -a ~/my_recipes/build/my_custom_environment/my_custom_environment.dsc
File build/my_custom_environment/my_custom_environment.tar.zst
  • This is the tarball of our new environment, referred to in the environment description
File build/my_custom_environment/my_custom_environment.qcow2
  • It is a qcow2 version of the environment for use with qemu (as seen earlier, it is not built when extending grid5000/from_grid5000_environment/base).

Just run Qemu on the image:

Terminal.png node:
qemu-system-x86_64 -enable-kvm -m 2048 -cpu host ~/my_recipes/build/my_custom_environment/my_custom_environment.qcow2
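If you prefer to interact with the VM over SSH rather than through the Qemu console, user-mode networking with a forwarded port can be added (a sketch; the option syntax may need adjusting to your Qemu version, and the image must allow root SSH login):

Terminal.png node:
qemu-system-x86_64 -enable-kvm -m 2048 -cpu host -netdev user,id=net0,hostfwd=tcp::2222-:22 -device virtio-net-pci,netdev=net0 ~/my_recipes/build/my_custom_environment/my_custom_environment.qcow2

Then connect from the node with ssh -p 2222 root@localhost.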

Creating an environment for an unsupported Operating System

If an Operating System is not provided as a Grid'5000 environment already, it should be doable to write a kameleon recipe to build it, assuming it can:

  • boot the OS installer in a VM and do an unattended installation
  • run some additional setup
  • finally export the built system ready to be consumed by kadeploy3.

Please get in touch with the Grid'5000 technical team to explain your motivation and ideas and possibly to get some help.

Warning.png Warning

This task requires strong system administrator skills and a very good understanding of how the Grid'5000 bare metal deployment functions.