Environment creation

Note.png Note

This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.

This page presents in detail how to create a Grid'5000 environment. An environment is an operating system image that can be deployed on hardware nodes (bare metal) using kadeploy3.

Grid'5000 provides bare-metal as a service for experimenting on distributed computers, thanks to the kadeploy service. While kadeploy handles the efficient deployment of a user's customized system (also named environment in the Grid'5000 terminology) on many nodes, companion tools allow building such custom system environments. Hence, this page describes the Grid'5000 environment creation processes, covering the several methods for doing it and the helpful tools.

Introduction: the several ways for preparing a custom system environment

There are different ways to prepare a system environment for experiments:

  • (1) A first way consists in deploying a provided environment, for instance debian11-big, and adding software and custom configurations to it after the initial deployment. This has to be done every time a new experiment is run (new deploy job). While it can be relevant, it has an obvious bias: the post-deployment setup is not factorized and must be redone every time and on all nodes.
  • (2) A second way consists in building a master system environment with all the wanted customizations, then deploying that pre-built environment on the experiment nodes. This way, the environment preparation is only done once for all times and all nodes: it is factorized.

Once again, building such a customized master system environment can be achieved in different ways:

  • (2-a) A first way consists in deploying an already provided environment (such as one of the Grid'5000 supported reference environments) on one node, doing some customizations on that node, then finally saving the operating system of the node as a master environment image. Then, deploy it on all the nodes of an experiment. This usually involves the tgz-g5k command to create the master environment image (tarball).
  • (2-b) A second way consists in building the master environment to deploy on all the nodes of an experiment from a recipe which describes the whole system environment construction process. This obviously allows for the reconstructibility and sharing of the environment, hence it helps the reproducibility of the experiment. This involves the same build process that is used to produce the Grid'5000 reference environments, using the kameleon tool.
Note.png Note

Even though it is somehow out of context in this page, we can also mention the use of the sudo-g5k command, which allows a user to gain root privileges right away whenever needed in the production environment (available on machines by default), hence without requiring a deploy job or actually deploying an environment with kadeploy beforehand. In the context of the creation of a custom system environment, using sudo-g5k:

  • can simplify (1), because it avoids requiring an initial deployment. The standard environment is just used.
  • can also simplify (2-a): after using sudo-g5k to modify the standard environment as root, tgz-g5k can be used to export an environment image from the modified standard environment.
In both cases, one must understand that this however has some drawbacks: it limits the experiment to using the debian11 standard environment as the base system on its nodes, which may include some unnecessary complexity or limitations.


In the remainder of this page, (1) will not be detailed: it is left to the user to choose a tool to deploy software and configurations on running systems, such as clush, taktuk, ansible, etc.
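As a minimal sketch of way (1), assuming the nodes of the deploy job have already been deployed with a reference environment and that ffmpeg is just an arbitrary example package, a plain ssh loop over the job's nodes could look like this:

for node in $(uniq $OAR_FILE_NODES); do
  # kadeploy3-deployed nodes accept passwordless root ssh connections
  ssh root@"$node" 'apt-get update && apt-get install -y ffmpeg'
done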

The next two sections give detailed technical how-tos for customizing existing Grid'5000 environments, first following way (2-a), then way (2-b).

About Grid'5000 supported environments

While building a system environment from scratch (only taking a generic OS installation medium as a base) may be doable, it is technically extremely difficult (see the last section of this page). Most Grid'5000 users should rather create customized environments on top of existing work already done for Grid'5000.

The Grid'5000 technical team provides several reference environments, which a user's customized environment can be built on top of. They are maintained in a git repository that includes both the kameleon recipes and the puppet recipes (Kameleon invokes puppet for most of the environment's configuration). The list of packages installed in each environment is managed in the g5k-meta-packages repository.

More information on Grid'5000 reference environments can be found on the Getting started page.

Of course, a user can also build a new customized environment on top of another user's customized environment.

About environment postinstalls

This page concentrates on the generation of the image and description parts of environments. Another important part of an environment is the postinstall. Most Grid'5000 environments use the same postinstall, named g5k-postinstall. Users may however write their own postinstall to replace it, or add one as an additional postinstall.

Postinstalls are documented in Advanced_Kadeploy#Customizing_the_postinstalls.

Creating an environment image using tgz-g5k

In this section, following the (2-a) way described above, we explain how to extend an existing Grid'5000 environment by first deploying it on a machine with kadeploy3, then bringing customization to that machine, and finally archiving the operating system of the machine with tgz-g5k to create a new environment image.

Deploy the existing environment on a machine

First, we have to create the deploy job, to reserve a machine on which we will deploy the existing environment of our choice, which our customized environment will be based on.

Note.png Note

At this stage, it is wise to choose a Grid'5000 site and cluster that is not too loaded, furthermore using rather old hardware is of special interest because newer hardware usually has significantly longer boot time → see the Hardware page.

We do an interactive job (-I), of the deploy type (-t deploy), on only one machine (-l host=1). We will give ourselves 3 hours with -l walltime=3.

Terminal.png frontend:
oarsub -I -t deploy -l host=1,walltime=3

The interactive job opens a new shell on the frontend (careful: the job ends when exiting that shell).

The hostname of the reserved machine is stored in the $OAR_FILE_NODES file which is used by default by Kadeploy. So we can deploy the reference environment of our choice (or another user's environment that we would like to extend) with kadeploy3:

Terminal.png frontend:
kadeploy3 debian11-base

(if the chosen environment is not registered in kaenv3, see the -a option of kadeploy3 to point to an environment description file).

Customize the environment

Once the deployment has run successfully, we can connect to the machine using ssh as root without a password, and do any customization using shell commands.

Terminal.png frontend:
ssh root@hostname

You can therefore update your environment (to add any missing library you need, remove any package that you don't need in order to size down the image and possibly speed up the deployment process, etc.).

Note: When you are done with the customization, mind clearing temporary files or caches to save disk space.
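For illustration, a typical customization followed by the recommended clean-up might look as follows (run as root on the deployed node; the package is an arbitrary example):

apt-get update && apt-get install -y ffmpeg   # add the software your experiment needs
apt-get clean                                 # drop the APT package cache
rm -rf /var/lib/apt/lists/* /tmp/*            # clear temporary files to keep the image small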

Archive the environment image

We can now archive the customized environment, using tgz-g5k to create a Grid'5000 environment image from the filesystem of the machine. The environment image is a tarball of the filesystem of the OS with some adaptations.

Terminal.png frontend:
tgz-g5k -m hostname -f ~/environment_image.tar.zst

This will create a file named environment_image.tar.zst in your home directory on the frontend.

Note.png Note

About tgz-g5k:

  • If you want to create an image of a machine that runs the Grid'5000 default environment (i.e. not in a deploy job) and that you modified after gaining root privileges using sudo-g5k, the -o option of tgz-g5k must be used so that the connection to the machine is done using oarsh/oarcp instead of ssh/scp (see the example after this list).
  • If you want tgz-g5k to access the machine with your user id, use the -u option (default is root).
  • More information on tgz-g5k in tgz-g5k -h or man tgz-g5k.
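For instance, in the sudo-g5k case described above, the invocation would plausibly become (sketch only; hostname being the node of your regular, non-deploy job):

Terminal.png frontend:
tgz-g5k -o -m hostname -f ~/environment_image.tar.zst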

Create the environment description file

The new environment image cannot be deployed directly: the image is only one part of an environment. An environment is described by a YAML document. To use the new image, it must be referenced by an environment description, so that deploying that environment uses the new image. Note that the environment also includes other information, such as the postinstall script and the kernel command line, which can be changed independently of the environment image.

The easiest way to create a description for your new environment is to modify the description of the environment it is based on.

Since we used the debian11-base reference environment, we can retrieve its description using the kaenv3 command and save it to a file. Then we'll use it as a base for the description of our customized environment.

Terminal.png frontend:
kaenv3 -p debian11-base > my-custom-environment.yaml
Note.png Note

About the architecture: debian11-base is the generic name of the environment. The specific architecture, like x64, arm64 or ppc64 could be added to use the alias of the environment. Example: debian11-x64-base.

Note.png Note

About the debian std environments: The debian std (e.g. debian11-std) environments are the environments used on nodes by default, providing services such as oar-node as well as custom settings that are necessary for the default system but are useless for user-deployed nodes. Users should rather deploy a debian big environment. However, if it happens that you customized the debian std environment (it may be the case if you made your customizations without deploying, just using sudo-g5k), it is advised to take as a model of environment description that of the debian big environment rather than of the debian std one:

Terminal.png frontend:
kaenv3 -p debian11-big > my-custom-environment.yaml
This is especially important with regard to the g5k-postinstall command, which must not include --restrict-user std in your environment's description.

We now edit the file to change the environment name, version, description, author, and so on. The image file entry must of course be changed to point to our new environment image tarball file. Since it is stored locally in our home directory, the path can be a simple absolute path (remove the server:// prefix). If the image is placed in your ~/public directory, an HTTP URL can alternatively be used (e.g. http://public.SITE.grid5000.fr/~jdoe/environment_image.tar.zst, replace SITE by the actual site). Finally, the visibility line should be removed or its value changed to shared or private.

---
name: my-debian
version: 1
arch: x86_64
description: my customized environment based on debian 11 (bullseye) - base
author: john@doe.org
visibility: shared
destructive: false
os: linux
image:
  file: /home/jdoe/environment_image.tar.zst
  kind: tar
  compression: zstd
postinstalls:
- archive: server:///grid5000/postinstalls/g5k-postinstall.tgz
  compression: gzip
  script: g5k-postinstall --net debian
boot:
  kernel: "/vmlinuz"
  initrd: "/initrd.img"
  kernel_params: ""
filesystem: ext4
partition_type: 131
multipart: false
Warning.png Warning

A local path for the tarball (no leading server://) will not work if you are deploying your environment through the API. If you want to use the Kadeploy API, you may want to put your tarball in the public directory of your home and specify the path with HTTP (e.g. http://public.site.grid5000.fr/~username/environment_image.tar.zst)
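As a sketch of that setup (assuming the login jdoe and the lille site; adapt to your own login and site), copying the tarball to the web-served ~/public directory is enough:

Terminal.png frontend:
cp ~/environment_image.tar.zst ~/public/

The image can then be referenced in the description as http://public.lille.grid5000.fr/~jdoe/environment_image.tar.zst.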

Once this is done, our customized environment is ready to be deployed (in a deploy job) using:

Terminal.png frontend:
kadeploy3 -a my-custom-environment.yaml

(This kind of deployment is called anonymous deployment because the description is not yet in the Kadeploy3 environment registry. It is particularly useful when working by iteration on the environment, thus having to recreate the environment image several times. Otherwise, since registered environments are checksummed, changing the image file requires updating the registration every time with kaenv3)

Once your customized environment is ready, it's optionally the time to add it to the Kadeploy3 environment registry:

Terminal.png frontend:
kaenv3 -a my-custom-environment.yaml

Assuming you set the name field in the environment description to "my-debian", it will then be deployable using the following command:

Terminal.png frontend:
kadeploy3 my-debian

If the visibility is set to shared, your environment will show up in the list of registered environments available to any user, using kaenv3 -l -u your_username.
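For example, assuming the login jdoe, any user could then check that the shared environment is listed with:

Terminal.png frontend:
kaenv3 -l -u jdoe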

Warning.png Warning

The registration of the environment does not make a copy of the environment image and postinstall files! Do not remove them or the environment will be broken. Also, environments registered by users are not automatically replicated on all sites (there is one kadeploy registry per site).

Creating an environment from a recipe using kameleon

In this section, following the (2-b) way described above, we explain how to build environments from recipes describing the whole creation process, rather than doing interactive modifications on the command line and then using tgz-g5k to export the system to an image. With this method, all steps required to build the environment are written down, which helps traceability, reconstructability, and sharing.

There are actually different ways of writing recipes. The recipes of the Grid'5000 reference environments, for instance, describe the whole process of installation of the operating system from scratch using the installer of the target Linux distribution. Also, the Debian stable environments, which are provided in several variants, use Puppet for most of the configuration of the system. However, while building from scratch and using Puppet may look very nice, it is also complex and longer to execute.

As a result, we present a method in this documentation that is simpler and closer to what is actually done when using tgz-g5k:

  1. The recipe will first retrieve the operating system image (tarball) of an existing Grid'5000 environment and run it in a VM. That's the bootstrap stage.
  2. Then, the recipe will allow any customizations of the VM's operating system in the setup stage.
  3. Finally, the export stage will create a new environment from the customized operating system, ready to be consumed by kadeploy.

Recipes are written for and processed by a tool named kameleon, which does the actual build of the environment. Kameleon is a powerful utility to generate operating system images (environments in the Grid'5000 context) from recipes.

A kameleon recipe is composed of a main YAML file: the recipe. That recipe possibly depends on other YAML files: some recipes it extends and some macro steps. It also depends on various other files: data, helper scripts,....

Kameleon provides features such as context isolation and interactive breakpoints. Context isolation means that kameleon can run a build process without altering the operating system from where the tool is called itself (kameleon typically uses qemu VMs for the build). Kameleon does not need to run as root.

See the Kameleon website for more information on the tool.

Preparing the workspace to use kameleon

To work with kameleon, we suggest creating a fresh directory that will contain our recipes and all their dependencies. Optionally, that directory can of course be versioned in a new git project, in order to keep track of any changes made.

Terminal.png node:
mkdir ~/my_recipes && cd ~/my_recipes
Warning.png Warning

In the remainder of this document, kameleon commands are always run on a Grid'5000 machine in a regular job (not of type deploy), not on your personal workstation, and never on Grid'5000 frontends. The root privilege is not required to build an environment with kameleon (except on PPC64 machines/the drac cluster, where sudo-g5k ppc64_cpu --smt=off must be run to deactivate hyperthreading before running kameleon, because qemu on PPC64 does not support hyperthreading).

First we create an interactive job to reserve a node where to run the kameleon commands:

Terminal.png frontend:
oarsub -I

Kameleon is preinstalled on the nodes.

We then install the repository of Grid'5000 recipes:

Terminal.png node:

In case you already installed the repository previously, you may want to update it. To do so, run the following command:

Terminal.png node:
kameleon repository update grid5000

You can then list the available recipe templates with:

Terminal.png node:
kameleon template list

Create the recipe of the new environment

The kameleon template list command shows all templates available in the Grid'5000 environment recipes repository.

  • We see here the templates for Debian stable (9, 10, and 11) with their different variants.
  • We also see the templates for other distributions with only the min variant.
  • Finally, we see the template for a recipe that builds from an existing Grid'5000 environment → In this section, we use that one.

So we extend the grid5000/from_grid5000_environment/base recipe by running the following command:

Terminal.png node ~/my_recipes:
kameleon new my_custom_environment grid5000/from_grid5000_environment/base

We can now edit the new recipe file: my_custom_environment.yaml, and look at the global section. A lot of comments are provided to help adapt the recipe to our needs. The most important information to provide in the recipe is which existing environment we want to base our recipe on. This information must be provided in the grid5000_environment_import_name global variable. It must be set to one of the environments that are shown when running kaenv3 -l on a frontend. For instance we may choose to use debiantesting-x64-min. Most other global variables are commented (lines begin with a #) because the default values may be just fine. However, we may want to change some of those variables, for instance to specify the user and version of a specific environment.

## Environment to build from
grid5000_environment_import_name: "debiantesting-min"
#grid5000_environment_import_user: "deploy"
#grid5000_environment_import_version: ""
Note.png Note

Your recipe uses a template named grid5000/from_grid5000_environment/base and several macrostep files it depends on. Those files may change over time because of bug fixes or other evolutions. As a result, it may be interesting to fetch updates from time to time when working on your recipe. This can be achieved with the following commands, first to update the template repository, then to update your recipe files.

Terminal.png node ~/my_recipes:
kameleon repo update grid5000
Terminal.png node ~/my_recipes:
kameleon template import grid5000/from_grid5000_environment/base
This will possibly show conflicts that should be resolved by overwriting the old version of the files.

Note.png Note

About the debian std environment: please note that customizing the debian std (e.g. debian11-std) is mostly not relevant since it includes services and settings that are only necessary for the default system on nodes (when not deployed). It is preferable to use a debian big environment, which provides all the useful functionalities of debian std (see above the description of the reference environments).

Once done, the important part is to bring our customization steps for the setup of our environment in the setup section (bootstrap and export should not require any changes).

See the Kameleon website for more information, notably a description of the recipe syntax (language) used in the YAML files.

Warning.png Warning

Grid'5000 uses its own recipes that build on recipes provided by the Kameleon developers (by extending them). Please beware that the Kameleon website does not have an up-to-date description of the usage of Kameleon in Grid'5000.

Note.png Note

About the execution contexts of the Kameleon commands:

The Kameleon commands execute in one of the local, out or in contexts (e.g. exec_local, exec_in, ...).

  • local is the operating system from where we call the kameleon executable, e.g. the workstation system.
  • out is usually an intermediary operating system (VM) from where the target operating system being built is prepared, providing tools that may not be available in the local context (e.g. debootstrap). In the Grid'5000 recipes, the out context is usually not used (or technically, it is identical to the in context).
  • in is the operating system that is being built. For Grid'5000 recipes, it is run by Kameleon in a Qemu VM.

Let's show some examples.

First example: install the ffmpeg package

Let's assume we want to install the ffmpeg package in our environment.

We add a new step to the recipe, which is just a sequence of actions to execute. This basically gives a setup section in our recipe as follows:

setup:
 - install_more_packages:
    - install_ffmpeg:
      - exec_in : apt-get update && apt-get install -y ffmpeg
  • exec_in means that the command will be executed with bash in the VM of the build process. See the kameleon documentation for other commands.
  • install_more_packages is a macrostep, it can group one or several microsteps
  • install_ffmpeg is a microstep

It is mandatory to define the 2 levels of steps (macrostep and microstep) and respect the format of a correct YAML document, to have a working recipe.

Optionally a macrostep and its microsteps can also be defined in a separate file. For instance, we can create the ~/my_recipes/steps/setup/ directory hierarchy and the steps/setup/install_more_packages.yaml file inside, with the following content:

- install_ffmpeg:
    - exec_in : apt-get update && apt-get install -y ffmpeg

Then use it in the recipe with just:

setup:
 - install_more_packages

(no : after install_more_packages, since the macrostep is defined in a separate file).

Second example: Install the NAS Benchmarks

The NAS benchmarks are commonly used to benchmark HPC applications using MPI or OpenMP. In this example, we will download and configure the NAS package and build the MPI FT benchmark.

To do so we will create a step file that will be called from the recipe, in ~/my_recipes/steps/setup/NAS_benchmark.yaml. You can notice that a Kameleon variable is used to define NAS_home.

- NAS_home: /tmp
- install_NAS_bench:
  # install dependencies
  - exec_in: apt-get -y install openmpi-bin libopenmpi-dev make gfortran gcc
  - download_file_in:
    - https://www.nas.nasa.gov/assets/npb/NPB3.3.1.tar.gz
    - $$NAS_home/NPB3.3.1.tar.gz
  - exec_in: cd $$NAS_home && tar xf NPB3.3.1.tar.gz
- configure_make_def:
  - exec_in: |
      cd $$NAS_home/NPB3.3.1/NPB3.3-MPI/
      cp config/make.def{.template,}
      sed -i 's/^MPIF77.*/MPIF77 = mpif77/' config/make.def
      sed -i 's/^MPICC.*/MPICC = mpicc/' config/make.def
      sed -i 's/^FFLAGS.*/FFLAGS  = -O -mcmodel=medium/' config/make.def
- compile_different_MPI_bench:
  - exec_in: |
      cd $$NAS_home/NPB3.3.1/NPB3.3-MPI/
      for nbproc in 1 2 4 8 16 32
      do
        for class in B C D
        do
          for bench in is lu ft
          do
            # Not all IS bench are compiling but we get 48 working
            make -j 4 $bench NPROCS=$nbproc CLASS=$class || true
          done
        done
      done

As in the previous example, we finally add the NAS_benchmark macrostep to the setup section of the recipe, taking as parameter the NAS_home variable.

setup:
  - NAS_benchmark:
    - NAS_home: /root

Third example: Add a file

Let's add a file to your image. You can access the steps/data folder inside Kameleon recipes using the $$kameleon_data_dir variable.

In this example, we will add a script that clears logs in the image.

First, write a step that copies a script and executes it. This step must be located at steps/clean_logs.yaml:

- script_path: /usr/local/sbin
- import_script:
  - local2in:
    - $$kameleon_data_dir/$$script_file_name
    - $$script_path/$$script_file_name
  - exec_in: chmod u+x $$script_path/$$script_file_name
- run_script:
  - exec_in: $$script_path/$$script_file_name
Note.png Note

In this step we are using the alias command local2in provided by Kameleon. See the documentation of commands and aliases for more details.

Here is an example of a cleaning script, which must be copied to steps/data/debian_log_cleaner.sh.

#!/bin/sh
# This is my cleaning script 'cause I don't trust G5K
systemctl stop rsyslog
rm -rf /var/log/*.log*
rm -f /root/.bash_history
Note.png Note

The script content does not really matter, it is just an example. Of course, you can also run these commands directly inside the recipe.

Finally, we call that step by modifying the setup section of the recipe. We set the variable script_file_name to select the script in the data folder.

  - clean_logs:
    - script_file_name: debian_log_cleaner.sh

Other examples

For more complex examples, you may look at the following tutorials:

Inspecting the recipe

To inspect our environment before launching the build:

  • We can look at the information about the environment with kameleon info:
Terminal.png node ~/my_recipes:
kameleon info my_custom_environment
  • We can look at what the build of the environment involves without actually building by running kameleon build --dryrun:
Terminal.png node ~/my_recipes:
kameleon build --dryrun my_custom_environment

Those commands are of great help to find out about the recipe's macrosteps and microsteps, files, variables, etc...

If any error is raised with those commands, it probably comes from a bad syntax in the recipe (e.g. bad YAML formatting).

Build and test

Once the recipe is written, we can launch the build. To do so, we just have to run the following command:

Terminal.png node ~/my_recipes:
kameleon build my_custom_environment
Warning.png Warning

Depending on different factors (e.g. the size of the image you are about to create (which variant), the hardware used (SSD or HDD)), the build process can last from a few minutes to much longer.

We end up with a build directory that contains the freshly built files we are interested in:

File build/my_custom_environment/my_custom_environment.dsc
  • This is the description file of the new environment (this is a YAML file, the file extension does not really matter, be it .dsc, .env or .yaml)
  • It can be used either directly with kadeploy to run the deployment without registering the environment.

The my_custom_environment.dsc file may need to be edited to set the image file path: use local:///home/yourlogin/my_recipe/build/my_custom_environment/my_custom_environment.tar.zst to point to a local file, but you may also specify a URL if you have put the file in your public directory (see below for an example of that).

After creating a new job of type deploy, we run the following kadeploy command from the frontend:

Terminal.png frontend:
kadeploy -a ~/my_recipe/build/my_custom_environment/my_custom_environment.dsc
  • Or to register the environment with kaenv3, for later use with kadeploy.
Terminal.png frontend:
kaenv3 -a ~/my_recipe/build/my_custom_environment/my_custom_environment.dsc
File build/my_custom_environment/my_custom_environment.tar.zst
  • This is the tarball of our new environment, referred to in the environment description

The recipe also takes care of copying the environment files to your public directory. As a result, it can also be deployed using an HTTP URL (replace SITE by the actual Grid'5000 site):
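For instance (illustrative only, assuming the login jdoe and that the description file keeps the same name once copied to ~/public; adapt SITE, login and file name to your case):

Terminal.png frontend:
kadeploy3 -a http://public.SITE.grid5000.fr/~jdoe/my_custom_environment.dsc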

Note.png Note

The environment tarball can also be used directly, for instance with docker import

After installing docker on a reserved node with g5k-setup-docker, run:

Terminal.png node:
zstdcat ~/my_recipe/build/my_custom_environment/my_custom_environment.tar.zst | docker import - debian11-min

Then run the docker container, for instance:

Terminal.png node:
docker run -ti debian11-min bash

Of course, writing recipes, building the environment, and testing it may be a trial and error process, requiring looping over the different stages.

Please note that kameleon provides interactive debugging of the recipe in case of errors or when breakpoints are inserted in the recipe. See the comments in the recipes, which give the syntax to add a breakpoint.

About the recipes of the Grid'5000 reference environments

Contrary to the recipe presented before, which reuses the tarball of an existing environment, the recipes of the Grid'5000 reference environments are built from scratch using the target system installer, and use Puppet for the Debian stable environments. This section shows how to take advantage of those recipes in case the previous method does not suit your needs.

Note.png Note

Here is a summary of pros & cons of the 2 types of recipes

Recipe building from an existing environment tarball (previous paragraph)
  • Pros:
    • Simpler recipes: hide the complexity of the construction of the tarball of the existing environment.
    • Quicker build: does not need to build from scratch, does not involve Puppet (even for Debian based environments).
    • The setup section of the recipe is left to your customizations (it is empty in the extended template recipe). Nothing is done behind your back.
  • Cons:
    • May hide too much complexity.
    • Understanding the overall environment construction requires looking at both the recipe of the existing environment which the tarball is taken from and of the new environment recipe.
    • Does not generate qcow2 VM images.
Recipe extending the Grid'5000 reference environment recipe (paragraphs below)
  • Pros:
    • Enables using Puppet with Debian stable recipes.
    • Builds both the environment for use with kadeploy and the qcow2 VM image.
    • Builds from scratch: the recipe describes everything.
  • Cons:
    • Longer build, because it installs from scratch (has to run the distribution installer) and uses Puppet (for Debian stable recipes).
    • More complex: exposes all the steps to build from scratch.
    • The setup section of the recipe must include some necessary steps from the extended template recipe (they come with the @base macrostep, see kameleon build --dryrun to inspect what is actually done) that will change the environment: they install some packages, do some clean-up, and run Puppet. You may have to compose with that in the customizations you bring.

We detail below how to work with the Grid'5000 reference environment recipes, first exploiting Puppet (for Debian stable recipes only), second without Puppet.

Working with a Grid'5000 reference environment recipe that uses Puppet

We present here how to extend a Grid'5000 recipe and use Puppet to bring some customizations in a traceable way.

As a reminder, Puppet is only used in the Debian stable environment recipes of Grid'5000. We will extend one of those.

The names of the Debian stable (Debian 10 and 11) recipe templates end with a word after a dash: that's the variant name. Variants are min, base, nfs, big (see above in this page for more details). Puppet is in charge of configuring the environment operating system with regard to the chosen variant. All variants are defined as Puppet classes that include each other in the following order:

min ⊂ base ⊂ nfs ⊂ big

This means that all changes made in the min class will affect all other variants. Changes made in the base class will affect the builds of both the base and big variants.

A first simple example, installing the ffmpeg package

In this example we will extend the min environment recipe of Debian 11. To do so, we use the kameleon new command as follows:

Terminal.png node ~/my_recipes:
kameleon new debian11_custom grid5000/debian11-min.yaml

This creates the ~/my_recipes/debian11_custom.yaml file, which is our new recipe. Besides, kameleon took care of importing into the directory all the files the new recipe depends on.

You can list the recipes which are present in your workspace using the list command:

Terminal.png node ~/my_recipes:
kameleon list

You can see your new debian11_custom recipe along with the recipes that it extends directly or indirectly.

You can look at the steps involved in the build of the recipes using the kameleon build --dryrun command:

Terminal.png node ~/my_recipes:
kameleon build --dryrun debian11_custom

We see that the setup section has setup and run steps for an orchestrator: that's the part of the recipe that prepares everything to use Puppet and runs it.

Since we want to write our customization with the Puppet language, we do not have to modify the debian11_custom.yaml recipe file much. We may just change the environment description by editing the recipe file ~/my_recipes/debian11_custom.yaml, and changing the description field starting at line 4, for instance:

#==============================================================================
#
# DESCRIPTION: My Grid'5000 Debian Bullseye
#
#==============================================================================

Once done, we can close the file and look at the Puppet code.

Puppet is a software configuration management tool that includes its own declarative language to describe system configuration. It is a model-driven solution that requires limited programming knowledge to use.

The Puppet modules used by the Grid'5000 reference environments are located in ~/my_recipes/grid5000/steps/data/setup/puppet/modules/env/manifests/.

For our use case, we can look at the commonpackages.pp file. This is a really simple file that requests packages to be installed.

We can install ffmpeg like this:

class env::commonpackages{
}
...
class env::commonpackages::ffmpeg{
  package{ 'ffmpeg':
    ensure => installed;
  }
}
...

This is a quite simple use case, but if you have a package like postfix, which requires more configuration, it could be more complex! You may look at the Puppet classes to find out how it works. Puppet covers a lot of needs that we cannot describe in this documentation. To know more, please refer to the Puppet documentation.

Second example, creating a new environment variant

For bigger changes, one may create a new environment variant. Having our own variant will allow keeping our set of customizations separate from the Grid'5000 recipes, which will ease maintenance (for example if the Grid'5000 recipes are updated).

In this example, we want to install apache2 in the image. We have to create a user (www-data), add an apache2 configuration file, add the web application (here a simple html file), and ensure the apache2 service is running and enabled (starts at boot time). Therefore, we will extend the base variant of the environment with the modifications listed before.

First, we create a new Kameleon recipe named debian11-webserv, based on debian11-common:

Terminal.png localhost:
kameleon new debian11-webserv grid5000/debian11-common.yaml

Then we create a new Puppet module apache2:

Terminal.png localhost:
mkdir grid5000/steps/data/setup/puppet/modules/apache2
Terminal.png localhost:
mkdir grid5000/steps/data/setup/puppet/modules/apache2/manifests
Terminal.png localhost:
mkdir grid5000/steps/data/setup/puppet/modules/apache2/files

Here is an example of content for grid5000/steps/data/setup/puppet/modules/apache2/manifests/init.pp:

# Module apache2

class apache2 ( ) {

  package {
    "apache2":
      ensure  => installed;
  }
  user {
    "www-data":
      ensure   => present;
  }
  file {
    "/var/www/my_application":
      ensure   => directory,
      owner    => www-data,
      group    => www-data,
      mode     => '0644';
    "/var/www/my_application/index.html":
      ensure   => file,
      owner    => www-data,
      group    => www-data,
      mode     => '0644',
      source   => 'puppet:///modules/apache2/index.html',
      require  => File['/var/www/my_application'];
    "/etc/apache2/sites-available/my_application.conf":
      ensure   => file,
      owner    => root,
      group    => root,
      mode     => '0644',
      source   => 'puppet:///modules/apache2/my_application.conf',
      require  => Package['apache2'];
    "/etc/apache2/sites-enabled/my_application.conf":
      ensure   => link,
      target   => '../sites-available/my_application.conf',
      require  => Package['apache2'],
      notify   => Service['apache2'];
  }
  service {
    "apache2":
      ensure   => running,
      enable   => true,
      require  => Package['apache2'];
  }
}
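Optionally, if a puppet command is available where you edit the recipe, you can check the syntax of the manifest before building:

Terminal.png localhost:
puppet parser validate grid5000/steps/data/setup/puppet/modules/apache2/manifests/init.pp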

Files my_application.conf and index.html must be stored in grid5000/steps/data/setup/puppet/modules/apache2/files/

grid5000/steps/data/setup/puppet/modules/apache2/files/my_application.conf:

<VirtualHost *:80>

    ServerName my_application

    DocumentRoot /var/www/my_application

    ErrorLog /var/log/apache2/error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog /var/log/apache2/access.log combined

</VirtualHost>

grid5000/steps/data/setup/puppet/modules/apache2/files/index.html:

<html>
    <head>
        <title>Hello World!</title>
    </head>
    <body>
        <P>I &#60;3 Grid'5000!</P>
    </body>
</html>


We will now integrate this module in a new variant called webserv that extends the base variant.

First we must create a file grid5000/steps/data/setup/puppet/modules/env/manifests/webserv.pp:

# This file contains the env::webserv class, used to configure a user environment based on the base variant, with apache2 added.

class env::webserv ( ) {

  class { "env::base": } # we include base variant here without overloading any of it's default parameters
  class { "apache2": }
}

To have it included by the actual Puppet setup, we must also create grid5000/steps/data/setup/puppet/manifests/webserv.pp:

# User env containing apache2
# All variants are implemented in the env module. Here it is called with the webserv variant parameter.

class { 'env':
  given_variant    => 'webserv';
}

And finally, modify grid5000/steps/data/setup/puppet/modules/env/manifests/init.pp to include your variant:

 case $variant {
   'min' :  { include env::min }
   'base':  { include env::base }
   'webserv': { include env::webserv }
   'nfs' :  { include env::nfs }
   'prod':  { include env::prod }
   'big' :  { include env::big }
   default: { notify {"flavor $variant is not implemented":}}
 }

Then, we instruct the debian11-webserv recipe to build our webserv variant by setting the variant variable in the global section of the recipe.

We edit debian11-webserv.yaml as follows:

---
extend: grid5000/debian11-common.yaml

global:
    # You can see the base template `grid5000/debian11-common.yaml` to know the
    # variables that you can override
  variant: webserv

bootstrap:
  - "@base"

setup:
  - "@base"

export:
  - "@base"

Working with a Grid'5000 reference environment recipe without using Puppet

All Grid'5000 reference environments can also be extended to add modifications using only the kameleon language (not Puppet).

Let's say we want to extend grid5000/debiantesting-min. We run the following command:

Terminal.png node ~/my_recipes:
kameleon new my_custom_environment grid5000/debiantesting-min

This creates the ~/my_recipes/my_custom_environment.yaml file, which this time directly describes that it extends grid5000/debiantesting-min.

We can now edit the recipe file. Our customizations of the environment's operating system are to be written in the setup section (bootstrap and export should not require any changes as long as we work on customizing the environment for Grid'5000).

For example, just like in the previous section with Puppet, we describe below how to install the ffmpeg package (but in the kameleon language this time).

We add a new step to the recipe, which is just a sequence of actions to execute. This gives the following setup section in our recipe:

setup:
  - "@base"
  - install_more_packages:
    - install_ffmpeg:
      - exec_in: apt-get update && apt-get install -y ffmpeg
  • "@base" means that the steps from the environment we extend should be executed among our new steps (a bit like when using super() in a constructor of a class in Java or Ruby to call the constructor of the inherited class). Mind that some operations will be performed in your back (mind inspecting what the recipe actually does, e.g. with kameleon build --dryrun)
  • exec_in means that the command will be executed with bash inside the VM of the build process. See the kameleon documentation for the other commands.
  • install_more_packages is a macrostep and install_ffmpeg is a microstep: it is mandatory to define both levels of steps and to respect the format of a correct YAML document (see the sketch below for a macrostep containing several microsteps).
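For instance, here is a minimal sketch of a macrostep containing several microsteps; the extra packages are only examples:

setup:
  - "@base"
  - install_more_packages:
    - update_package_lists:
      - exec_in: apt-get update
    - install_ffmpeg:
      - exec_in: apt-get install -y ffmpeg
    - install_extra_tools:
      - exec_in: apt-get install -y htop tmux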

Please notice that this is very similar to what we did when extending the grid5000/from_grid5000_environment/base recipe, except that the @base step is added here, since we want the setup section of the extended recipe to be executed as well.

Again, to inspect our environment before launching the build:

  • We can look at the information about the environment with kameleon info:
Terminal.png node ~/my_recipes:
kameleon info my_custom_environment
  • We can look at what the build of the environment involves by running kameleon build --dryrun:
Terminal.png node ~/my_recipes:
kameleon build --dryrun my_custom_environment

If any error is raised by those commands, it probably comes from bad syntax in the recipe (e.g. bad YAML formatting).

Build and test

Once the recipe is written, we can launch the build. To do so, we just have to run the following command:

Terminal.png node ~/my_recipes:
kameleon build my_custom_environment

We end up with a build directory that contains the freshly built files we are interested in:

File build/my_custom_environment/my_custom_environment.dsc
  • This is the description file of the new environment (this is a YAML file; the file extension does not really matter, be it .dsc, .env or .yaml)
  • It can be used either directly with kadeploy to run the deployment without registering the environment:

After creating a new job of type deploy, we run the following kadeploy command from the frontend:

Terminal.png frontend:
kadeploy3 -a ~/my_recipes/build/my_custom_environment/my_custom_environment.dsc
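Depending on the kadeploy version and setup, the target nodes may also need to be given explicitly; for instance, from an interactive deploy job where $OAR_NODE_FILE lists the reserved nodes, something like:

Terminal.png frontend:
kadeploy3 -a ~/my_recipes/build/my_custom_environment/my_custom_environment.dsc -f $OAR_NODE_FILE -k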
  • Or to register the environment with kaenv3, for later use with kadeploy.
Terminal.png frontend:
kaenv3 -a ~/my_recipes/build/my_custom_environment/my_custom_environment.dsc
File build/my_custom_environment/my_custom_environment.tar.zst
  • This is the tarball of our new environment, referred to in the environment description
File build/my_custom_environment/my_custom_environment.qcow2
  • It is a qcow2 version of the environment for use with qemu (as seen earlier, it is not built when extending grid5000/from_grid5000_environment/base).

Just run Qemu on the image:

Terminal.png node:
qemu-system-x86_64 -enable-kvm -m 2048 -cpu host ~/my_recipes/build/my_custom_environment/my_custom_environment.qcow2

Creating an environment for an unsupported Operating System

If an Operating System is not already provided as a Grid'5000 environment, it should be doable to write a kameleon recipe to build it, assuming the recipe can:

  • boot the OS installer in a VM and do an unattended installation
  • run some additional setup
  • finally export the built system ready to be consumed by kadeploy3.

Please get in touch with the Grid'5000 technical team to explain your motivations and ideas, and possibly to get some help.

Warning.png Warning

This task requires strong system administration skills and a very good understanding of how the Grid'5000 bare-metal deployment works.