Virtualization in Grid'5000
Revision as of 17:51, 22 June 2021
Note: This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.
Purpose
This page presents how to use KVM on the standard environment (with a "non-deploy" reservation). The aim is to allow the execution of virtual machines on the nodes, along with a subnet reservation, which gives you a range of routed IPs for your experiment.
In the first part, you will learn the basics of g5k-subnets, which is a prerequisite for the rest of this tutorial. The Quick start explains how to run a VM on the standard environment in a minimal number of steps. The next part is optional: it explains in detail the contextualization mechanism, which allows you to customize your virtual machines. In the Multi-site experiment section, we will deploy 2 VMs on 2 sites and measure the network bandwidth between them with iperf.
Finally, an alternative to KVM on the standard environment is quickly introduced: the Xen reference environments.
Prerequisite: Network subnets reservation with g5k-subnets
Users deploying VMs on Grid'5000 need to assign IP addresses to them. Each site of Grid'5000 is allocated a /14 block for this purpose, divided into 4 smaller blocks.
OAR can be used to reserve a range of IPs. OAR shares the IP resources among users while avoiding potential IP conflicts.
More details are available on the Subnet reservation page.
Reservation
A subnet reservation is done similarly to a compute resource reservation, in OAR.
To reserve 4 /22 subnets and 2 nodes, just type:
You can of course make more complex requests. To obtain 4 /22 on different /19 subnets, you can type:
To request a node from a specific cluster, advanced OAR usage is needed:
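The reservation commands themselves are elided in this rendering; below is a hedged sketch in OAR syntax. The walltime and the cluster name grisou are illustrative assumptions, not values from this page.

```shell
# Sketch of typical oarsub invocations (run on a Grid'5000 frontend, not
# locally); walltime and the cluster name are illustrative assumptions.
RESERVE_BASIC="oarsub -I -l slash_22=4+nodes=2,walltime=2"
RESERVE_CLUSTER="oarsub -I -l slash_22=4+{\"cluster='grisou'\"}nodes=2,walltime=2"
printf '%s\n%s\n' "$RESERVE_BASIC" "$RESERVE_CLUSTER"
```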
Usage
The simplest way to get the list of your allocated subnets is to use the g5k-subnets script provided on the head node of the submission.
# g5k-subnets 10.8.0.0 10.8.8.0
Several other printing options are available (-p to display the CIDR format, -b to display the broadcast address, -n to see the netmask; -a is equivalent to -bnp):
# g5k-subnets -a
10.8.0.0/21  10.11.255.255  255.255.252.0  10.11.255.254
10.8.8.0/21  10.11.255.255  255.255.252.0  10.11.255.254
You can also summarize the subnets into a larger one if they are contiguous:
# g5k-subnets -sp
10.8.0.0/20
You can display all the available IPs in your reservation, and their associated unique MAC addresses, with the following command:
# g5k-subnets -im
10.158.16.1 00:16:3E:9E:10:01
...
Note: For detailed information, see the Subnet reservation page. The Grid5000:Network page also describes the organization of the virtual IP space inside Grid'5000.
Quick start
In this part, we will create a virtual machine in a few steps and SSH to it.
Job submission
To easily test the KVM environment, we use an interactive job and reserve one subnet and one node with hardware virtualization capabilities.
Disk image, virtual machine
A disk image containing Debian 10 Buster is available at the following path:
/grid5000/virt-images/debian10-x64-base.qcow2
Copy it to the node; it will be the base image for our VMs:
If we want to create multiple VMs, we would have to copy the qcow2 as many times as the number of VMs we want.
To save storage space, we can instead use debian10-x64-base.qcow2 as a backing file:
By doing this, domain1.qcow2 only stores the differences from debian10-x64-base.qcow2 (not the whole image).
If you want to create a second virtual machine based on the same image, simply run the same command with domain2.qcow2 instead of domain1.qcow2.
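The backing-file step can be sketched with qemu-img as below (the same invocation is used later in this tutorial for contextualized-domain.qcow2); the helper name make_overlay is ours.

```shell
# Create thin copy-on-write overlays on top of the base image; each overlay
# only stores the blocks that differ from the base.
BASE=/tmp/debian10-x64-base.qcow2
make_overlay() {
    # -o backing_file records $BASE in the new image's metadata
    qemu-img create -f qcow2 -o backing_file="$BASE" "/tmp/$1.qcow2"
}
# On the node: make_overlay domain1, then make_overlay domain2
```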
Choose a MAC address
As seen before, g5k-subnets maintains a correspondence between MAC addresses and IP addresses. The Debian system provided on the disk image is configured to use DHCP and the DHCP server will assign the IP corresponding to the MAC address of the virtual machine.
Consequently, you have to choose an IP in the range you have reserved, and set the MAC address of the VM to the associated MAC address.
You can get the list of available IPs, each with an associated unique MAC address, with the following command:
10.172.0.1  00:16:3E:AC:00:01
10.172.0.2  00:16:3E:AC:00:02
10.172.0.3  00:16:3E:AC:00:03
10.172.0.4  00:16:3E:AC:00:04
10.172.0.5  00:16:3E:AC:00:05
10.172.0.6  00:16:3E:AC:00:06
10.172.0.7  00:16:3E:AC:00:07
10.172.0.8  00:16:3E:AC:00:08
10.172.0.9  00:16:3E:AC:00:09
10.172.0.10 00:16:3E:AC:00:0A
...
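The IP-to-MAC pairing above is deterministic: the MAC embeds the hexadecimal values of the last three octets of the IP, appended to the 00:16:3E prefix. A small sketch reproducing it (the helper name ip_to_mac is ours):

```shell
# Derive the g5k-subnets MAC address paired with a 10.x.y.z IP: the last
# three octets of the IP, in hex, appended to the 00:16:3E prefix.
ip_to_mac() {
    a=$(echo "$1" | cut -d. -f2)
    b=$(echo "$1" | cut -d. -f3)
    c=$(echo "$1" | cut -d. -f4)
    printf '00:16:3E:%02X:%02X:%02X\n' "$a" "$b" "$c"
}
ip_to_mac 10.172.0.1    # -> 00:16:3E:AC:00:01
```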
Run the guest OS using libvirt
Libvirt is a toolkit for managing virtualization servers. Libvirt is also an abstraction layer over different virtualization solutions, including KVM but also Xen and VMware ESX.
In our case, we use libvirt on top of KVM.
- Create an XML domain file describing the virtual machine, e.g. domain1.xml:
<domain type='kvm'>
<name>domain1</name>
<memory>2048000</memory>
<vcpu>1</vcpu>
<os>
<type arch="x86_64">hvm</type>
</os>
<clock offset="localtime"/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/tmp/domain1.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
<interface type='bridge'>
<source bridge='br0'/>
<mac address='AA:BB:CC:DD:EE:FF'/>
</interface>
<serial type='pty'>
<source path='/dev/ttyS0'/>
<target port='0'/>
</serial>
<console type='pty'>
<source path='/dev/ttyS0'/>
<target port='0'/>
</console>
</devices>
</domain>
Now, we can run and manage our guest OS with virsh.
- Run the guest with the following command:
- We can see our guest is currently running:
Id   Name      State
---------------------------------------
1    domain1   running
- You can connect to your VM console:
  - The default root password is grid5000
  - Use CTRL+] to disconnect from virsh console (or CTRL+ALT+6 on macOS)
Note: To have a graphical user interface (GUI) for the VM, you can configure VNC access. To do so, simply add the following line to the devices section of the domain XML: <graphics type='vnc' listen='0.0.0.0' autoport='yes' keymap='en-us'/>
Get the VNC port used by the VM, then create an SSH tunnel from your personal computer to the node, and point a VNC client at localhost:PORT.
- At this point, you can repeat the full process and launch several VMs in parallel.
- Stop the execution of your VM with:
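The virsh commands elided in the steps above can be sketched as follows (standard libvirt CLI, assuming the domain is named domain1 as in the XML file above):

```shell
# Sketch of the elided commands (standard libvirt CLI, run on the node);
# "domain1" matches the <name> element of the XML file above.
DOMAIN=domain1
if command -v virsh >/dev/null; then
    virsh create "${DOMAIN}.xml"   # define and start the transient guest
    virsh list                     # shows the guest as "running"
    virsh console "$DOMAIN"        # serial console; detach with CTRL+]
    virsh destroy "$DOMAIN"        # stop the VM
else
    echo "virsh not found here; these commands target the reserved node"
fi
```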
Run the guest OS using the qemu/kvm command
- You can also use the qemu/kvm command to start the virtual machine (replace AA:BB:CC:DD:EE:FF with your chosen MAC address):
node$ qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 2048 -rtc base=localtime -no-acpi -drive file=/tmp/debian10-x64-base.qcow2,cache=unsafe,media=disk,if=virtio,id=drive0 -netdev bridge,id=br0 -device virtio-net-pci,netdev=br0,mac=AA:BB:CC:DD:EE:FF -nographic
Or with a VNC display:
node$ qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 2048 -rtc base=localtime -no-acpi -drive file=/tmp/debian10-x64-base.qcow2,cache=unsafe,media=disk,if=virtio,id=drive0 -netdev bridge,id=br0 -device virtio-net-pci,netdev=br0,mac=AA:BB:CC:DD:EE:FF -device cirrus-vga,id=video0 -vnc :0
These are example commands; feel free to adapt them to your use case. (The qemu process can be launched in a screen session in order to be detachable. If you are not familiar with screen, read its documentation.)
SSH to your virtual machine
Root SSH authentication with a password is disabled by default. To SSH to your VM, follow these steps:
- Log into your VM console using virsh console domain1. The root password is grid5000.
- Run these commands to allow root login with a password in the SSH configuration, and reload the SSH daemon:
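The elided commands presumably amount to editing sshd_config and reloading the daemon; here is a sketch using standard OpenSSH options (the exact commands on the image may differ):

```shell
# Allow root password login in an sshd_config file. On the VM, apply this to
# /etc/ssh/sshd_config, then reload with: systemctl reload ssh
enable_root_login() {
    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' "$1"
}
# Demonstration on a scratch copy:
printf '#PermitRootLogin prohibit-password\n' > /tmp/sshd_config.demo
enable_root_login /tmp/sshd_config.demo
cat /tmp/sshd_config.demo    # -> PermitRootLogin yes
```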
Finally, you can ssh directly to your VM from anywhere in Grid'5000:
Contextualize your VMs with cloud-init
As we have seen, we must use the console of our VM to configure SSH before we can connect to it. This is a bit annoying with many VMs: we would have to manually configure SSH on each instance from the console.
This part describes how to contextualize your VM using cloud-init.
cloud-init ( https://cloudinit.readthedocs.io/en/latest/ ) runs on startup of the VM and searches for a datasource providing configuration to apply to the VM, such as:
- Set the hostname
- Create users
- Copy SSH key to root account
- Mount a device
- Execute a script
- ...
This is the contextualization.
On Grid'5000, this datasource is a virtual disk (.iso) that contains the configurations we want.
Create a virtual disk for cloud-init
In this example, we will create a CD containing a simple contextualization configuration for cloud-init: it will change the hostname of the VM and add your public SSH key to the root account.
To help you create cloud-init configuration files, there is a script, cloud-init-example.sh, that you can copy to your node:
This script generates basic configuration files for cloud-init to add your public SSH key to the root account, so that you can SSH to the VM without a password and without using the console.
Note: The previous command assumes your SSH public key is in its default location.
You can see that 2 files were created in cloud-init-data: meta-data and user-data.
- meta-data contains configuration such as the hostname, root SSH key, instance id, etc. You can see the script wrote your SSH public key in this file.
- user-data can contain more configuration, in different formats:
  - It can be a bash script that will be executed on startup
  - It can be a YAML file that describes configuration like creating users, mounting a device, running puppet, changing resolv.conf, etc. (for other examples: https://cloudinit.readthedocs.io/en/latest/topics/examples.html# )
Now, we can generate an ISO file using the following command:
node:/tmp$ genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data
The file cloud-init-data.iso is ready to be attached to a VM.
Cloud-init will detect the disk on startup and configure the virtual machine using the information in meta-data and user-data on the CD.
Start a VM with contextualization
We will run a new VM with contextualization. First, we create a new disk image from our base image:
node:/tmp$ qemu-img create -f qcow2 -o backing_file=/tmp/debian10-x64-base.qcow2 /tmp/contextualized-domain.qcow2
We create a new contextualized-domain.xml with this content:
<domain type='kvm'>
<name>contextualized-domain</name>
<memory>2048000</memory>
<vcpu>1</vcpu>
<os>
<type arch="x86_64">hvm</type>
</os>
<clock offset="localtime"/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/tmp/contextualized-domain.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/tmp/cloud-init-data.iso'/>
<target dev='vdb' bus='virtio'/>
<readonly/>
</disk>
<interface type='bridge'>
<source bridge='br0'/>
<mac address='AA:BB:CC:DD:EE:FF'/>
</interface>
<serial type='pty'>
<source path='/dev/ttyS0'/>
<target port='0'/>
</serial>
<console type='pty'>
<source path='/dev/ttyS0'/>
<target port='0'/>
</console>
</devices>
</domain>
You can notice it is the same XML file as in the previous section, except for the added cdrom device section, which mounts the ISO in the VM as a CD-ROM.
Note: Don't forget to replace the MAC address with one of your g5k-subnets MAC addresses.
Start the guest OS and connect to it using SSH:
You can now SSH to your VM without a password, and without having to use the VM console. Notice that the hostname also changed to example-vm, as specified in the meta-data file.
Playing with cloud-init
This optional part shows more examples of what can be done with cloud-init.
Run a script on startup with user-data
The current content of user-data is :
#cloud-config
disable_root: false
By default, cloud-init disables the root account.
If you try to connect as root on the VM without enabling the root account in user-data, you will get a message saying you need to connect as the user debian (your public key will be accepted for the debian user).
This user has sudo rights.
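Re-enabling root login therefore only requires the two-line user-data shown above; recreating that file can be sketched as:

```shell
# Recreate the user-data shown above: cloud-config format with the root
# account re-enabled (cloud-init disables it by default).
mkdir -p /tmp/cloud-init-data
cat > /tmp/cloud-init-data/user-data <<'EOF'
#cloud-config
disable_root: false
EOF
head -1 /tmp/cloud-init-data/user-data    # -> #cloud-config
```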
The user-data file starts with #cloud-config, telling cloud-init that the file is in the cloud-config format. As we will see in the next part, it is a YAML file describing the configuration cloud-init has to apply on boot.
But user-data can also be a bash script, and that's what we will do here. Replace the content of user-data with:
#!/bin/bash
apt-get update && apt-get install -y lighttpd
cat << EOF > /var/www/html/index.lighttpd.html
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>VM HTTP Server</title>
</head>
<body>
<h1> Installed and configured with cloud-init </h1>
</body>
</html>
EOF
Generate the ISO file with this new configuration:
node:/tmp$ genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data
We will destroy our running VM, create a fresh disk from debian10-x64-base.qcow2, and restart it:
node:/tmp$ qemu-img create -f qcow2 -o backing_file=/tmp/debian10-x64-base.qcow2 /tmp/contextualized-domain.qcow2
After a few moments, we can try:
The script in user-data ran on startup. It installed lighttpd, a small HTTP server, and replaced the default index.html.
We can still SSH to our VM, but not as root, since we removed the option that enabled the root account in user-data:
Going further with user-data in YAML
We have seen how to use user-data as a startup script.
We will now use it in the cloud-config format: a YAML description file that triggers actions on startup.
Copy the following content to user-data and insert your public SSH key where needed. Then regenerate the ISO file:
#cloud-config
groups:
- foo
- bar
users:
- name: foo
primary-group: foo
groups: users
shell: /bin/bash
ssh-authorized-keys:
- <insert your public key here>
- name: bar
primary-group: bar
groups: users
sudo: ALL=(ALL) NOPASSWD:ALL
shell: /bin/bash
ssh-authorized-keys:
- <insert your public key here>
packages:
- lighttpd
node:/tmp :
|
genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data |
The advantage of using this format is readability. We can quickly identify what it will do:
- Create a user foo in group foo (with no sudo rights)
- Create a user bar in group bar with sudo rights
- Install the package lighttpd
You can create a new VM to test this new configuration :
node:/tmp :
|
qemu-img create -f qcow2 -o backing_file=/tmp/debian10-x64-base.qcow2 /tmp/contextualized-domain.qcow2 |
You'll be able to connect without a password as foo and bar.
For more examples of what can be done with cloud-config: http://cloudinit.readthedocs.io/en/latest/topics/examples.html
Add cloud-init to your own image
You may want to add cloud-init to your own virtual machine image so you can run custom VMs on Grid'5000.
To do so, install cloud-init in your VM before exporting it as qcow2.
You can also use virt-customize on an already existing qcow2:
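A sketch of both options follows; the image path is an example, and virt-customize ships with libguestfs-tools:

```shell
# Inside a running Debian-based VM, before exporting the image:
#   apt-get update && apt-get install -y cloud-init
# Offline, on an existing qcow2 image:
IMAGE=/tmp/my-image.qcow2    # example path, adapt to your image
if command -v virt-customize >/dev/null; then
    virt-customize -a "$IMAGE" --install cloud-init
else
    echo "virt-customize not found (Debian package: libguestfs-tools)"
fi
```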
Multi-site experiment
In this part, to illustrate what can be done using virtual machines on the standard environment, we will start two virtual machines on two sites and make them communicate over the virtualization network.
Reservation
Open 2 terminals and SSH to the frontends of 2 sites; in this example, the frontends of Luxembourg and Nancy. Then reserve two virtualization-capable nodes and two subnets, one on each site.
For the rest of this multi-site experiment, don't forget to run each command on both sites.
Network configuration
In this part, we will choose an IP for each of the 2 virtual machines.
Choose an IP & MAC pair for each VM from the output of g5k-subnets -im.
Note that g5k-subnets returns completely different information on each site. In the following, we assume that you chose 10.144.8.1 (00:16:3e:90:08:01) in Nancy, and 10.172.0.1 (00:16:3e:ac:00:01) in Luxembourg.
Instantiate your VMs
Copy a standard virtual machine image
Copy the default virtual machine image from /grid5000/virt-images/debian10-x64-base.qcow2 to /tmp on both nodes:
Configure cloud-init
To be able to SSH without a password, we will use cloud-init:
node(both):/tmp$ genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data
Create the domain.xml file
The domain.xml file contains the description of your virtual machine.
Create it on both sides and adapt it to use a MAC address provided by g5k-subnets -im. The virtual machine will get the IP associated with its MAC address:
<domain type='kvm'>
<name>buster</name>
<memory>2048000</memory>
<vcpu>1</vcpu>
<os>
<type arch="x86_64">hvm</type>
</os>
<clock offset="localtime"/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/tmp/debian10-x64-base.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/tmp/cloud-init-data.iso'/>
<target dev='vdb' bus='virtio'/>
<readonly/>
</disk>
<interface type='bridge'>
<source bridge='br0'/>
<mac address='AA:BB:CC:DD:EE:FF'/>
</interface>
<serial type='pty'>
<source path='/dev/ttyS0'/>
<target port='0'/>
</serial>
<console type='pty'>
<source path='/dev/ttyS0'/>
<target port='0'/>
</console>
</devices>
</domain>
Launch the two VMs
Enjoy !
SSH in your VMs
Install and run iperf
Finally, we will install iperf and measure the bandwidth between the two VMs:
- install iperf with apt-get;
- then, run iperf in server mode (-s parameter) on one node, and in client mode (-c parameter) on the other.
root@vm-1:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.144.8.1 port 5001 connected with 10.172.0.1 port 52389
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.09 GBytes   938 Mbits/sec

root@vm-2:~# iperf -c 10.144.8.1
------------------------------------------------------------
Client connecting to 10.144.8.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.172.0.1 port 52389 connected with 10.144.8.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes   938 Mbits/sec
Another alternative: Xen reference environments
Grid'5000 proposes Xen reference environments as an alternative to KVM on the standard environment. This last part is a quick guide to Xen: we will show how to deploy a Xen environment on nodes, create virtual machines, and use g5k-subnets for the network configuration.
Note: In Xen terminology, a domain U or domU is a virtual machine. The domain 0 or dom0 is the physical machine which hosts the domUs (in our case, the dom0 is the Grid'5000 node you deployed).
Reserve resources and deploy the Xen environment
DomU network configuration
The image debian10-x64-xen includes a pre-configured domU.
The configuration file of this VM is located at /etc/xen/domU.cfg.
Inside this file, you can specify the parameters of your virtual machine:
- kernel and initrd: Linux kernel and initrd with Xen domU support.
- vcpus: number of virtual CPUs given to the VM.
- memory: size (MB) of RAM given to the VM.
- root: where the root partition is located.
- disk: which files contain the partitions of your virtual host.
- name: the hostname, as displayed by xl list and as given by the system itself.
- vif: the configuration of the domU's network interfaces.
- on_poweroff, on_reboot, on_crash: how the Xen hypervisor should react to these events.
You can find the official documentation and other options here : http://xenbits.xen.org/docs/4.9-testing/man/xl.cfg.5.html
The vif line configures the domU's network. It usually contains:
- a MAC address
- the bridge name, in our case br0 which is a bridge that includes the production network interface.
Use the default domU
Select 1 IP from your reserved subnet:
10.172.4.1 00:16:3E:AC:04:01
Edit the file /etc/xen/domU.cfg and replace the MAC address. Then start the domU.
Name       ID   Mem  VCPUs  State   Time(s)
Domain-0    0   976      8  r-----     30.7
domU        1   512      1  -b----      4.7
The example VM is already configured to accept the debian10-x64-xen key, so you can SSH to it without a password and without cloud-init:
Create a new domU
Select another IP and MAC address, and create a new domU with the command xen-create-image:
...
10.172.4.3 00:16:3E:AC:04:03
...
node$ xen-create-image --dir=/tmp/ --size=10G --hostname=domU2 --role=udev --genpass=0 --password=grid5000 --mac=00:16:3E:AC:04:03 --dhcp --bridge=br0 --memory=512M
At this point, a new domU configuration file (/etc/xen/domU2.cfg) and a new disk image (/tmp/domains/domU2/disk.img) have been generated.
Due to the default Xen configuration in the debian10-x64-xen environment, the host's SSH key was copied during image generation, so you can SSH as root to domU2 without a password:
Using Grid'5000 qcow2 images
It is possible to run a VM with a Grid'5000 environment image. First, copy the image and the cloud-init setup script to the node (the dom0):
frontend$ scp /grid5000/virt-images/debian10-x64-min.qcow2 /grid5000/virt-images/cloud-init-example.sh root@node:/tmp
To be able to SSH to the VM with your public key, run the following commands. Copy your SSH key, then generate the cloud-init ISO:
node:/tmp$ genisoimage -output cloud-init-data.iso -volid cidata -joliet -rock cloud-init-data/user-data cloud-init-data/meta-data
Then we create a domU config file /etc/xen/g5k_image.cfg :
#
# Configuration file for the Xen instance domU, created
# by xen-tools 4.7 on Fri Jun 1 00:48:00 2018.
#
#
# Kernel + memory size
#
kernel = '/vmlinuz'
extra = 'elevator=noop'
ramdisk = '/initrd.img'
vcpus = '1'
memory = '512'
# Disk device(s).
#
root = '/dev/xvda1 ro'
disk = [
'format=qcow2, vdev=xvda, access=rw, target=/tmp/debian10-x64-min.qcow2',
'format=raw, vdev=hdc, access=ro, devtype=cdrom, target=/tmp/cloud-init-data.iso'
]
# Hostname
#
name = 'g5k_image'
# Networking
#
dhcp = 'dhcp'
vif = [ 'mac=MAC,bridge=br0' ]  # replace MAC with an address from g5k-subnets
# Behaviour
#
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
The important parts in this configuration file are:
- the two drives: one for the image of the environment and one for cloud-init-data.iso
- the MAC address, which you need to change
Finally, run the VM :
Common administrative commands
- List the running domUs with the following command:
- Connect to a domU using the Xen console:
- Start a domU:
- Shut down a domU properly:
- Instantly terminate a domU:
- Print information about the dom0:
- Show real-time monitoring information:
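These administrative tasks map onto the xl toolstack roughly as follows (a sketch; xl is the CLI documented by the xl.cfg page linked earlier, and domU is the name from the .cfg file):

```shell
# Sketch of the xl equivalents of the tasks above (run on the Xen dom0).
DOMU=domU
if command -v xl >/dev/null; then
    xl list                          # list the running domUs (and dom0)
    xl console "$DOMU"               # connect to a domU console
    xl create /etc/xen/${DOMU}.cfg   # start a domU
    xl shutdown "$DOMU"              # shut down a domU properly
    xl destroy "$DOMU"               # instantly terminate a domU
    xl info                          # print information about the dom0
    xl top                           # real-time monitoring
else
    echo "xl not found; run these on the deployed Xen dom0"
fi
```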
Going further
Please, refer to the official Xen documentation and Debian documentation.