Booting and Using Virtual Machines on Grid'5000



Overview

The goal of this tutorial is to introduce the use of a large number of virtual machines over the Grid'5000 platform.

After a short part about Grid'5000 specifications and the requirements of this session, users will be presented with a set of scripts and tools built on top of the Grid'5000 software stack to deploy and interact with a significant number of virtual machines.

These instances can then be used at the user's convenience to investigate particular concerns.

Grid'5000 specifications

When booting KVM instances over the platform, we need physical hosts that support hardware virtualisation. During the deployment, we are in charge of configuring the IP network used by the virtual machines, through the network isolation capabilities of Grid'5000 and the subnet reservation system. The latter enables the use of IP networks ranging from /22 to /16 subnets, and ensures the communication with the various instances.
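
For instance, a subnet can be reserved together with nodes directly through OAR; the commands below are only an illustrative sketch (the walkthrough in the next sections relies on oargridsub instead), and g5k-subnets is the helper usually available on the frontends to list the subnets attached to the current job once the interactive shell of the job is open.

Terminal.png frontend:
oarsub -I -l slash_22=1+nodes=2,walltime=2:00:00
g5k-subnets -sp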

Deployment

Booking the resources

The first step is to retrieve the latest version of the scripts

Terminal.png frontend:
scp -r /home/alebre/dynVM__TEMPLATE__/ DESTINATION_FOLDER


Move to the folder containing the code

Terminal.png frontend:
cd DESTINATION_FOLDER


Book the Grid'5000 resources according to the desired start time and duration of your experiment. In the following, the master site is the site from which you book the resources and execute the scripts. This script returns an oargridsub request that includes the reservation of the nodes, a virtual network and a subnet

Terminal.png frontend:
./getmaxgridnodes.sh 2012-07-24 19:00:00 03:00:00


Execute the oargridsub request returned by getmaxgridnodes.sh (don't forget to redirect its output as shown in the example). The master site is sophia in the following example.

Terminal.png frontend:
oargridsub -t deploy -w 3:00:00 -s "2012-07-24 19:00:00" sophia:rdef="{\\\"type='kavlan-global'\\\"}/vlan=1+/nodes=5+/slash_18=1",rennes:rdef=/nodes=5,nancy:rdef=/nodes=5,lille:rdef=/nodes=5 > ~/oargrid.out
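
If you want to check the reservation before going further, the grid job identifier can be read back from the redirected output; the line below is just a convenience and assumes the usual "Grid reservation id" line printed by oargridsub.

Terminal.png frontend:
grep "Grid reservation id" ~/oargrid.out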


Deploying and configuring the physical machines

Move to the Flauncher directory

Terminal.png frontend:
cd Flauncher


Get the list of nodes and connect to the OAR grid job

Terminal.png frontend:
./grid_get_nodes.sh


You are now connected to the grid OAR job


Display the list of nodes

Terminal.png frontend:
less ~/gridnodes-uniq


Move to the Flauncher directory

Terminal.png frontend:
cd Flauncher


Deploy the nodes

Terminal.png frontend:
./grid_deploy_nodes.sh


Deploy the VLAN and set up the hypervisor (warning: use lowercase letters and repeat the master site as the first site)

Terminal.png frontend:
./vlan_auto_launch.sh sophia "sophia nancy lille rennes"


We use a service node during the process.

Retrieve the service node

# take the first sophia entry from the list of deployed machines
service_node=$(sed -n '/sophia/p' ./log/machines-list.txt | head -n1)

Connect to the service node

Terminal.png frontend:
ssh root@$service_node


Figure 1: FLauncher Process

Usage

Retrieving the infrastructure information

The service node contains all the information about the deployed infrastructure.

This information can be accessed through various getter functions, according to the desired information.

Example: Retrieving the mapping between the reserved IPs and the nodes

Terminal.png service node:
./get_names_and_ip_from_hosts.sh


Please consult the README file for a complete description of the available getters.

Terminal.png service node:
less README


Creation of the Virtual Machines

Create a single virtual machine on a host

It is possible to create a single virtual machine by using the following script.

The creation requires an IP address (IP), an amount of RAM (MEMORY), the current site (SITE) and node (NODE), and an arbitrary ID (ID).

Terminal.png on physical host:
./create_vm_local.sh -i IP -m MEMORY -n SITE -s NODE -v ID
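
For instance, a call could look like the line below; the IP address, memory size, site and node name are purely illustrative and must be replaced by an address taken from your reserved subnet and a host belonging to your reservation.

Terminal.png on physical host:
./create_vm_local.sh -i 10.158.8.10 -m 1024 -n sophia -s suno-3.sophia.grid5000.fr -v 1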


Create all the virtual machines from a service node

When the boot involves a large number of virtual machines, it is necessary to describe the desired infrastructure in a configuration file to enable an automatic and remote deployment, as in the example below.

#list of nodes
node0 4 20 8
node1 4 20 8
node2 4 20 8
#list of VMs
vm00 1 1 1 2147483647
vm01 1 1 1 2147483647
vm02 1 1 1 2147483647
vm03 1 1 1 2147483647
vm04 1 1 1 2147483647
vm05 1 1 1 2147483647
#initial configuration
node0 vm00 vm03
node1 vm01 vm04
node2 vm02 vm05
#end of configuration

In this particular configuration, we declare 3 physical nodes and 6 virtual machines.

The last part of the configuration describes a simple mapping of 2 virtual machines on each node.

(each nodeX must designate the physical hostname of a node of your infrastructure)
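
For a larger infrastructure, writing this file by hand quickly becomes tedious. The script below is only a sketch showing how such a file could be generated from the list of deployed nodes; it assumes the node list ~/gridnodes-uniq is available where you run it, and the per-node values (4 20 8), the per-VM values (1 1 1 2147483647) and the number of virtual machines per node are copied from the example above and should be adapted to your experiment.

#!/bin/bash
# generate a VM configuration file from the list of deployed nodes (sketch)
NODEFILE=~/gridnodes-uniq   # one physical hostname per line
VMS_PER_NODE=2
OUT=conf.txt
echo "#list of nodes" > $OUT
while read node; do
  echo "$node 4 20 8" >> $OUT
done < $NODEFILE
echo "#list of VMs" >> $OUT
i=0
while read node; do
  for j in $(seq 1 $VMS_PER_NODE); do
    printf "vm%02d 1 1 1 2147483647\n" $i >> $OUT
    i=$((i+1))
  done
done < $NODEFILE
echo "#initial configuration" >> $OUT
i=0
while read node; do
  line="$node"
  for j in $(seq 1 $VMS_PER_NODE); do
    line=$(printf "%s vm%02d" "$line" $i)
    i=$((i+1))
  done
  echo "$line" >> $OUT
done < $NODEFILE
echo "#end of configuration" >> $OUT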


We can now create virtual machines on each host according to the specified configuration file, from the service node.

Terminal.png service node:
./remote_create_vms.sh CONF_FILE


All the virtual instances are started at the same time, using a hierarchical structure among the physical nodes. The correlation between names and IPs is stored in a dedicated file propagated on each physical node. This allows us to identify and communicate with all the virtual machines.

At the end of the operation, the instances previously described are booted and available for experiments.
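
Once the boot is finished, a quick availability check can be run from the service node. The loop below is only a sketch: it assumes a two-column "name IP" listing of the virtual machines, such as the mapping file mentioned above, and the file name vms.txt is an arbitrary placeholder.

# ping every virtual machine once and report the unreachable ones (sketch)
# vms.txt is assumed to contain one "name ip" pair per line
while read name ip; do
  ping -c 1 -W 2 "$ip" > /dev/null || echo "$name ($ip) is unreachable"
done < vms.txt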


Interaction with the Machines

We need to be able to control, monitor and communicate with both the host OSes and the guest instances spread across the infrastructure at any time.

For that purpose, the following scripts are based on hierarchical communication structures to ensure large-scale communication with the physical and/or virtual instances.

Execute remote commands

Execute a command on a list of nodes using a tree distribution

Terminal.png service node:
./mykash.sh nodefile command timeout
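
For example, the kernel version of every deployed node can be checked with a call such as the one below; the node file, the command and the 30-second timeout are illustrative values only.

Terminal.png service node:
./mykash.sh ~/gridnodes-uniq "uname -r" 30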


Copy files

Copy a file to a list of nodes using a tree distribution


Terminal.png service node:
./mykaput.sh nodefile file_to_upload remote_path tree_arity
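
As an illustration, the configuration file of the previous section could be pushed to /tmp on every node using a binary tree; all the argument values below are examples only.

Terminal.png service node:
./mykaput.sh ~/gridnodes-uniq conf.txt /tmp 2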


In Practice : 10240 Virtual Machines on 512 Physical Hosts

Considering that physical machines must support hardware virtualization to start KVM instances, the largest experiment conducted up to now involved 10240 KVM instances upon 512 nodes spread over 4 sites and 10 clusters. The whole setup is performed in less than 30 minutes, with about 10 minutes spent on the deployment of the nodes and 5 minutes on the installation and configuration of the required packages on the physical hosts, while the rest is dedicated to the booting of the virtual machines. This work opens the door to manipulating virtual machines throughout a distributed infrastructure the same way traditional operating systems handle processes on a local node.


Useful Tips

Booking grid resources

  • Build the request that retrieves the maximum number of nodes available during the defined time slot on specific clusters
Terminal.png frontend:
disco -s '2012-07-24 19:00:00' -w 1 griffon graphene

Deployments

  • Ensure a minimum number of deployed nodes

To ensure that 95% (rounded down) of the reserved nodes are correctly deployed (3 attempts max), instead of running:

Terminal.png frontend:
kadeploy3 -e squeeze-x64-prod -f $OAR_NODEFILE -k


Run:

# number of reserved nodes
NB_NODES=$(sort -u $OAR_NODE_FILE | wc -l)
# require at least 95% of them (rounded down) to be correctly deployed
MIN_NODES=$(($NB_NODES * 95/100))
/grid5000/code/bin/katapult3 --deploy-env squeeze-x64-prod --copy-ssh-key --min-deployed-nodes $MIN_NODES --max-deploy-runs 3


Communication

  • About the Saturation of ARP tables

ARP_table_overflow_tuning

TCP_bandwidth_tuning
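
When thousands of virtual machines share the same VLAN, the ARP tables of the physical hosts can overflow; the pages above detail the recommended tuning. As a rough illustration, raising the Linux neighbour table thresholds on the hosts looks like the commands below (the values are examples only and should be sized according to your subnet):

sysctl -w net.ipv4.neigh.default.gc_thresh1=4096
sysctl -w net.ipv4.neigh.default.gc_thresh2=8192
sysctl -w net.ipv4.neigh.default.gc_thresh3=16384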


Tools


Contact

  • Adrien Lèbre, Associate Professor at Mines Nantes, Ecole des mines de Nantes, Ascola Team | mail
  • Daniel Balouek, Ingénieur Héméra, ENS Lyon, Inria, Avalon Team | mail
  • Flavien Quesnel, PhD student, Ecole des mines de Nantes, Ascola Team | mail