Network reconfiguration tutorial

Note.png Note

This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.

Introduction

Warning.png Warning

This tutorial is still a work in progress!

This tutorial aims to present an example use of Grid'5000 in which we will configure a customized network topology for an experiment, using KaVLAN.

KaVLAN is a Grid'5000 tool that allows a user to manage VLANs on the platform. Behind the scenes, KaVLAN changes the configuration of the network switches to set the VLAN membership (VLAN ID) of the ports corresponding to the network interfaces of one or more nodes. This mechanism provides layer 2 network isolation for experiments.
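As a quick reference, here is a sketch of the kavlan invocations we will rely on later in this tutorial (see the KaVLAN page for the full list of options; the node name below is just an example):

# Show the VLAN IDs reserved by your current OAR job
kavlan -V
# Move a node's (default) network interface into VLAN 16
kavlan -s -i 16 -m paravance-23.rennes.grid5000.fr
# Put the node back into the production VLAN
kavlan -s -i DEFAULT -m paravance-23.rennes.grid5000.fr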

Three kinds of KaVLAN VLANs are available on Grid'5000; you can find more information on the KaVLAN page. In this tutorial, we will only use global and local VLANs (no routed VLANs).

Topology setup

Reservation of the resources

We will work both in Rennes and Nancy. Please open 2 terminal windows, one for Rennes and one for Nancy.

In Rennes (Rennes terminal)

A global VLAN spans all of Grid'5000, so we only need to reserve it on one site. We will use the Rennes site for that. We also reserve a local VLAN.

First, we log in to the Rennes frontend in our first terminal and create our resource reservation:

Terminal.png rennes:frontend:
oarsub -l {"type='kavlan-global'"}/vlan=1+{"type='kavlan-local'"}/vlan=1+{"cluster='paravance'"}/nodes=3,walltime=3 -t deploy "sleep 10d"
Terminal.png rennes:frontend:
oarsub -C job_ID
Note.png Note

Using a passive job here is not compulsory; you could do this with an interactive one ("oarsub -I"). However, later we will reconfigure some nodes' interfaces, and mistakes may make nodes unreachable and freeze SSH sessions. With a passive job, we can close terminals without losing the job. Alternatively, you can submit an interactive job in a first terminal and continue the tutorial in another terminal, starting with the "oarsub -C" command.

With this reservation, we have 1 kavlan-local, 1 kavlan-global and 3 nodes of the paravance cluster. Please note that the nodes of the paravance cluster have 2 network interfaces, which is required for this tutorial.

Note.png Note

To get the list of clusters that have nodes with several network interfaces, you can look at the Special_Features page.
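Alternatively, the Grid'5000 reference API describes each node's network adapters, so you can check from the command line; a sketch (the jq filter on the network_adapters field is an assumption, adapt it to the node you are interested in):

# Count the Ethernet adapters of paravance-1 according to the reference API
curl -s https://api.grid5000.fr/sid/sites/rennes/clusters/paravance/nodes/paravance-1.json \
  | jq '[.network_adapters[] | select(.interface == "Ethernet")] | length'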

For simplicity, let's call our 3 nodes in Rennes node1, node2 and node3 (in your case, node1 = <hostname of your first node> and so on; for instance, you might have node1 = paravance-23).

Get your VLAN IDs:

Terminal.png rennes:frontend:
kavlan -V

You can tell which VLAN ID is global and which one is local using the following table:

KaVLAN name in OAR    type      first ID    last ID
kavlan-local          local     1           3
kavlan                routed    4           9
kavlan-global         global    10          21

(see KaVLAN for more details)
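If you need this mapping in a script, the table above translates into a small shell helper (a sketch based only on the ID ranges listed above):

#!/bin/bash
# Classify a KaVLAN ID according to the ranges of the table above
vlan_type() {
  local id=$1
  if   [ "$id" -ge 1  ] && [ "$id" -le 3  ]; then echo local
  elif [ "$id" -ge 4  ] && [ "$id" -le 9  ]; then echo routed
  elif [ "$id" -ge 10 ] && [ "$id" -le 21 ]; then echo global
  else echo unknown
  fi
}
vlan_type 16   # prints "global"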

In Nancy (Nancy terminal)

The global VLAN was already reserved in Rennes, so we do not need to reserve it again in Nancy. Furthermore, we will not use any local VLAN in Nancy, so we just do a classic deploy job:

In our second terminal, we log in to the Nancy frontend and execute:

Terminal.png nancy:frontend:
oarsub -l nodes=1,walltime=3 -t deploy "sleep 10d"
Terminal.png nancy:frontend:
oarsub -C job_ID

For simplicity's sake, let's call our node in Nancy node_nancy (in your case, node_nancy = <hostname of your node>; for instance, you might have node_nancy = graphite-3).

Deployment of our operating system for the experiment

In Rennes (Rennes terminal)

Now we will deploy our nodes in Rennes with the Debian jessie minimal image:

Terminal.png rennes:frontend:
kadeploy3 -e jessie-x64-min -k -f $OAR_NODEFILE
Note.png Note

If the nodes you deploy are all meant to go in the same VLAN, you can also directly use the --vlan <VLAN ID> option of kadeploy3. But in our case the 3 deployed nodes don't go in the same VLAN, so we will put them in their VLANs later.
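For instance, if all three nodes were meant to join the same local VLAN, deployment and VLAN assignment could be combined in one command (a sketch, not what we do in this tutorial):

Terminal.png rennes:frontend:
kadeploy3 -e jessie-x64-min -k -f $OAR_NODEFILE --vlan Local_Vlan_Id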

In the Debian jessie image, the default sshd configuration doesn't allow password authentication; as a result, you cannot connect from a deployed node to another node with a password. There are several solutions to overcome that issue:

kaconsole

Kaconsole is a tool provided by Grid'5000 that allows you to connect to the serial console of a node, much the same way you would connect to a virtual console on a GNU/Linux workstation (CTRL+ALT+F1).

Terminal.png frontend:
kaconsole3 -m node_hostname

The credentials to log in on the console are: "root" / "grid5000".

Add an SSH key pair on every node

Even though it is very useful as a last resort, using the console is not very convenient, so we will instead push SSH keys to the nodes.

You can use the following bash script from the reservation shell (where you ran the oarsub command) to generate an SSH key pair and deploy it to the root user on your nodes:

#!/bin/bash
# Generate a temporary RSA key pair, push the private key to root's ~/.ssh/
# on every reserved node, and authorize the matching public key there,
# so that the nodes can SSH to each other as root.
temp=$(mktemp -d XXXXX)
ssh-keygen -t rsa -f "$temp/id_rsa" -P ""
for i in $(uniq "$OAR_NODEFILE") ; do
  scp "$temp/id_rsa" root@"$i":.ssh/ && ssh-copy-id -i "$temp/id_rsa.pub" root@"$i"
done
rm -r "$temp"

It generates a temporary RSA key pair, copies the private key to each node, and finally authorizes connections using that key.

In Nancy (Nancy terminal)

In Nancy we have a single node; we just deploy it:

Terminal.png nancy:frontend:
kadeploy3 -e jessie-x64-min -k -f $OAR_NODEFILE

Network configuration

In Nancy (Nancy terminal)

We will assign a static IP to node_nancy and put it in the global VLAN we reserved. To do so, we will configure its network interface.

The question is: which IP will we give to the node? If we set an arbitrary IP from a random IP range, the node won't be reachable anymore: we would try to contact it via the site router (default route), and the router wouldn't know where to send these packets either. To address this problem, Grid'5000 is configured with IP ranges dedicated to each VLAN. You can check these ranges on the Golden Rules page.


Besides, giving an arbitrary IP to a node causes a naming issue, because the DNS will obviously not be aware of this IP change, so we would no longer be able to run commands using the name node_nancy. Fortunately, Grid'5000 provides static DNS entries for each node in each VLAN, so that nodes can always be reached by name instead of by IP. These names follow the pattern node_name-kavlan-VLAN_ID. For instance, the built-in name for the node graphite-3 in the global VLAN 16 is "graphite-3-kavlan-16".

Consequently, it is much more convenient to configure node_nancy with the IP already associated with its name in the DNS. This IP is given by the 'host' command:

Terminal.png nancy:frontend:
host node_nancy-kavlan-Global_Vlan_Id

Example:

Terminal.png nancy:frontend:
host graphite-3-kavlan-16
graphite-3-kavlan-16.nancy.grid5000.fr has address 10.27.200.239


So let's configure the interface with this IP (we assume here that it is eth0, but you can check which interface is in use on the node with the "ip address" command) and the "/18" netmask mentioned on the Golden Rules page.

Terminal.png nancy:frontend:
ssh root@node_nancy
Terminal.png nancy:node:
vim /etc/network/interfaces
auto eth0
iface eth0 inet static
  address <node_nancy's IP in global VLAN>
  netmask 255.255.192.0
Note.png Note

Setting a new IP directly on your node while it is still in the production VLAN is generally a bad idea, because you could create address conflicts with other parts of the network.

So, we exit the node to put it in the global VLAN before applying the new network configuration. Let's try to do so with a naive approach:

Terminal.png nancy:frontend:
kavlan -s -i Global_Vlan_Id -m node_nancy.nancy.grid5000.fr --verbose

Now, we want to apply the new configuration on node_nancy:

Terminal.png nancy:frontend:
ssh root@node_nancy "service networking restart"

But this command should time out! That makes sense: the node is no longer reachable from the frontend's network. To put node_nancy back in the production VLAN, run the following on the frontend:

Terminal.png nancy:frontend:
kavlan -s -i DEFAULT -m node_nancy.nancy.grid5000.fr --verbose

And of course, doing it the other way around would fail too: either way, once the node is in the global VLAN it is unreachable from the frontend (other than through kaconsole3), since the global VLAN is not routed.

A better approach is to use the 'at' command to schedule the network reconfiguration so that it happens after the node has been moved to the VLAN:

Terminal.png nancy:frontend:
ssh root@node_nancy "apt-get --yes install at && echo '(service networking restart ; killall dhclient)' | at now + 1 minute" && kavlan -s -i Global_Vlan_Id -m node_nancy.nancy.grid5000.fr --verbose
Note.png Note

The "killall dhclient" is here because, in order to reduce the number of steps in this tutorial, we modified the interface configuration before switching it down, and therefore restarting the networking service would have let the DHCP client running. The clean method would have been, with kaconsole3, to switch off the node's network (hence the use of kaconsole3), only then change the interface configuration, and restart the network.

In Rennes (Rennes terminal)

We need to put node1 in the local VLAN and restart the networking service so that it gets a new IP address from the DHCP service (which is provided in every kavlan-local VLAN). As before, we need the 'at' command to do that, so let's install it on all the nodes using TakTuk. Moreover, the nodes won't have Internet access after being put in their VLANs, so we also install tcpdump, which we will use later in the tutorial:

Terminal.png rennes:frontend:
taktuk -s -l root -f $OAR_FILE_NODES broadcast exec [ "apt-get update; apt-get --yes install at tcpdump" ]
Terminal.png rennes:frontend:
ssh root@node1 "echo 'service networking restart' | at now + 1 minute" && kavlan -s -i Local_Vlan_Id -m node1.rennes.grid5000.fr
Warning.png Warning

Be careful not to mix up the syntax of kavlan with kadeploy's: here the '-m' parameter must be followed by the node's fully qualified hostname.

Since we will need OpenVSwitch on node2 in the second part of this tutorial, we install the openvswitch-switch package before the node leaves the production VLAN, as it won't have Internet access afterwards:

Terminal.png rennes:frontend:
ssh root@node2 "apt-get --yes install openvswitch-switch"

We can now put node2 in between the local and global VLANs: the first interface will be on the local VLAN, while the second one will be on the global VLAN.

As above, the first interface, which is in the local VLAN, will get an IP from the DHCP service provided there. But we have to set up a static IP configuration for the second interface, eth1 (according to the Grid'5000 API, eth1 is indeed cabled to the switch: see eth1 in https://api.grid5000.fr/sid/sites/rennes/clusters/paravance/nodes/paravance-1.json?pretty).

For that purpose we need to edit /etc/network/interfaces on node2:

Terminal.png rennes:frontend:
ssh root@node2
Terminal.png rennes:node2:
vi /etc/network/interfaces
auto eth1
iface eth1 inet static
        address 192.168.1.2
        netmask 255.255.255.0

We now switch the first interface to the local VLAN and the second interface to the global VLAN:

Terminal.png rennes:frontend:
ssh root@node2 "echo 'service networking restart' | at now + 1 minute" && kavlan -s -i Local_Vlan_Id -m node2.rennes.grid5000.fr && kavlan -s -i Global_Vlan_Id -m node2-eth1.rennes.grid5000.fr

First test of the topology

We should now have a setup as follows:

TP-Kavlan-School2016.png

Now we should be able to connect to node2 with SSH from the kavlan gateway, and ping 192.168.1.1, which is the node in Nancy.
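A minimal check could look like this (a sketch: it assumes the local VLAN gateway is reachable from the frontend as kavlan-<Local_Vlan_Id>, and uses the hostname-kavlan-ID naming scheme described earlier):

# From the Rennes frontend, hop through the local VLAN gateway...
ssh kavlan-Local_Vlan_Id
# ...then reach node2 by its name inside the local VLAN, and ping node_nancy
ssh root@node2-kavlan-Local_Vlan_Id.rennes.grid5000.fr
ping 192.168.1.1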

Setting up the routing between the VLANs

node1 and node_nancy are in two different VLANs (node1 is in a local VLAN, while node_nancy is in a global VLAN). As a result, it is for now impossible to send a packet between node1 and node_nancy.

To enable that, we will set up a route between the local VLAN and the global VLAN (192.168.1.0/24).

Since node2 has 2 network interfaces, one in each of the VLANs, we will configure it to act as a router (sometimes also called a gateway).

Note.png Note

To find out the IP subnet used in the local VLAN, we can connect to node1, node2 or kavlan-ID and enter the command: ip route

Warning.png Warning

For ease of writing, we will assume here that this network is 192.168.192.0/20.

First, we have to enable routing on node2:

Terminal.png rennes:node2:
sysctl -w net.ipv4.ip_forward=1

Now, if an IP packet with a destination in a known network is sent to node2, it will be forwarded.
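Note that sysctl -w only changes the running kernel, so the setting is lost at reboot; if you want it to survive a reboot, you could persist it (a sketch):

# Persist IP forwarding across reboots (optional)
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p   # reload /etc/sysctl.conf and apply the setting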

But node1 doesn't know how to reach node_nancy's network, and vice versa. Let's add a route on each node:

Terminal.png rennes:node1:
ip route add 192.168.1.0/24 via IP_of_node2
Warning.png Warning

Here, IP_of_node2 is the IP address of node2 in the 192.168.192.0/20 subnet

Terminal.png nancy:node:
ip route add 192.168.192.0/20 via 192.168.1.2
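You can check that the routes were taken into account by printing each node's routing table (a sketch; the gateway IP shown is just an example from our setup):

# On node1: the route to node_nancy's network should now go through node2
ip route
# expected to contain a line such as:
#   192.168.1.0/24 via 192.168.200.8 dev eth0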

Testing

We first check if packets are able to find their way via our router.

We run on node1 and node2 (in Rennes) the following commands:

Terminal.png rennes:node2:
tcpdump -i eth0 icmp

(tcpdump is a network traffic sniffer, here filtering ICMP traffic on eth0)

Terminal.png rennes:node1:
ping 192.168.1.1

In the first terminal, you should see that the ICMP packets are forwarded by node2:

IP 192.168.200.7 > 192.168.1.1: ICMP echo request, id 4270, seq 1, length 64
IP 192.168.1.1 > 192.168.200.7: ICMP echo reply, id 4270, seq 1, length 64
IP 192.168.200.7 > 192.168.1.1: ICMP echo request, id 4270, seq 2, length 64
IP 192.168.1.1 > 192.168.200.7: ICMP echo reply, id 4270, seq 2, length 64
IP 192.168.200.7 > 192.168.1.1: ICMP echo request, id 4270, seq 3, length 64
IP 192.168.1.1 > 192.168.200.7: ICMP echo reply, id 4270, seq 3, length 64

If IP forwarding were disabled on node2 (set net.ipv4.ip_forward=0 with the sysctl command shown above), nothing would appear in tcpdump and ping would report errors.

We also check the packet route with traceroute from node1:

Terminal.png rennes:node1:
traceroute 192.168.1.1

We should see two hops: the intermediate router and the target:

traceroute to 192.168.1.1 (192.168.1.1), 30 hops max, 60 byte packets
 1  192.168.200.8 (192.168.200.8)  0.136 ms  0.122 ms  0.115 ms
 2  192.168.1.1 (192.168.1.1)  25.612 ms  25.617 ms  25.611 ms

The first hop is from node1 to node2, and the second one is from node2 to node_nancy.

With tcpdump we can also check that the nodes are completely isolated from the production VLAN (and from any other one):

Terminal.png rennes:node1:
tcpdump port not 22

On node1 (over a 12-second period):

14:26:15.107927 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
14:26:17.109436 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
14:26:19.108669 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
14:26:21.108675 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
14:26:23.108669 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
14:26:25.108654 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
14:26:27.108674 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43

The only frames received are spanning tree frames, from the switch.

In comparison, this is what can be captured on a node in the production VLAN in just one second:

14:27:43.920934 IP paravance-60.rennes.grid5000.fr.38784 > dns.rennes.grid5000.fr.domain: 65121+ PTR? 5.98.16.172.in-addr.arpa. (42)
14:27:43.921384 IP dns.rennes.grid5000.fr.domain > paravance-60.rennes.grid5000.fr.38784: 65121* 1/1/0 PTR parapide-5.rennes.grid5000.fr. (103)
14:27:43.921510 IP paravance-60.rennes.grid5000.fr.49250 > dns.rennes.grid5000.fr.domain: 48890+ PTR? 111.111.16.172.in-addr.arpa. (45)
14:27:43.921816 IP dns.rennes.grid5000.fr.domain > paravance-60.rennes.grid5000.fr.49250: 48890* 1/1/0 PTR kadeploy.rennes.grid5000.fr. (104)
14:27:44.017208 ARP, Request who-has parapide-5.rennes.grid5000.fr tell dns.rennes.grid5000.fr, length 46
14:27:44.201278 IP6 fe80::214:4fff:feca:9470 > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
14:27:44.201416 IP paravance-60.rennes.grid5000.fr.34416 > dns.rennes.grid5000.fr.domain: 7912+ PTR? 6.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.f.f.ip6.arpa. (90)
14:27:44.284641 ARP, Request who-has parapide-9.rennes.grid5000.fr tell kadeploy.rennes.grid5000.fr, length 46
14:27:44.307171 ARP, Request who-has parapide-5.rennes.grid5000.fr tell metroflux.rennes.grid5000.fr, length 46
14:27:44.398978 IP dns.rennes.grid5000.fr.domain > paravance-60.rennes.grid5000.fr.34416: 7912 NXDomain 0/1/0 (160)

Here we see ARP requests, DNS messages, multicast reports…

Communication without routing: OpenVSwitch

Setup

Our goal is to allow 2 nodes from 2 different (physical) VLANs to communicate without routing. To do this, the nodes on each side must be in the same IP network, and for that our two layer 2 VLANs must be bridged. In this tutorial, we will not use the Linux bridge (the brctl command) but OpenVSwitch, which provides enhanced functionality.

First we need to change the IP of node_nancy to an IP in the local VLAN's subnet (192.168.192.0/20 in our case), different from the IPs of node1 and node2 of course; we can use 192.168.200.2 for instance. Then restart the networking service.
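Since node_nancy is now in the non-routed global VLAN, the frontend cannot SSH to it directly anymore; one way to apply this change is through kaconsole3 (a sketch, assuming the interface is still eth0 and reusing the /20 netmask of the local VLAN's subnet):

# From the Nancy frontend, open the node's serial console (login: root / grid5000)
kaconsole3 -m node_nancy
# Then, on the node, update /etc/network/interfaces:
#   iface eth0 inet static
#     address 192.168.200.2
#     netmask 255.255.240.0
# and apply the new configuration:
service networking restart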

We already installed OpenVSwitch on our router node, node2; we can now use it.

Warning.png Warning

node2 will lose its IP address, so the SSH session will be lost; use Kaconsole instead for the following steps.

Terminal.png rennes:frontend:
kaconsole3 -m node2
  • Create the bridge
Terminal.png rennes:node2:
ovs-vsctl add-br OVSbr
  • Remove IPs on both interfaces eth0 and eth1:
Terminal.png rennes:node2:
ifconfig eth0 0
Terminal.png rennes:node2:
ifconfig eth1 0

Add eth0 and eth1 to the bridge:

Terminal.png rennes:node2:
ovs-vsctl add-port OVSbr eth0
Terminal.png rennes:node2:
ovs-vsctl add-port OVSbr eth1
Tests

Now that it is done, you should be able to ping node_nancy (with its new IP!).

Also if you run traceroute, you will notice that there is only one hop between node1 and node_nancy.

Flow control

You can use OpenVSwitch to manage flows. For example, you can DROP all packets from a given IP with this command:

Terminal.png rennes:node2:
ovs-ofctl add-flow OVSbr "in_port=1,ip,nw_src=172.20.0.1,actions=drop"

"in_port=1" refers to eth0 in our case. You can list all the ports in the bridge with:

Terminal.png rennes:node2:
ovs-ofctl show OVSbr

You can also DROP all pings to a specific IP from a specific port:

Terminal.png rennes:node2:
ovs-ofctl add-flow OVSbr "in_port=2,ip,nw_dst=192.168.1.1,nw_proto=1,actions=drop"

And the following command will display all your flow rules:

Terminal.png rennes:node2:
ovs-ofctl dump-flows OVSbr
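To undo a rule during your tests, you can also delete the flows matching given fields (a sketch):

# Delete the flow rules matching a field (here, everything coming in on port 1)
ovs-ofctl del-flows OVSbr "in_port=1"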
Note.png Note

If you want to know more about flow syntax, go to this man page and look for the "Flow Syntax" section.

Automating the configuration with TopoMaker

TopoMaker is a brand new tool on Grid'5000 (the software is still in beta testing for now), which allows you to automate all the steps of the "Topology setup" part (except the last paragraph about routing).

There is no documentation yet, but we will start from an example to build the same topology as in the first sections.

TopoMaker takes an RSpec XML file as input. This format is inspired by the syntax used in CloudLab (another experimental testbed). Here is the example we will use:

                                                                                                                                                               
<?xml version="1.0" encoding="UTF-8"?>
<rspec>
  <node client_id="node-2">
    <interface client_id="interface-0"/> <!-- No IP specified means DHCP -->
    <interface client_id="interface-1">
      <ip address="192.168.1.3" type="ipv4" netmask="255.255.255.0" />
    </interface>
    <sliver_type name="raw-pc">
      <disk_image name="jessie-x64-min" /> <!-- Just for the example: the default OS chosen by TopoMaker is jessie-x64-min -->
    </sliver_type>
    <auto_install>
      <apt name="tcpdump"/>
      <apt name="openvswitch-switch"/>
    </auto_install>
  </node>
  <node client_id="node-1">     <!-- Here we will have DHCP on jessie-x64-min -->
    <interface client_id="interface-2"/>
  </node>
  <node client_id="node-3">     <!-- node-3 is equivalent to our node_nancy; here it will be on the same site, since as we saw this does not matter with a global VLAN -->
    <interface client_id="interface-4">
      <ip address="192.168.1.1" type="ipv4" netmask="255.255.255.0" />
    </interface>
  </node>
  <link client_id="link-0">     <!-- links are equivalent to VLANs -->
    <interface_ref client_id="interface-0" />
    <interface_ref client_id="interface-2" />
  </link>
  <link client_id="link-1">
    <interface_ref client_id="interface-1" />
    <interface_ref client_id="interface-4" />
  </link>
</rspec>
                                    

As you can see, this XML file is totally independent from the nodes you reserved. You can choose whatever names you want for nodes and interfaces; the interface names just have to be consistent between the node definitions and the links.

TopoMaker isn't provided as a Grid'5000 tool yet, so we will download the source code and use it directly.

Terminal.png rennes:frontend:
cp ~agarnier/public/TopoMaker.zip ~/ && unzip ~/TopoMaker.zip && cd ~/TopoMaker

TopoMaker is developed in Ruby and requires some gems, which are not installed in the environment. We install them:

Terminal.png rennes:frontend:
gem install --user nokogiri

TopoMaker does not do the reservation for you: you should have a running job and provide its job ID.

The TopoMaker command line is as follows:

Terminal.png rennes:frontend:
ruby TopoMaker.rb -d -f sample_in/sample_tp.xml -j JOB_ID

You can add the "-v" option to show more information about the progress of the execution (the script might look stuck at times). In the end, a YAML file is printed with all the required information, such as IPs and hostnames.

TopoMaker doesn't know about DNS in kavlan-local (and IPs are not retrieved), so the rule is the same as in the tutorial when DHCP is used: connect to the node from the gateway using "hostname-kavlan-ID.rennes.grid5000.fr".

Once the script is done, you can run the same tests as above, and make the modifications yourself to use OpenVSwitch with TopoMaker.

Warning.png Warning

If a node is in a VLAN and you want to run TopoMaker again, you have to keep the "-d" option.