Network reconfiguration tutorial
Revision as of 16:18, 19 January 2016
Note: This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.
Introduction
This tutorial aims to present an example use of Grid'5000 in which we will configure a customized network topology for an experiment, using KaVLAN.
KaVLAN is a Grid'5000 tool that allows a user to manage VLANs in the platform. Behind the scenes, KaVLAN actually changes the configuration of the network switches, to set the VLAN membership (VLAN ID) of the ports corresponding to the network interfaces of one or more nodes. This mechanism provides layer 2 network isolation for experiments.
Three kinds of KaVLAN VLANs are available on Grid'5000. You can find more information in the KaVLAN page. In this tutorial, we will use global and local VLANs (no routed VLAN) only.
Topology setup
Reservations of the resources
We will work both in Rennes and Nancy. Please open 2 terminal windows, one for Rennes and one for Nancy.
In Rennes (Rennes terminal)
A global VLAN spans the whole of Grid'5000, so we only need to reserve it on one site. We will use the site of Rennes for that. We also reserve a local VLAN.
First we log in to Rennes' frontend in our first terminal, and execute:
rennes:frontend:
oarsub -l {"type='kavlan-global'"}/vlan=1+{"type='kavlan-local'"}/vlan=1+{"cluster='paravance'"}/nodes=3,walltime=3 -t deploy "sleep 10d"
With this reservation, we have 1 kavlan-local, 1 kavlan-global and 3 nodes of the paravance cluster. Please note that the nodes of the paravance cluster have 2 network interfaces, which is required for this tutorial.
Note: To get the list of clusters that have nodes with several network interfaces, you can look at the Special_Features page.
For simplicity, let's call our 3 nodes in Rennes node1, node2 and node3 (in your real case, node1=<hostname of your first node> and so on; for instance you might have node1 = paravance-23).
Get your VLAN IDs :
You can know which VLAN ID is global and which one is local, using the following table:
KaVLAN name in OAR | type | first id | last id |
---|---|---|---|
kavlan-local | local | 1 | 3 |
kavlan | routed | 4 | 9 |
kavlan-global | global | 10 | 21 |
(see KaVLAN for more details)
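The ID ranges from the table above can be turned into a tiny helper to check which kind of VLAN a given ID belongs to. A minimal sketch (the kavlan_type function is hypothetical, not a Grid'5000 tool):

```shell
# Hypothetical helper: classify a KaVLAN ID using the ranges from the table.
kavlan_type() {
    id=$1
    if [ "$id" -ge 1 ] && [ "$id" -le 3 ]; then
        echo local
    elif [ "$id" -ge 4 ] && [ "$id" -le 9 ]; then
        echo routed
    elif [ "$id" -ge 10 ] && [ "$id" -le 21 ]; then
        echo global
    else
        echo unknown
    fi
}

kavlan_type 2    # prints: local
kavlan_type 16   # prints: global
```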
In Nancy (Nancy terminal)
The global VLAN was already reserved in Rennes, so we do not need to reserve it again in Nancy. Furthermore, we will not use any local VLAN in Nancy, so we just do a classic deploy job:
In our second terminal, we log in to Nancy's frontend and execute:
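A sketch of the reservation, mirroring the Rennes command (the walltime and the "sleep 10d" placeholder are assumptions; adapt them to your needs):

```shell
# Reserve one node in Nancy in deploy mode; no VLAN types are requested here.
oarsub -l nodes=1,walltime=3 -t deploy "sleep 10d"
```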
For simplicity's sake, let's call our node in Nancy node_nancy (in your real case, node_nancy=<hostname of your node>; for instance you might have node_nancy = graphite-3).
Deployment of our operating system for the experiment
In Rennes (Rennes terminal)
Now we will deploy our nodes in Rennes with the Debian jessie minimal image:
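A sketch of the deployment command, assuming the standard kadeploy3 invocation with the jessie-x64-min environment:

```shell
# Deploy the jessie-x64-min environment on all nodes of the job,
# copying our public SSH key to the deployed root account (-k).
kadeploy3 -e jessie-x64-min -f $OAR_NODEFILE -k
```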
In the Debian jessie image, the default sshd configuration doesn't allow password authentication; as a result, you cannot connect from a deployed node to another node with a password. However, there are several solutions to overcome that issue:
- kaconsole
Kaconsole is a tool provided in Grid'5000 that lets you connect to the serial console of a node, much the same way you would connect to a virtual console of a GNU/Linux workstation (CTRL+ALT+F1).
Credentials for login on the console are: "root":"grid5000".
- Add a SSH key pair on every node
Even though it is very useful in last resort, using the console is not very convenient. So we will actually push SSH keys on the nodes.
You can use the following bash commands from the reservation shell (where you ran the oarsub command) to generate an SSH key pair and copy it to the root user's home directory on your nodes:
#!/bin/bash
temp=$(mktemp -d XXXXX)
ssh-keygen -t rsa -f $temp/id_rsa -P ""
for i in $(uniq $OAR_NODEFILE); do
    scp $temp/id_rsa root@$i:.ssh/ && ssh-copy-id -i $temp/id_rsa.pub root@$i
done
rm -r $temp
It generates a temporary RSA key pair, copies the private key to each node, and authorizes connections using that key.
In Nancy (Nancy terminal)
In Nancy we have a single node. We just deploy it.
Network configuration
In Nancy (Nancy terminal)
We will put a static IP on node_nancy, and put it in the global VLAN we reserved. To do so, we will configure the network interface.
The question is: what IP will we give to the node? If we set an arbitrary IP in a random IP range, the node won't be reachable anymore, because we would try to contact it via the site router (default route), and the router wouldn't know where to send these packets. To address this problem, Grid'5000 is configured with IP ranges dedicated to each VLAN. You can check these ranges in the Golden Rules page.
Besides, giving an arbitrary IP to a node raises a naming issue, because the DNS will obviously not be aware of this IP change: we won't be able to run commands using the name node_nancy. Fortunately, Grid'5000 provides static DNS entries for each node in each VLAN, so a node can always be reached by name instead of IP. These names follow the pattern node_name-kavlan-VLAN_ID. For instance, the built-in name for the node graphite-3 in the global VLAN 16 is "graphite-3-kavlan-16".
Consequently, it is much more convenient to configure node_nancy with the IP already associated to its name in the DNS.
This IP will be given by the 'host' command:
Example:
graphite-3-kavlan-16.nancy.grid5000.fr has address 10.27.200.239
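To script this, the address can be extracted from the 'host' output. A small sketch parsing the example line above (the real command would be `host graphite-3-kavlan-16.nancy.grid5000.fr`, run from inside Grid'5000):

```shell
# Parse the 'host' output shown above to keep only the IPv4 address.
host_output='graphite-3-kavlan-16.nancy.grid5000.fr has address 10.27.200.239'
IP=$(printf '%s\n' "$host_output" | awk '/has address/ {print $4}')
echo "$IP"   # prints: 10.27.200.239
```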
So let's configure the interface with this IP (we assume here that it is eth0, but you can check which interface is in use on the node with the "ip address" command), and the "/18" netmask mentioned in the Golden Rules page.
auto eth0
iface eth0 inet static
    address <node_nancy's IP in global VLAN>
    netmask 255.255.192.0
Note: Setting a new IP directly on your node while it is still in the production VLAN is generally a bad idea, because you could create address conflicts with other parts of the network.
So, we exit the node to put it in the global VLAN before applying the new network configuration. Let's try to do so with a naive approach:
Now, we want to apply the new configuration on node_nancy:
But this command should timeout! It makes sense, because the node is not available in the frontend's network anymore.
To put node_nancy back in the production VLAN, do the following on the frontend:
And of course, the opposite order would fail too: trying to apply the new network configuration before switching the VLAN.
The following is a better approach, in order to reconfigure the network after setting the VLAN. It makes use of the 'at' command:
nancy:frontend:
ssh root@node_nancy "apt-get --yes install at && echo 'service networking restart' | at now + 1 minute" && kavlan -s -i Global_Vlan_Id -m node_nancy.nancy.grid5000.fr --verbose
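The trick generalizes: schedule the disruptive command for slightly later, then cut yourself off. A local, self-contained sketch of the same pattern, using a background subshell in place of at(1):

```shell
# Stand-in for "echo ... | at now + 1 minute": run the delayed command
# in a background subshell while the "connection" is being lost.
marker=$(mktemp)
( sleep 1; echo done > "$marker" ) &    # the delayed 'service networking restart'
# ...here the real tutorial runs kavlan and loses SSH connectivity...
wait                                     # let the delayed command fire
result=$(cat "$marker")
echo "$result"   # prints: done
rm -f "$marker"
```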
In Rennes (Rennes terminal)
We need to put node1 in the local VLAN and restart the networking service to get a new IP address thanks to the DHCP service (which is provided in every kavlan-local VLAN). Like earlier, we need the 'at' command to do that, so let's install it on all the nodes using TakTuk. Moreover, we won't have access to the Internet from the nodes after putting them in their VLANs, so we will also install tcpdump to use it later in the tutorial:
rennes:frontend:
taktuk -s -l root -f $OAR_FILE_NODES broadcast exec [ "apt-get update; apt-get --yes install at tcpdump" ]
rennes:frontend:
ssh root@node1 "echo 'service networking restart' | at now + 1 minute" && kavlan -s -i Local_Vlan_Id -m node1.rennes.grid5000.fr
Warning: Be careful not to mix up the syntax of …
Before changing node2's network configuration, and since we will need OpenVSwitch on that node in the second part of this tutorial, we first install the openvswitch-switch package before the node leaves the production VLAN, as it won't be able to access the Internet afterwards:
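A sketch of that installation step, run from the Rennes frontend:

```shell
# Install Open vSwitch on node2 while it still has Internet access
# through the production VLAN.
ssh root@node2 "apt-get update && apt-get --yes install openvswitch-switch"
```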
We can now put node2 in between the local and global VLANs: the first interface will be on the local VLAN, while the second one will be on the global VLAN.
Like above, the first interface, which is in the local VLAN, will get an IP from the DHCP service provided there. But we have to set up a static IP configuration for the second interface, eth1 (according to the information of the Grid'5000 API, eth1 is indeed cabled to the switch: see eth1 in https://api.grid5000.fr/sid/sites/rennes/clusters/paravance/nodes/paravance-1.json?pretty).
For that purpose we need to edit /etc/network/interfaces on node2:
auto eth1
iface eth1 inet static
    address 192.168.1.2
    netmask 255.255.255.0
We now switch the first interface to the local VLAN and the second interface in the global VLAN:
rennes:frontend:
ssh root@node2 "echo 'service networking restart' | at now + 1 minute" && kavlan -s -i Local_Vlan_Id -m node2 && kavlan -s -i Global_Vlan_Id -m node2-eth1
First test of the topology
We should now have a setup as follows:
Now we should be able to connect to node2 using SSH from the kavlan gateway, and ping 192.168.1.1, which is the node in Nancy.
Setting up the routing between the VLANs
node1 and node_nancy are in two different VLANs (node1 is in a local VLAN, while node_nancy is in a global VLAN). As a result, it is for now impossible to send a packet between node1 and node_nancy.
To enable that, we will setup a route between the local VLAN and the global VLAN (192.168.1.0/24).
Since node2 has 2 network interfaces, one in each of the VLANs, we will configure it to act as a router (also sometimes called a gateway).
Note: To find out which IP subnet is used in the local VLAN, we can connect to node1, node2 or kavlan-ID and run the command: ip route
Warning: For ease of writing, we will assume here that this network is 192.168.192.0/20.
First, we have to enable routing on node2:
Now, if an IP packet with a destination in a known network is sent to node2, it will be forwarded.
But node1 doesn't know how to reach node_nancy's network, and vice versa. Let's add a route to each node:
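A sketch of the commands involved, assuming node2's addresses are 192.168.200.8 on the local VLAN side (as seen later in the traceroute output) and 192.168.1.2 on the global VLAN side; check yours with "ip address":

```shell
# On node2 (the router): enable IP forwarding.
sysctl -w net.ipv4.ip_forward=1

# On node1: route the global VLAN's subnet via node2's local-side address.
ip route add 192.168.1.0/24 via 192.168.200.8

# On node_nancy: route the local VLAN's subnet via node2's global-side address.
ip route add 192.168.192.0/20 via 192.168.1.2
```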
Testing
We first check if packets are able to find their way via our router.
We run on node1 and node2 (in Rennes) the following commands:
(tcpdump is a network traffic sniffer, here filtering ICMP traffic on eth0)
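A sketch of the test, assuming eth0 carries the traffic on both nodes:

```shell
# On node2, watch ICMP traffic crossing the router:
tcpdump -i eth0 icmp

# In another shell, on node1, ping node_nancy through the router:
ping -c 3 192.168.1.1
```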
On the first terminal you should see that the ICMP packets are forwarded on node2 :
IP 192.168.200.7 > 192.168.1.1: ICMP echo request, id 4270, seq 1, length 64
IP 192.168.1.1 > 192.168.200.7: ICMP echo reply, id 4270, seq 1, length 64
IP 192.168.200.7 > 192.168.1.1: ICMP echo request, id 4270, seq 2, length 64
IP 192.168.1.1 > 192.168.200.7: ICMP echo reply, id 4270, seq 2, length 64
IP 192.168.200.7 > 192.168.1.1: ICMP echo request, id 4270, seq 3, length 64
IP 192.168.1.1 > 192.168.200.7: ICMP echo reply, id 4270, seq 3, length 64
If IP forwarding were disabled on node2 (see the sysctl command above), nothing would show up in tcpdump and ping would report errors.
We also check the packet route with traceroute from node1:
We should see two hops: the intermediary router and the target:
traceroute to 192.168.1.1 (192.168.1.1), 30 hops max, 60 byte packets
 1  192.168.200.8 (192.168.200.8)  0.136 ms  0.122 ms  0.115 ms
 2  192.168.1.1 (192.168.1.1)  25.612 ms  25.617 ms  25.611 ms
The first hop is from node1 to node2, and the second one is from node2 to node_nancy.
With tcpdump we can check the complete isolation of nodes from the production VLAN (and from any other one) :
On node1 (over a 12 second window):
14:26:15.107927 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
14:26:17.109436 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
14:26:19.108669 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
14:26:21.108675 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
14:26:23.108669 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
14:26:25.108654 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
14:26:27.108674 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 82bd.8c:60:4f:47:6c:bc.808e, length 43
The only frames received are spanning tree frames, from the switch.
In comparison, this is what can be captured on a node in the production VLAN in just one second:
14:27:43.920934 IP paravance-60.rennes.grid5000.fr.38784 > dns.rennes.grid5000.fr.domain: 65121+ PTR? 5.98.16.172.in-addr.arpa. (42)
14:27:43.921384 IP dns.rennes.grid5000.fr.domain > paravance-60.rennes.grid5000.fr.38784: 65121* 1/1/0 PTR parapide-5.rennes.grid5000.fr. (103)
14:27:43.921510 IP paravance-60.rennes.grid5000.fr.49250 > dns.rennes.grid5000.fr.domain: 48890+ PTR? 111.111.16.172.in-addr.arpa. (45)
14:27:43.921816 IP dns.rennes.grid5000.fr.domain > paravance-60.rennes.grid5000.fr.49250: 48890* 1/1/0 PTR kadeploy.rennes.grid5000.fr. (104)
14:27:44.017208 ARP, Request who-has parapide-5.rennes.grid5000.fr tell dns.rennes.grid5000.fr, length 46
14:27:44.201278 IP6 fe80::214:4fff:feca:9470 > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
14:27:44.201416 IP paravance-60.rennes.grid5000.fr.34416 > dns.rennes.grid5000.fr.domain: 7912+ PTR? 6.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.f.f.ip6.arpa. (90)
14:27:44.284641 ARP, Request who-has parapide-9.rennes.grid5000.fr tell kadeploy.rennes.grid5000.fr, length 46
14:27:44.307171 ARP, Request who-has parapide-5.rennes.grid5000.fr tell metroflux.rennes.grid5000.fr, length 46
14:27:44.398978 IP dns.rennes.grid5000.fr.domain > paravance-60.rennes.grid5000.fr.34416: 7912 NXDomain 0/1/0 (160)
Here we see ARP requests, DNS messages, multicast reports…
Communication without routing: OpenVSwitch
- Setup
Our goal is to allow 2 nodes from 2 different (physical) VLANs to communicate without routing. To do this, nodes from each side must be in the same IP network, so our two layer 2 VLANs must be bridged. In this tutorial, we will not use a Linux bridge (the brctl command), but OpenVSwitch, which provides enhanced functionality.
First we need to change the IP of node_nancy to an IP in the local VLAN's subnet (192.168.192.0/20 in our case; we can use 192.168.200.2 for instance), different of course from the IPs of node1 and node2. Then restart the networking service.
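A sketch of that change on node_nancy (the 192.168.200.2 address is an assumption; a /20 subnet corresponds to netmask 255.255.240.0):

```shell
# Replace node_nancy's network configuration with a static address in the
# local VLAN's subnet, then restart networking.
cat > /etc/network/interfaces <<'EOF'
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.200.2
    netmask 255.255.240.0
EOF
service networking restart
```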
We already installed OpenVSwitch on our router node, node2; we can now use it.
Warning: node2 will lose its IP address, so the SSH session will be lost; use Kaconsole instead for the following steps.
- Create the bridge
- Remove IPs on both interfaces eth0 and eth1:
- Add eth0 and eth1 to the bridge:
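From the Kaconsole on node2, the steps above can be sketched with standard Open vSwitch and iproute2 commands (br0 is an arbitrary bridge name, and running the DHCP client on the bridge afterwards is an assumption):

```shell
# Create the bridge.
ovs-vsctl add-br br0

# Remove the IPs from both physical interfaces.
ip addr flush dev eth0
ip addr flush dev eth1

# Add eth0 and eth1 to the bridge and bring it up.
ovs-vsctl add-port br0 eth0
ovs-vsctl add-port br0 eth1
ip link set br0 up

# node2 itself can then get an address on the bridge, e.g. via DHCP.
dhclient br0
```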
- Tests
Now that it is done, you should be able to ping node_nancy (with its new IP!).
Also if you run traceroute, you will notice that there is only one hop between node1 and node_nancy.
- Flow control
You can use OpenVSwitch to manage flows; for example, you can DROP all packets from a given IP with this command:
"in_port=1" refers to eth0 in our case. You can list all the ports in the bridge with:
You can also DROP all pings to a specific IP from a specific port:
And the following command will display all your flow rules :
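A sketch of the corresponding ovs-ofctl commands, assuming the bridge is named br0 (the 192.168.200.7 and 192.168.1.1 addresses are placeholders):

```shell
# Drop all IPv4 packets from a given source IP arriving on port 1 (eth0 here).
ovs-ofctl add-flow br0 "in_port=1,ip,nw_src=192.168.200.7,actions=drop"

# List all the ports of the bridge (with their OpenFlow port numbers).
ovs-ofctl show br0

# Drop all pings (ICMP) to a specific IP coming from a specific port.
ovs-ofctl add-flow br0 "in_port=1,icmp,nw_dst=192.168.1.1,actions=drop"

# Display all the flow rules currently installed.
ovs-ofctl dump-flows br0
```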
Note: If you want to know more about flow syntax, see the "Flow Syntax" section of the ovs-ofctl man page.
Automating the configuration with TopoMaker
TopoMaker is a brand new tool on Grid'5000 (the software is still in beta testing for now), which automates all the steps of the "Topology setup" part (except the last paragraph about routing).
There is no documentation yet, but we will start from an example, to build the same topology as in the first paragraphs.
TopoMaker takes an RSpec XML file as input. This input format is actually inspired by the syntax used in CloudLab (another experimental testbed). Here is the example we will use:
<?xml version="1.0" encoding="UTF-8"?>
<rspec>
  <node client_id="node-2">
    <interface client_id="interface-0"/> <!-- No IP specified means DHCP -->
    <interface client_id="interface-1">
      <ip address="192.168.1.3" type="ipv4" netmask="255.255.255.0" />
    </interface>
    <sliver_type name="raw-pc">
      <disk_image name="jessie-x64-min" /> <!-- This is just for the example, because the default OS chosen by TopoMaker is jessie-x64-min -->
    </sliver_type>
    <auto_install>
      <apt name="tcpdump"/>
      <apt name="openvswitch-switch"/>
    </auto_install>
  </node>
  <node client_id="node-1"> <!-- Here we will have DHCP on jessie-x64-min -->
    <interface client_id="interface-2"/>
  </node>
  <node client_id="node-3"> <!-- node-3 is equivalent to our node_nancy; here it will be on the same site, as we saw it doesn't matter with a global VLAN -->
    <interface client_id="interface-4">
      <ip address="192.168.1.1" type="ipv4" netmask="255.255.255.0" />
    </interface>
  </node>
  <link client_id="link-0"> <!-- links are equivalent to VLANs -->
    <interface_ref client_id="interface-0" />
    <interface_ref client_id="interface-2" />
  </link>
  <link client_id="link-1">
    <interface_ref client_id="interface-1" />
    <interface_ref client_id="interface-4" />
  </link>
</rspec>
As you can see, this XML file is totally independent from the nodes you reserved: you can choose any names for nodes and interfaces. The names just have to be consistent between interface declarations and link references.
TopoMaker isn't provided as a Grid'5000 tool yet, so we will download the source code and use it directly.
TopoMaker is developed in Ruby and requires some gems, which are not installed in the environment. We install them:
TopoMaker does not do the reservation for you. You should have a running job and provide its jobid.
The TopoMaker command line is as follows:
You can add the "-v" option to show more information about the progress of the execution (the script might otherwise look stuck at times). In the end, a YAML file is printed with all the required information, such as IPs and hostnames.
TopoMaker doesn't know about DNS in kavlan-local (and IPs are not retrieved), so the rule is the same as in the tutorial when you use DHCP: connect to the node from the gateway with "hostname-kavlan-ID.rennes.grid5000.fr".
Once the script is done, you can run the same tests as above, and make the modifications yourself to use OpenVSwitch with TopoMaker.