Sophia:Network
{{Template:Site link|Network}}
{{Portal|Network}}
{{Portal|User}}

'''See also:''' [[Sophia:Hardware|Hardware description for Sophia]]

= Overview of Ethernet network topology =
[[File:SophiaNetwork.svg|600px]]

{{:Sophia:GeneratedNetwork}}

'''Network device models:'''
* gw: Foundry
* swx1nef: Juniper EX2500-24F-FB
* swy1nef: Dell PowerConnect 6248

More details (including address ranges) are available from the [[Grid5000:Network]] page.
= Old information (possibly outdated) =

[[Image:Sophia_modified.jpg|center|thumb|Physical view of the compute network|600px]]
== Gigabit Ethernet ==

=== Suno Cluster (Dell R410) ===
All the nodes are connected (in a non-blocking way) to the main Foundry FastIron Super X switch.
=== UVB Cluster ===
All the nodes are connected to two stacked Dell PowerConnect 6248 switches. The PowerConnect stack is connected to the Foundry FastIron Super X through a 10 Gbps link.
=== Topology ===
The main switch is a Foundry FastIron Super X. It has 2 dual 10 Gbps modules, 4 modules with 24 gigabit ports, and 12 gigabit ports on the management module (so 60 gigabit ports are available). 4 slots are currently free.
== High-performance networks ==

=== InfiniBand 40G on uvb ===
All uvb cluster nodes are connected to 40G InfiniBand switches. Since these nodes are shared with the Nef production cluster at INRIA Sophia, InfiniBand partitions are used to isolate them from Nef while they are available on Grid'5000. The partition dedicated to Grid'5000 is '''0x8100'''. The IPoIB interfaces on the nodes are therefore named <code>ib0.8100</code> instead of <code>ib0</code>.
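Once on a node, a quick sanity check (a minimal sketch assuming standard Linux tools; the exact output depends on the node) is to verify that the partition-specific IPoIB interface exists and carries the expected partition key:

<pre>
# List the IPoIB child interface created for the Grid'5000 partition
ip addr show ib0.8100

# The partition key used by the interface is exposed through sysfs
cat /sys/class/net/ib0.8100/pkey
</pre>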
To use the native openib driver of '''Open MPI''', you must set <code>btl_openib_pkey = 0x8100</code>.
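For example, the parameter can be passed on the <code>mpirun</code> command line or stored once in the per-user MCA configuration file. This is only a sketch: <code>my_mpi_app</code> and the process count are placeholders for your own job.

<pre>
# Pass the partition key directly on the command line
mpirun --mca btl_openib_pkey 0x8100 -np 8 ./my_mpi_app

# Or set it once in the per-user Open MPI configuration file
echo "btl_openib_pkey = 0x8100" >> ~/.openmpi/mca-params.conf
</pre>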
==== Nodes ====
* <code class="host">uvb-1</code> to <code class="host">uvb-44</code> have one QDR InfiniBand card.
* Card model: Mellanox Technologies MT26428 [ConnectX IB QDR, PCIe 2.0 5GT/s].
* Driver: <code class="dir">mlx4_ib</code>
* OAR property: <code>ib_rate=40</code> (see the reservation example after this list)
* IP over IB addressing:
** <code class="host">uvb-[1..44]-ib0</code>.sophia.grid5000.fr (172.18.132.[1..44])
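As an illustration (the exact OAR property syntax and walltime are only examples), nodes with these cards can be selected through the <code>ib_rate</code> property, and the IP-over-IB names resolve from the frontend:

<pre>
# Reserve two nodes that have the 40G InfiniBand cards (interactive job, 1 hour)
oarsub -I -l nodes=2,walltime=1:00:00 -p "ib_rate=40"

# Resolve the IP-over-IB address of a node, e.g. uvb-1
host uvb-1-ib0.sophia.grid5000.fr
</pre>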
==== Switch ====
* Three Mellanox IS50xx QDR InfiniBand switches
* Topology available here: https://wiki.inria.fr/ClustersSophia/Network (uvb nodes are nef084-nef140 on the Nef production cluster)
==== Interconnection ====
The InfiniBand network is physically isolated from the Ethernet networks. Therefore, the Ethernet network emulated over InfiniBand is isolated as well. There is no interconnection, either at the data link layer or at the network layer.
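One way to observe this from a node is to compare the routes attached to the IPoIB interface with the default (Ethernet) route; only the 172.18.132.x subnet should appear on <code>ib0.8100</code>. This is an illustrative check, not a proof of isolation:

<pre>
# Routes carried by the IP-over-IB interface
ip route show dev ib0.8100

# Default route (goes through the Ethernet network)
ip route show default
</pre>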
== Architecture ==

[[Image:Sophia-cluster_modified.png|center|thumb|View of the rack|600px]]