Grenoble:Network

See also: Hardware description for Grenoble

Overview of Ethernet network topology

[Figure: GrenobleNetwork.svg (overview of the Grenoble Ethernet network topology)]


Network device models

  • gw: Dell S5296F-ON
  • imag-1b-F1-prod-01: Aruba JL719C 8360-48Y6C v2 Switch
  • imag-1b-F2-prod-01: Aruba JL719C 8360-48Y6C v2 Switch
  • imag-1b-F3-prod-01: Aruba JL719C 8360-48Y6C v2 Switch

More details (including address ranges) are available from the Grid5000:Network page.
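
The same information can also be retrieved programmatically. Below is a minimal sketch, assuming the Grid'5000 Reference API exposes Grenoble's network equipment at the path shown and that each entry carries uid and model fields; the exact path and field names may differ.

  # Sketch: list Grenoble network equipment from the Reference API
  # (endpoint path and JSON field names are assumptions, not verified here).
  curl -s https://api.grid5000.fr/stable/sites/grenoble/network_equipments | \
    jq -r '.items[] | "\(.uid): \(.model // "unknown")"'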

HPC Networks

Infiniband 100G network

An Infiniband 100G network interconnects the nodes of the drac cluster, alongside the Ethernet network. This Infiniband network consists of a single switch.

The subnet manager of this Infiniband network is provided by the switch.

  • Switch: Mellanox SB7700 IB EDR / 100G
  • Host adapter: Mellanox MT27700 [ConnectX-4] dual-port
  • Each of the 12 drac nodes has two 100G connections to the switch
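
To check the state of this fabric from a drac node, the standard Infiniband command-line tools can be used. A minimal sketch, assuming the infiniband-diags and ibverbs utilities are installed in the deployed environment:

  # On a drac node: inspect the ConnectX-4 adapter and its two ports
  # (assumes infiniband-diags / ibverbs-utils are present in the environment).
  ibstat                  # per-port state; active ports should report a rate of 100
  ibv_devinfo             # verbs-level view of the mlx5 HCA and its ports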

Omni-Path 100G network

The dahu, yeti and troll nodes are connected to a single Omni-Path switch (Intel Omni-Path, 100 Gbps), alongside the Ethernet network.

The fabric manager of this Omni-Path network is provided by one of the Grid'5000 service machines (digwatt).

  • Switch: Intel® Omni-Path Edge Switch 100 Series/H1048-OPF
  • Host adapter: Intel® Omni-Path Host Fabric Interface Adapter 100 Series 1 Port PCIe x16
  • Each of the 44 dahu/yeti/troll nodes has one 100G connection to the switch
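
To verify the Omni-Path link from a dahu, yeti or troll node, the Intel OPA user tools can be used. A minimal sketch, assuming the opa-basic-tools package (which provides opainfo) is installed in the deployed environment:

  # On a dahu/yeti/troll node: check the local Omni-Path port
  # (assumes Intel OPA tools such as opainfo are installed).
  opainfo                 # PortState should be Active, with a 100 Gb/s (4x 25 Gb/s) link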

Since 2021-03-01, this Omni-Path interconnect is no longer shared between the Grid'5000 Grenoble nodes and the nodes of the HPC Center of Université Grenoble Alpes (Gricad mésocentre); see the history of this page for information about the previous configuration.

Topology, generated from opareports -o topology:

[Figure: Topology-grenoble-omnipath-20211110.png (Omni-Path topology generated from opareports)]

NB: OPA (Omni-Path Architecture) is currently not supported on Debian 12 environments.
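
A quick way to check whether a given environment supports Omni-Path is to look for the hfi1 host driver, as in the sketch below (module and package names are the usual upstream ones and may be packaged differently per environment):

  # Check for the Omni-Path host driver (hfi1) in the running environment;
  # on Debian 12 images it is expected to be missing.
  lsmod | grep -q hfi1 && echo "hfi1 loaded" || echo "hfi1 not loaded"
  modinfo hfi1 >/dev/null 2>&1 && echo "hfi1 driver available" || echo "hfi1 driver not available"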

IP Range

  • Ethernet: 172.16.16.0/20
  • HPC (Infiniband/Omni-Path): 172.18.16.0/20 (note that the Infiniband and Omni-Path networks are not directly reachable from one another)
  • Virtual: 10.132.0.0/14
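
On a node, it is easy to see which of these ranges each interface belongs to. A minimal sketch (interface names vary across clusters and are not assumed here):

  # List IPv4 addresses per interface: the Ethernet interface should hold an
  # address in 172.16.16.0/20, the Infiniband/Omni-Path interface one in
  # 172.18.16.0/20.
  ip -br -4 addr show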