Grenoble:Network


Overview of Ethernet network topology

[Figure: GrenobleNetwork.png]


Network device models

  • gw: Dell S5296F-ON

More details (including address ranges) are available from the Grid5000:Network page.

HPC Networks

InfiniBand 100G network

An InfiniBand 100G network interconnects the nodes of the drac cluster. It has a single switch.

  • Switch: Mellanox SB7700 IB EDR / 100G
  • Host adapter: Mellanox MT27700 [ConnectX-4] dual-port
  • Each node has two 100G connections to the switch
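
As a quick sanity check, the following minimal Python sketch counts how many ports are reported active at 100G on a node (two are expected on drac). It assumes the infiniband-diags ibstat tool is installed and that its output uses the usual per-port "State:" and "Rate:" lines; adjust the matching if the local output format differs.

  # Minimal sketch (assumption: infiniband-diags is installed and ibstat's
  # output contains per-port "State:" and "Rate:" lines, as is typical).
  import subprocess

  out = subprocess.run(["ibstat"], capture_output=True, text=True, check=True).stdout

  active_100g = 0
  port_is_active = False
  for line in out.splitlines():
      line = line.strip()
      if line.startswith("Port ") and line.endswith(":"):   # e.g. "Port 1:"
          port_is_active = False                            # new port block starts
      elif line == "State: Active":
          port_is_active = True
      elif line == "Rate: 100" and port_is_active:
          active_100g += 1

  print(f"{active_100g} port(s) active at 100G (2 expected on a drac node)")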

Omni-Path 100G network

Nodes of the dahu, yeti and troll clusters are connected to Omni-Path switches (Intel Omni-Path 100Gbps), in addition to the Ethernet network.

This Omni-Path network interconnects both the Grid'5000 Grenoble nodes and the nodes of the HPC Center of Université Grenoble Alpes (Gricad mésocentre).

The topology used is a fat tree with a 2:1 blocking factor:

  • 2 top switches
  • 6 leaves: switches with 32 downlinks to nodes and 8 uplinks to each of the two top switches (48 ports total, as worked out in the sketch below)
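
The 2:1 figure follows directly from the per-leaf port counts above; the short Python sketch below just redoes that arithmetic (all numbers are taken from the list, nothing else is assumed).

  # Per-leaf port budget of the Omni-Path fat tree (numbers from the list above).
  downlinks_per_leaf = 32            # ports going down to compute nodes
  uplinks_per_top_switch = 8         # uplinks from one leaf to one top switch
  top_switches = 2

  uplinks_per_leaf = uplinks_per_top_switch * top_switches   # 16
  total_ports = downlinks_per_leaf + uplinks_per_leaf        # 48, the switch size
  blocking = downlinks_per_leaf / uplinks_per_leaf           # 2.0, i.e. 2:1 blocking

  print(f"{total_ports} ports per leaf, blocking factor {blocking:.0f}:1")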

All 32 dahu nodes are connected to the same leaf. The 4 yeti nodes are connected to another leaf. TBC for troll.

Other ports are used by nodes of the HPC center of UGA.

[Figure: G5k-grenoble-network-opa.png]


A generated topology diagram (using the output of opareports -o topology):

[Figure: Grenoble-DC-IMAG-OPA.svg]
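
A similar report can be captured directly on a node. The minimal Python sketch below simply runs the command quoted above and saves its output; it assumes the Intel OPA FastFabric tools are installed and does not depend on the report's format (the opa-topology.txt filename is only an example).

  # Minimal sketch: run the opareports command quoted above and keep its output.
  import subprocess

  result = subprocess.run(
      ["opareports", "-o", "topology"],
      capture_output=True, text=True, check=True,
  )

  with open("opa-topology.txt", "w") as f:       # example filename
      f.write(result.stdout)

  print(f"saved {len(result.stdout.splitlines())} lines of topology report")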

IP Ranges

  • Computing: 172.16.16.0/20
  • Omni-Path: 172.18.16.0/20
  • Virtual: 10.132.0.0/14
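
A quick way to check which of these ranges an address belongs to is Python's standard-library ipaddress module. The sketch below is only an illustration: the range names come from the list above and the test address is chosen arbitrarily.

  # Map an address onto the Grenoble ranges listed above (standard library only).
  import ipaddress

  RANGES = {
      "Computing": ipaddress.ip_network("172.16.16.0/20"),
      "Omni-Path": ipaddress.ip_network("172.18.16.0/20"),
      "Virtual":   ipaddress.ip_network("10.132.0.0/14"),
  }

  def classify(addr: str) -> str:
      ip = ipaddress.ip_address(addr)
      for name, net in RANGES.items():
          if ip in net:
              return name
      return "outside the listed ranges"

  print(classify("172.18.17.5"))   # -> Omni-Path (arbitrary example address)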