Grenoble:Network


Overview of Ethernet network topology

[Figure: GrenobleNetwork.png (Ethernet network topology of the Grenoble site)]


Network device models

  • gw: Dell S5296F-ON
  • imag-1b-F1-admin-01: Aruba R9W97A 8100-40XT8XF4C switch
  • imag-1b-F1-prod-01: Aruba JL719C 8360-48Y6C v2 switch
  • imag-1b-F2-admin-01: Aruba R9W97A 8100-40XT8XF4C switch
  • imag-1b-F2-prod-01: Aruba JL719C 8360-48Y6C v2 switch
  • imag-1b-F3-admin-01: Aruba R9W97A 8100-40XT8XF4C switch
  • imag-1b-F3-prod-01: Aruba JL719C 8360-48Y6C v2 switch
  • opa-grenoble: Intel Omni-Path Edge Switch 100 Series/H1048-OPF
  • skinovis2-admin-01: Dell PowerConnect 6248
  • skinovis2-prod-01: Cisco Nexus N9K-C93360YC-FX2
  • sw-ib-mellanox: Mellanox InfiniBand switch

More details (including address ranges) are available from the Grid5000:Network page.
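
The same information can also be retrieved programmatically. The sketch below is a minimal example, assuming the Grid'5000 reference API exposes the Grenoble network equipment under /stable/sites/grenoble/network_equipments and returns a JSON document with an "items" list containing "uid" and "model" fields (both the path and the field names are assumptions to check against the API documentation); valid Grid'5000 credentials are also assumed.

  # Minimal sketch: list Grenoble network equipment via the Grid'5000 reference API.
  # The endpoint path and JSON field names are assumptions, not confirmed by this page.
  import requests

  URL = "https://api.grid5000.fr/stable/sites/grenoble/network_equipments"

  def list_network_equipment(user, password):
      resp = requests.get(URL, auth=(user, password))
      resp.raise_for_status()
      for item in resp.json().get("items", []):
          print(item.get("uid"), "-", item.get("model"))

  if __name__ == "__main__":
      list_network_equipment("g5k_login", "g5k_password")  # placeholder credentials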

Omni-Path Topology

In addition to the Ethernet network, the nodes of the dahu, yeti and lama clusters are connected to Omni-Path switches (Intel Omni-Path, 100 Gbps).

This Omni-Path network interconnects both the Grid'5000 Grenoble nodes and the nodes of the HPC Center of Université Grenoble Alpes (Gricad mésocentre).

The topology used is a fat tree with a 2:1 blocking factor:

  • 2 top switches
  • 6 leaf switches, each with 32 downlinks to nodes and 8 uplinks to each top switch (48 ports in total)

All 32 dahu nodes are connected to the same leaf, and the 4 yeti nodes to another leaf. The attachment of the lama nodes is TBC.
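
To make the 2:1 figure concrete, here is a minimal arithmetic sketch using only the port counts given above: each leaf has 32 downlinks and 2 × 8 = 16 uplinks, so the oversubscription ratio is 32/16 = 2:1 and each leaf uses 48 ports.

  # Minimal sketch: check the blocking factor of the Omni-Path fat tree
  # from the port counts given above.
  DOWNLINKS_PER_LEAF = 32        # ports toward nodes
  TOP_SWITCHES = 2
  UPLINKS_PER_TOP = 8            # uplinks from each leaf to each top switch

  uplinks_per_leaf = TOP_SWITCHES * UPLINKS_PER_TOP        # 16
  blocking_factor = DOWNLINKS_PER_LEAF / uplinks_per_leaf  # 2.0, i.e. 2:1
  ports_per_leaf = DOWNLINKS_PER_LEAF + uplinks_per_leaf   # 48-port leaf switch

  print(f"{blocking_factor:.0f}:1 blocking, {ports_per_leaf} ports per leaf")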

[Figure: G5k-grenoble-network-opa.png (Omni-Path topology of the Grenoble site)]

IP Ranges

  • Computing: 172.16.16.0/20
  • Omni-Path: 172.18.16.0/20
  • Virtual: 10.132.0.0/14
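
A quick way to check which of these ranges a given address belongs to is Python's standard ipaddress module. The sketch below simply hard-codes the three ranges listed above; the labels are for display only.

  # Minimal sketch: classify an IP address against the Grenoble ranges listed above,
  # using only the Python standard library.
  import ipaddress

  RANGES = {
      "Computing": ipaddress.ip_network("172.16.16.0/20"),
      "Omni-Path": ipaddress.ip_network("172.18.16.0/20"),
      "Virtual":   ipaddress.ip_network("10.132.0.0/14"),
  }

  def classify(addr):
      ip = ipaddress.ip_address(addr)
      for name, net in RANGES.items():
          if ip in net:
              return name
      return "outside the listed Grenoble ranges"

  print(classify("172.16.20.5"))  # -> Computing
  print(classify("10.133.0.1"))   # -> Virtual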