Nancy:Network

{{Portal|Network}}
{{Portal|User}}
'''See also:''' [[Nancy:Hardware|Hardware description for Nancy]]
= Overview of Ethernet network topology =


[[File:NancyNetwork.svg|800px]]


== Network device models ==
* gw-next: Aruba 8325-48Y8C JL635A
* gw: Cisco Nexus 9508
* sgrappe: Dell S5224F-ON
* sgrdix: Aruba 8325-48Y8C
* sgrdixib: Mellanox QM8700
* sgrele-opf: Omni-Path
* sgros1: Dell Z9264F-ON
* sgros2: Dell Z9264F-ON
* sgruss: Dell S5224F-ON
* sgrvingt: Dell S4048

More details (including address ranges) are available from the [[Grid5000:Network]] page.
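The same information can also be retrieved programmatically from the Grid'5000 Reference API. A minimal sketch, assuming the standard Reference API layout (authentication is required when calling from outside Grid'5000):

<pre>
# List the uid of each network equipment declared for Nancy
# (endpoint path assumed from the standard Reference API layout).
curl -s https://api.grid5000.fr/stable/sites/nancy/network_equipments | jq -r '.items[].uid'
</pre>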


= HPC Networks =
Several HPC networks are available.


== Omni-Path 100G on grele and grimani nodes ==
*<code class="host">grele-1</code> to <code class="host">grele-14</code> have one 100Gb Omni-Path card.
*<code class="host">grimani-1</code> to <code class="host">grimani-6</code> have one 100Gb Omni-Path card.


* Card Model: Intel Omni-Path Host Fabric Adapter 100 Series, 1 port, PCIe x8
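To experiment with these cards, the matching nodes can be reserved explicitly. A minimal sketch using OAR's standard <code>cluster</code> property (the <code>production</code> queue is an assumption for these clusters):

<pre>
# Reserve one grele node interactively (queue name is an assumption;
# adjust the walltime to your needs).
oarsub -q production -p "cluster='grele'" -l nodes=1,walltime=1:00:00 -I
</pre>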


== Omni-Path 100G on grvingt nodes ==


There is another, separate Omni-Path network connecting the 64 grvingt nodes and some servers. The topology is a non-blocking (1:1) fat tree.
Topology, generated from <code>opareports -o topology</code>:


[[File:Topology-grvingt.png|400px]]
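The state of a node's Omni-Path port can be checked locally with the OPA fabric tools. A sketch, assuming the <code>opainfo</code> utility from Intel's OPA tool suite is present in the deployed environment:

<pre>
# Display the local HFI port status; a healthy port reports an
# Active state and the negotiated 100Gb link rate.
opainfo
</pre>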


More information about using Omni-Path with MPI is available from the [[Run_MPI_On_Grid%275000]] tutorial.


'''NB: OPA (Omni-Path Architecture) is currently not supported on the Debian 12 environment.'''
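As an illustration of running MPI over this fabric (a sketch, not the tutorial's exact commands, and subject to the Debian 12 note above): with an Open MPI build that includes PSM2 support, the Omni-Path transport can be selected explicitly:

<pre>
# Force Open MPI onto the PSM2 (Omni-Path) transport; ./my_mpi_app
# is a placeholder for your own binary.
mpirun --mca pml cm --mca mtl psm2 --hostfile $OAR_NODEFILE ./my_mpi_app
</pre>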


=== Switch ===

* Infiniband Switch 4X DDR
* Model based on InfiniScale III
* 1 switching card (Flextronics F-X43M204)
* 12 line cards with twelve 4X DDR ports each (Flextronics F-X43M203)
=== Interconnection ===


The Omni-Path network is physically isolated from the Ethernet networks. Therefore, the Ethernet network emulated over Omni-Path is isolated as well: there is no interconnection at either the data link (L2) or the network (L3) layer.
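This isolation can be verified from a reserved node: the Ethernet interface and the fabric's IP interface sit on unrelated subnets, with no route between them. A sketch (interface names vary across clusters and are only examples):

<pre>
# Compare addresses and routes of the Ethernet and fabric interfaces;
# no gateway connects the two networks.
ip -br addr show
ip route show
</pre>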
 
