Nancy:Network
See also: Hardware description for Nancy
Overview of Ethernet network topology
Network device models
- gw-next: Aruba 8325-48Y8C JL635A
- gw: Cisco Nexus 9508
- sgrappe: Dell S5224F-ON
- sgrdix: Aruba 8325-48Y8C
- sgrdixib: Mellanox QM8700
- sgrele-opf: Omni-Path
- sgros1: Dell Z9264F-ON
- sgros2: Dell Z9264F-ON
- sgruss: Dell S5224F-ON
- sgrvingt: Dell S4048
More details (including address ranges) are available from the Grid5000:Network page.
HPC Networks
Several HPC Networks are available.
Omni-Path 100G on grele and grimani nodes
grele-1 to grele-14 have one 100 Gbps Omni-Path card.
grimani-1 to grimani-6 have one 100 Gbps Omni-Path card.
- Card model: Intel Omni-Path Host Fabric Adapter 100 Series, 1 port, PCIe x8
Omni-Path 100G on grvingt nodes
There's another, separate Omni-Path network connecting the 64 grvingt nodes and some servers. The topology is a non-blocking fat tree (1:1).
Topology, generated from opareports -o topology:
More information about using Omni-Path with MPI is available from the Run_MPI_On_Grid'5000 tutorial.
NB: OPA (Omni-Path Architecture) is currently not supported in the Debian 12 environment.
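A rough sketch of typical usage (node counts, walltime and the program name are illustrative assumptions, not taken from this page): reserve grvingt nodes interactively with OAR, inspect the fabric, then launch an MPI job over Omni-Path. The --mca options are standard Open MPI settings for the PSM2 transport used by Omni-Path; use a pre-Debian-12 environment, since OPA is not supported there.

  # reserve two grvingt nodes interactively (illustrative walltime)
  oarsub -I -p "cluster='grvingt'" -l nodes=2,walltime=1:00:00

  # inspect the fabric topology from a node with the Omni-Path tools installed
  opareports -o topology

  # run an MPI program over Omni-Path with Open MPI's PSM2 transport
  mpirun -machinefile $OAR_NODEFILE --mca pml cm --mca mtl psm2 ./my_mpi_program

See the Run_MPI_On_Grid'5000 tutorial for the supported way of launching MPI jobs.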
Switch
- Infiniband Switch 4X DDR
- Model based on Infiniscale_III
- 1 switching card Flextronics F-X43M204
- 12 line cards (4X DDR, 12 ports each) Flextronics F-X43M203
Interconnection
The Omni-Path network is physically isolated from the Ethernet networks. Therefore, the Ethernet network emulated over Omni-Path is isolated as well. There is no interconnection at either the L2 or L3 layer.
Infiniband 56G on graphite/graoully/grimoire/grisou nodes
graoully-[1-16] have one 56 Gbps Infiniband card.
grimoire-[1-8] have one 56 Gbps Infiniband card.
grisou-[50-51] have one 56 Gbps Infiniband card.
- Card model: Mellanox Technologies MT27500 Family (ConnectX-3)
- Driver: mlx4_core
- OAR property: ib_rate='56' (see the reservation example after this list)
- IP over IB addressing: graoully-[1-16]-ib0.nancy.grid5000.fr (172.18.70.[1-16])
- IP over IB addressing: grimoire-[1-8]-ib0.nancy.grid5000.fr (172.18.71.[1-8])
- IP over IB addressing: grisou-[50-51]-ib0.nancy.grid5000.fr (172.16.72.[50-51])
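A minimal sketch of putting these properties to use (the node names and ping target are illustrative assumptions, not part of this page): reserve nodes exposing the ib_rate='56' OAR property, then check the IP-over-InfiniBand interface and reach another reserved node through its -ib0 address.

  # reserve two nodes with a 56 Gbps InfiniBand card (illustrative walltime)
  oarsub -I -p "ib_rate='56'" -l nodes=2,walltime=1:00:00

  # on a reserved node: show the IPoIB interface and its 172.18.x.x address
  ip addr show ib0

  # reach another reserved node over the InfiniBand fabric (hypothetical target)
  ping -c 3 graoully-2-ib0.nancy.grid5000.fr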
Switch
- 36-port Mellanox InfiniBand SX6036
- Documentation
- 36 FDR (56Gb/s) ports in a 1U switch
- 4.032Tb/s switching capacity
- FDR/FDR10 support for Forward Error Correction (FEC)
Interconnection
The Infiniband network is physically isolated from the Ethernet networks. Therefore, the Ethernet network emulated over Infiniband is isolated as well. There is no interconnection at either the L2 or L3 layer.