Grenoble:Hardware


Revision as of 14:38, 7 June 2022

See also: Network topology for Grenoble

Summary

  • 5 clusters
  • 54 nodes
  • 1744 CPU cores
  • 48 GPUs
  • 12.25 TiB RAM + 6.0 TiB PMEM
  • 88 SSDs and 68 HDDs on nodes (total: 228.53 TB)
  • 101.9 TFLOPS (excluding GPUs)
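The TFLOPS figure is a theoretical peak summed over all clusters. As a rough illustration of how such a number is derived, here is a back-of-envelope estimate for dahu alone; the 32 FLOP/cycle figure (two AVX-512 FMA units on the Xeon Gold 6130) and the use of the 2.10 GHz base clock are assumptions not stated on this page, and real AVX-512 clocks run lower, so this is an upper bound:

```python
# Peak double-precision FLOPS estimate for the dahu cluster only.
# Assumed (not from this page): 32 DP FLOP/cycle/core
# (8 doubles x 2 for FMA x 2 AVX-512 FMA units), counted at base clock.
nodes, cpus_per_node, cores_per_cpu = 32, 2, 16
base_ghz = 2.10
flop_per_cycle = 32

cores = nodes * cpus_per_node * cores_per_cpu            # 1024 cores
peak_tflops = cores * base_ghz * flop_per_cycle / 1000   # GFLOPS -> TFLOPS

print(f"{cores} cores, ~{peak_tflops:.1f} TFLOPS peak (DP)")
# -> 1024 cores, ~68.8 TFLOPS peak (DP)
```

The remaining clusters (drac, servan, troll, yeti) contribute the rest of the site total; their per-core FLOP rates differ (troll's Gold 5218 has a single AVX-512 FMA unit, drac is POWER8, servan is Zen 2).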

Clusters

| Cluster | Access Condition | Date of arrival | Nodes | CPUs/node | CPU Name | Cores | Architecture | Memory | Storage | Network | Accelerators |
|---|---|---|---|---|---|---|---|---|---|---|---|
| dahu | | 2018-03-22 | 32 | 2 | Intel Xeon Gold 6130 | 16 cores/CPU | x86_64 | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | |
| drac | exotic job type | 2020-10-05 | 12 | 2 | POWER8NVL 1.0 | 10 cores/CPU | ppc64le | 128 GiB | 1.0 TB HDD + 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100 (16 GiB) |
| servan | testing queue, exotic job type | 2021-12-15 | 2 | 2 | AMD EPYC 7352 | 24 cores/CPU | x86_64 | 128 GiB | 1.6 TB SSD + 1.6 TB SSD | 25 Gbps + 2 x 100 Gbps Ethernet-FPGA | Xilinx Alveo U200 |
| troll | exotic job type | 2019-12-23 | 4 | 2 | Intel Xeon Gold 5218 | 16 cores/CPU | x86_64 | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 25 Gbps (SR‑IOV) + 100 Gbps Omni-Path | |
| yeti | exotic job type | 2018-01-16 | 4 | 4 | Intel Xeon Gold 6130 | 16 cores/CPU | x86_64 | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path | |

*: disk is reservable
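The Summary figures above can be cross-checked from the per-cluster rows in the table. A minimal sketch, using only numbers copied from the table:

```python
# Cross-check of the Summary counts against the cluster table:
# (nodes, CPUs/node, cores/CPU) for each cluster.
clusters = {
    "dahu":   (32, 2, 16),
    "drac":   (12, 2, 10),
    "servan": (2,  2, 24),
    "troll":  (4,  2, 16),
    "yeti":   (4,  4, 16),
}

total_nodes = sum(n for n, _, _ in clusters.values())
total_cores = sum(n * c * k for n, c, k in clusters.values())

print(total_nodes, total_cores)  # -> 54 1744, matching the Summary
```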

Clusters in the default queue

dahu

32 nodes, 64 cpus, 1024 cores (json)

Reservation example:

fgrenoble: oarsub -p dahu -I
Model: Dell PowerEdge C6420
Date of arrival: 2018-03-22
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 192 GiB
Storage:
  • disk0, 240 GB SSD SATA Samsung MZ7KM240HMHQ0D3 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
  • disk1, 480 GB SSD SATA Samsung MZ7KM480HMHQ0D3 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-4)
  • disk2, 4.0 TB HDD SATA Seagate ST4000NM0265-2DC (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-5)
Network:
  • eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
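The "(json)" links next to each cluster resolve to the Grid'5000 Reference API, which exposes these node descriptions as machine-readable resources. A sketch of building the dahu endpoint URL and parsing a response; the URL layout follows the documented per-site/per-cluster hierarchy, the API requires authentication in practice, and the payload below is an illustrative trimmed stand-in (field names such as `nb_cores` are assumptions, not captured output):

```python
import json

# Endpoint layout assumed from Grid'5000 Reference API conventions
# (stable branch, sites/{site}/clusters/{cluster}/nodes).
site, cluster = "grenoble", "dahu"
url = f"https://api.grid5000.fr/stable/sites/{site}/clusters/{cluster}/nodes"

# Illustrative, trimmed stand-in for an API response (NOT real output).
sample_response = json.dumps({
    "total": 2,
    "items": [
        {"uid": "dahu-1", "architecture": {"nb_cores": 32}},
        {"uid": "dahu-2", "architecture": {"nb_cores": 32}},
    ],
})

data = json.loads(sample_response)
uids = [node["uid"] for node in data["items"]]
cores = sum(node["architecture"]["nb_cores"] for node in data["items"])
print(uids, cores)
```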

drac

12 nodes, 24 cpus, 240 cores (json)

Reservation example:

fgrenoble: oarsub -t exotic -p drac -I
Access condition: exotic job type
Model: IBM PowerNV S822LC (8335-GTB)
Date of arrival: 2020-10-05
CPU: POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-1) (primary disk)
  • disk1, 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-2)
Network:
  • eth0/enP1p1s0f0, Ethernet, configured rate: 10 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x
  • eth1/enP1p1s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth2/enP1p1s0f2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth3/enP1p1s0f3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth4/enP9p7s0f0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/enP9p7s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
  • ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
GPU: 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB)
Compute capability: 6.0

troll

4 nodes, 8 cpus, 128 cores (json)

Reservation example:

fgrenoble: oarsub -t exotic -p troll -I
Access condition: exotic job type
Model: Dell PowerEdge R640
Date of arrival: 2019-12-23
CPU: Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 384 GiB + 1.5 TiB PMEM
Storage:
  • disk0, 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 1.6 TB SSD NVME Dell Samsung PM1735 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:d8:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti

4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)

Reservation example:

fgrenoble: oarsub -t exotic -p yeti -I
yeti-1 (1 node, 4 cpus, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 1.6 TB SSD NVME Dell Dell Express Flash PM1725b 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
  • disk5, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti-[2,4] (2 nodes, 8 cpus, 128 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
  • disk5, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti-3 (1 node, 4 cpus, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G8R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
  • disk5, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

Clusters in the testing queue

servan

2 nodes, 4 cpus, 96 cores (json)

Reservation example:

fgrenoble: oarsub -q testing -t exotic -p servan -I
Access condition: testing queue, exotic job type
Model: Dell PowerEdge R7525
Date of arrival: 2021-12-15
CPU: AMD EPYC 7352 (Zen 2, 2 CPUs/node, 24 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 1.6 TB SSD NVME Dell Dell Ent NVMe v2 AGN MU U.2 1.6TB (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c3:00.0-nvme-1) (primary disk)
  • disk1, 1.6 TB SSD NVME Dell Dell Ent NVMe v2 AGN MU U.2 1.6TB (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c4:00.0-nvme-1)
Network:
  • eth0/eno33, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller E810-C for SFP, driver: ice
  • eth1/eno34, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
  • eth2/eno35, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
  • eth3/eno36, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
  • fpga0, Ethernet-FPGA, configured rate: 100 Gbps, model: Xilinx Alveo U200 QSFP28 (100GbE), driver: xrt
  • fpga1, Ethernet-FPGA, configured rate: 100 Gbps, model: Xilinx Alveo U200 QSFP28 (100GbE), driver: xrt
FPGA: Xilinx Alveo U200
Last generated from the Grid'5000 Reference API on 2022-06-07 (commit 5d4b87f3e6)