Revision as of 17:35, 16 May 2024
See also: Network topology for Grenoble
Summary
- 5 clusters
- 54 nodes
- 1744 CPU cores
- 48 GPUs
- 172032 GPU cores
- 12.25 TiB RAM + 6.0 TiB PMEM
- 88 SSDs and 68 HDDs on nodes (total: 228.53 TB)
- 101.9 TFLOPS (excluding GPUs)
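The aggregate figures above follow from the per-cluster table below: total CPU cores is the sum over clusters of nodes × CPUs/node × cores/CPU, and the 48 GPUs are drac's 12 nodes × 4 Tesla P100 each. A quick cross-check sketch (the figure of 3584 CUDA cores per P100 is Nvidia's published spec, not stated on this page):

```python
# Cross-check the summary totals from the per-cluster figures below.
clusters = {
    # name: (nodes, cpus_per_node, cores_per_cpu)
    "dahu":   (32, 2, 16),
    "drac":   (12, 2, 10),
    "servan": (2,  2, 24),
    "troll":  (4,  2, 16),
    "yeti":   (4,  4, 16),
}

nodes = sum(n for n, _, _ in clusters.values())
cpu_cores = sum(n * c * k for n, c, k in clusters.values())

# drac: 4 Nvidia Tesla P100 per node; 3584 CUDA cores per P100
# (published P100 spec, not stated on this page).
gpus = 12 * 4
gpu_cores = gpus * 3584

print(nodes, cpu_cores, gpus, gpu_cores)  # 54 1744 48 172032
```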
Clusters summary

Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs | CPU Name | Cores | Architecture | Memory | Storage | Network | Accelerators
dahu | | 2018-03-22 | 2017-12-12 | 32 | 2 | Intel Xeon Gold 6130 | 16 cores/CPU | x86_64 | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path |
drac | exotic job type | 2020-10-05 | 2016-10-17 | 12 | 2 | IBM POWER8NVL 1.0 | 10 cores/CPU | ppc64le | 128 GiB | 1.0 TB HDD + 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100 (16 GiB)
servan | exotic job type | 2021-12-15 | 2021-12-10 | 2 | 2 | AMD EPYC 7352 | 24 cores/CPU | x86_64 | 128 GiB | 1.6 TB SSD + 1.6 TB SSD | 25 Gbps + 2 x 100 Gbps FPGA/Ethernet | Xilinx Alveo U200
troll | exotic job type | 2019-12-23 | 2019-11-21 | 4 | 2 | Intel Xeon Gold 5218 | 16 cores/CPU | x86_64 | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 25 Gbps (SR-IOV) + 100 Gbps Omni-Path |
yeti | exotic job type | 2018-01-16 | 2017-12-26 | 4 | 4 | Intel Xeon Gold 6130 | 16 cores/CPU | x86_64 | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path |
*: disk is reservable
**: crossed GPUs are not supported by Grid'5000 default environments
***: OPA (Omni-Path Architecture) is currently not supported on the Debian 12 environment
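The Access Condition column above determines the oarsub form used in the reservation examples below: clusters flagged "exotic job type" must be submitted with `-t exotic`, while dahu needs only `-p <cluster>`. A small helper sketch; the `-t`, `-p`, and `-I` flags are taken from the examples on this page, and `-l walltime=...` is standard OAR resource syntax (check the OAR documentation before relying on it):

```python
# Build an interactive oarsub command for a Grenoble cluster, adding
# `-t exotic` for clusters whose access condition is "exotic job type".
EXOTIC = {"drac", "servan", "troll", "yeti"}  # per the table above

def oarsub_cmd(cluster: str, walltime: str = "1:00:00") -> str:
    parts = ["oarsub"]
    if cluster in EXOTIC:
        parts += ["-t", "exotic"]
    parts += ["-p", cluster, "-l", f"walltime={walltime}", "-I"]
    return " ".join(parts)

print(oarsub_cmd("dahu"))  # oarsub -p dahu -l walltime=1:00:00 -I
print(oarsub_cmd("drac"))  # oarsub -t exotic -p drac -l walltime=1:00:00 -I
```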
dahu

32 nodes, 64 cpus, 1024 cores, split as follows due to differences between nodes (json)

Reservation example (fgrenoble): oarsub -p dahu -I
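The "(json)" mentions on this page refer to each cluster's machine-readable description in the Grid'5000 Reference API, which is how this page is generated. A sketch of fetching a node description, assuming the documented `/stable/sites/<site>/clusters/<cluster>/nodes/<node>` endpoint layout (network access to Grid'5000, and credentials when outside the testbed, are required for the actual fetch):

```python
import json
import urllib.request

API = "https://api.grid5000.fr/stable"  # Reference API root

def node_url(site: str, cluster: str, node: str) -> str:
    """URL of a node's JSON description in the Reference API."""
    return f"{API}/sites/{site}/clusters/{cluster}/nodes/{node}"

def fetch_node(site: str, cluster: str, node: str) -> dict:
    # Only works with access to Grid'5000 (credentials needed from outside).
    with urllib.request.urlopen(node_url(site, cluster, node)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Print the URL describing dahu-1; fetch_node() would return its JSON.
    print(node_url("grenoble", "dahu", "dahu-1"))
```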
- dahu-[1,4-32] (30 nodes, 60 cpus, 960 cores)
Model: Dell PowerEdge C6420
Manufacturing date: 2017-12-12
Date of arrival: 2018-03-22
CPU: Intel Xeon Gold 6130 (Skylake), x86_64, 2.10GHz, 2 CPUs/node, 16 cores/CPU
Memory: 192 GiB
Storage:
- disk0, 240 GB SSD SATA Samsung MZ7KM240HMHQ0D3 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
- disk1, 480 GB SSD SATA Samsung MZ7KM480HMHQ0D3 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-4)
- disk2, 4.0 TB HDD SATA Seagate ST4000NM0265-2DC (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-5)
Network:
- eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
- dahu-2 (1 node, 2 cpus, 32 cores)
Model: Dell PowerEdge C6420
Manufacturing date: 2017-12-12
Date of arrival: 2018-03-22
CPU: Intel Xeon Gold 6130 (Skylake), x86_64, 2.10GHz, 2 CPUs/node, 16 cores/CPU
Memory: 192 GiB
Storage:
- disk0, 240 GB SSD SATA Samsung MZ7KM240HMHQ0D3 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
- disk1, 480 GB SSD SATA Samsung MZ7KM480HMHQ0D3 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-4)
- disk2, 4.0 TB HDD SATA Toshiba TOSHIBA MG08ADA4 (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-5)
Network:
- eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
- dahu-3 (1 node, 2 cpus, 32 cores)
Model: Dell PowerEdge C6420
Manufacturing date: 2017-12-12
Date of arrival: 2018-03-22
CPU: Intel Xeon Gold 6130 (Skylake), x86_64, 2.10GHz, 2 CPUs/node, 16 cores/CPU
Memory: 192 GiB
Storage:
- disk0, 240 GB SSD SATA Samsung MZ7KM240HMHQ0D3 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
- disk1, 480 GB SSD SATA Samsung MZ7KM480HMHQ0D3 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-4)
- disk2, 4.0 TB HDD SATA Seagate ST4000NM018B-2TF (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-5)
Network:
- eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
drac

12 nodes, 24 cpus, 240 cores (json)

Reservation example (fgrenoble): oarsub -t exotic -p drac -I
Access condition: exotic job type
Model: IBM PowerNV S822LC (8335-GTB)
Manufacturing date: 2016-10-17
Date of arrival: 2020-10-05
CPU: IBM POWER8NVL 1.0 (POWER8), ppc64le, 2 CPUs/node, 10 cores/CPU
Memory: 128 GiB
Storage:
- disk0, 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-1) (primary disk)
- disk1, 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-2)
Network:
- eth0/enP1p1s0f0, Ethernet, configured rate: 10 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x
- eth1/enP1p1s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
- eth2/enP1p1s0f2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
- eth3/enP1p1s0f3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
- eth4/enP9p7s0f0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- eth5/enP9p7s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
- ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
GPU: 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB), compute capability: 6.0
Note: This cluster is defined as exotic. Please read the exotic page for more information.
servan

2 nodes, 4 cpus, 96 cores (json)

Reservation example (fgrenoble): oarsub -t exotic -p servan -I
Access condition: exotic job type
Model: Dell PowerEdge R7525
Manufacturing date: 2021-12-10
Date of arrival: 2021-12-15
CPU: AMD EPYC 7352 (Zen 2), x86_64, 2 CPUs/node, 24 cores/CPU
Memory: 128 GiB
Storage:
- disk0, 1.6 TB SSD NVME Dell Dell Ent NVMe v2 AGN MU U.2 1.6TB (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c3:00.0-nvme-1) (primary disk)
- disk1, 1.6 TB SSD NVME Dell Dell Ent NVMe v2 AGN MU U.2 1.6TB (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c4:00.0-nvme-1)
Network:
- eth0/eno33, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller E810-C for SFP, driver: ice
- eth1/eno34, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
- eth2/eno35, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
- eth3/eno36, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
- fpga0, FPGA/Ethernet, configured rate: 100 Gbps, model: Xilinx Alveo U200 QSFP28 (100GbE), driver: n/a
- fpga1, FPGA/Ethernet, configured rate: 100 Gbps, model: Xilinx Alveo U200 QSFP28 (100GbE), driver: n/a
FPGA: Xilinx Alveo U200
Note: This cluster is defined as exotic. Please read the exotic page for more information.
troll

4 nodes, 8 cpus, 128 cores (json)

Reservation example (fgrenoble): oarsub -t exotic -p troll -I
Access condition: exotic job type
Model: Dell PowerEdge R640
Manufacturing date: 2019-11-21
Date of arrival: 2019-12-23
CPU: Intel Xeon Gold 5218 (Cascade Lake-SP), x86_64, 2.30GHz, 2 CPUs/node, 16 cores/CPU
Memory: 384 GiB + 1.5 TiB PMEM
Storage:
- disk0, 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 1.6 TB SSD NVME Dell Samsung PM1735 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:d8:00.0-nvme-1)
Network:
- eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
- eth1/eno2, Ethernet, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
Note: This cluster is defined as exotic. Please read the exotic page for more information.
yeti

4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)

Reservation example (fgrenoble): oarsub -t exotic -p yeti -I
- yeti-1 (1 node, 4 cpus, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Manufacturing date: 2017-12-26
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake), x86_64, 2.10GHz, 4 CPUs/node, 16 cores/CPU
Memory: 768 GiB
Storage:
- disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
- disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
- disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
- disk4, 1.6 TB SSD NVME Dell Dell Express Flash PM1725b 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
- disk5, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
- yeti-2 (1 node, 4 cpus, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Manufacturing date: 2017-12-26
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake), x86_64, 2.10GHz, 4 CPUs/node, 16 cores/CPU
Memory: 768 GiB
Storage:
- disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
- disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
- disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
- disk4, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
- disk5, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
- yeti-3 (1 node, 4 cpus, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Manufacturing date: 2017-12-26
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake), x86_64, 2.10GHz, 4 CPUs/node, 16 cores/CPU
Memory: 768 GiB
Storage:
- disk0, 480 GB SSD SATA Intel SSDSC2KG480G8R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
- disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
- disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
- disk4, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
- disk5, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
- yeti-4 (1 node, 4 cpus, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Manufacturing date: 2017-12-26
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake), x86_64, 2.10GHz, 4 CPUs/node, 16 cores/CPU
Memory: 768 GiB
Storage:
- disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
- disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
- disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
- disk4, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:ed:00.0-nvme-1)
- disk5, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
Note: This cluster is defined as exotic. Please read the exotic page for more information.
Last generated from the Grid'5000 Reference API on 2024-05-16 (commit 037c3db019)