See also: Network topology for Grenoble
Summary
- 5 clusters
- 54 nodes
- 1744 CPU cores
- 48 GPUs
- 12.25 TiB RAM + 6.0 TiB PMEM
- 88 SSDs and 68 HDDs on nodes (total: 228.53 TB)
- 101.9 TFLOPS (excluding GPUs)
Clusters

| Cluster | Access condition | Date of arrival | Nodes | CPU | Memory | Storage | Network | Accelerators |
|---------|------------------|-----------------|-------|-----|--------|---------|---------|--------------|
| dahu | | 2018-03-22 | 32 | 2 x Intel Xeon Gold 6130, 16 cores/CPU, x86_64 | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | |
| drac | exotic job type | 2020-10-05 | 12 | 2 x POWER8NVL 1.0, 10 cores/CPU, ppc64le | 128 GiB | 1.0 TB HDD + 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100 (16 GiB), compute capability 6.0, Pascal |
| servan | exotic job type | 2021-12-15 | 2 | 2 x AMD EPYC 7352, 24 cores/CPU, x86_64 | 128 GiB | 1.6 TB SSD + 1.6 TB SSD | 25 Gbps + 2 x 100 Gbps Ethernet-FPGA | 1 x Xilinx Alveo U200 |
| troll | exotic job type | 2019-12-23 | 4 | 2 x Intel Xeon Gold 5218, 16 cores/CPU, x86_64 | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 25 Gbps (SR-IOV) + 100 Gbps Omni-Path | |
| yeti | exotic job type | 2018-01-16 | 4 | 4 x Intel Xeon Gold 6130, 16 cores/CPU, x86_64 | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path | |

*: disk is reservable
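This page, including the table above, is generated from the Grid'5000 Reference API, so the same information can be retrieved as JSON. Below is a minimal sketch using curl and jq from a Grid'5000 frontend; the endpoint path follows the public Reference API layout (https://api.grid5000.fr/stable/...) and is given as an assumption, as is the availability of jq (outside Grid'5000, the API additionally requires authentication).

  # List the Grenoble clusters known to the Reference API
  curl -s https://api.grid5000.fr/stable/sites/grenoble/clusters | jq -r '.items[].uid'

  # Dump the full per-node description of the dahu cluster
  curl -s https://api.grid5000.fr/stable/sites/grenoble/clusters/dahu/nodes | jq .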
dahu

32 nodes, 64 cpus, 1024 cores (json)

Reservation example (on the fgrenoble frontend):
oarsub -p dahu -I

Model: Dell PowerEdge C6420
Date of arrival: 2018-03-22
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 192 GiB
Storage:
- disk0, 240 GB SSD SATA Samsung MZ7KM240HMHQ0D3 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
- disk1, 480 GB SSD SATA Samsung MZ7KM480HMHQ0D3 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-4)
- disk2, 4.0 TB HDD SATA Seagate ST4000NM0265-2DC (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-5)
Network:
- eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
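Beyond the single-node interactive job shown above, node count and walltime can be requested explicitly. A minimal sketch, assuming the usual OAR resource syntax used on Grid'5000 (the node count, walltime, and script name are illustrative):

  # Reserve 4 dahu nodes for 2 hours, interactively
  oarsub -p dahu -l host=4,walltime=2:00:00 -I

  # Same resources in batch mode, running a (hypothetical) experiment script
  oarsub -p dahu -l host=4,walltime=2:00:00 "./run_experiment.sh"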
drac

12 nodes, 24 cpus, 240 cores (json)

Reservation example (on the fgrenoble frontend):
oarsub -t exotic -p drac -I

Access condition: exotic job type
Model: IBM PowerNV S822LC (8335-GTB)
Date of arrival: 2020-10-05
CPU: POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU)
Memory: 128 GiB
Storage:
- disk0, 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-1) (primary disk)
- disk1, 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-2)
Network:
- eth0/enP1p1s0f0, Ethernet, configured rate: 10 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x
- eth1/enP1p1s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
- eth2/enP1p1s0f2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
- eth3/enP1p1s0f3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
- eth4/enP9p7s0f0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- eth5/enP9p7s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
- ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
GPU: 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB), compute capability: 6.0, micro-architecture: Pascal
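Because each drac node hosts four P100 GPUs, jobs that need only part of a node can request GPU resources directly. A minimal sketch, assuming GPU-level OAR resources are enabled for this cluster (counts and walltime are illustrative):

  # Reserve 2 of the 4 GPUs of a single drac node for 1 hour
  oarsub -t exotic -p drac -l gpu=2,walltime=1:00:00 -I

  # Reserve 2 full drac nodes (8 GPUs in total)
  oarsub -t exotic -p drac -l host=2,walltime=1:00:00 -I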
servan

2 nodes, 4 cpus, 96 cores (json)

Reservation example (on the fgrenoble frontend):
oarsub -t exotic -p servan -I

Access condition: exotic job type
Model: Dell PowerEdge R7525
Date of arrival: 2021-12-15
CPU: AMD EPYC 7352 (Zen 2, 2 CPUs/node, 24 cores/CPU)
Memory: 128 GiB
Storage:
- disk0, 1.6 TB SSD NVME Dell Ent NVMe v2 AGN MU U.2 1.6TB (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c3:00.0-nvme-1) (primary disk)
- disk1, 1.6 TB SSD NVME Dell Ent NVMe v2 AGN MU U.2 1.6TB (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c4:00.0-nvme-1)
Network:
- eth0/eno33, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller E810-C for SFP, driver: ice
- eth1/eno34, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
- eth2/eno35, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
- eth3/eno36, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
- fpga0, Ethernet-FPGA, configured rate: 100 Gbps, model: Xilinx Alveo U200 QSFP28 (100GbE), driver: xrt
- fpga1, Ethernet-FPGA, configured rate: 100 Gbps, model: Xilinx Alveo U200 QSFP28 (100GbE), driver: xrt
FPGA: Xilinx Alveo U200
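With only two servan nodes, a whole-cluster FPGA experiment is easy to express in batch mode. A minimal sketch, assuming standard OAR options (the walltime and script name are illustrative; the command runs on the first node of the reservation):

  # Reserve both servan nodes for 4 hours and run a job script
  oarsub -t exotic -p servan -l host=2,walltime=4:00:00 "./run_fpga_job.sh"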
troll

4 nodes, 8 cpus, 128 cores (json)

Reservation example (on the fgrenoble frontend):
oarsub -t exotic -p troll -I

Access condition: exotic job type
Model: Dell PowerEdge R640
Date of arrival: 2019-12-23
CPU: Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 384 GiB + 1.5 TiB PMEM
Storage:
- disk0, 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 1.6 TB SSD NVME Dell Samsung PM1735 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:d8:00.0-nvme-1)
Network:
- eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
- eth1/eno2, Ethernet, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
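If a troll node is needed at a precise time (for instance to stage a PMEM experiment), an OAR advance reservation can be used instead of an immediate interactive job. A minimal sketch, assuming the -r option behaves as on other Grid'5000 sites (the date and walltime are arbitrary examples):

  # Book one troll node for 3 hours, starting at a fixed date
  oarsub -t exotic -p troll -l host=1,walltime=3:00:00 -r "2022-09-01 09:00:00"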
yeti

4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)

Reservation example (on the fgrenoble frontend):
oarsub -t exotic -p yeti -I

yeti-1 (1 node, 4 cpus, 64 cores)

Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
- disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
- disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
- disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
- disk4, 1.6 TB SSD NVME Dell Express Flash PM1725b 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
- disk5, 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti-[2,4] (2 nodes, 8 cpus, 128 cores)

Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
- disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
- disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
- disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
- disk4, 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
- disk5, 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti-3 (1 node, 4 cpus, 64 cores)

Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
- disk0, 480 GB SSD SATA Intel SSDSC2KG480G8R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
- disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
- disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
- disk4, 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
- disk5, 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
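The 2.0 TB HDDs marked (reservable) above are not attached by default; they are obtained through Grid'5000's disk reservation mechanism. A minimal sketch, assuming the 'disk' OAR resource type and the '+'-combined resource expressions described in the Grid'5000 disk reservation documentation (counts, walltime, and the targeted host are illustrative):

  # Reserve one yeti node together with 2 of its reservable disks for 2 hours
  oarsub -t exotic -p yeti \
    -l "{type='disk'}/host=1/disk=2+{type='default'}/host=1,walltime=2:00:00" -I

  # Target a specific node, e.g. yeti-1, by filtering on the host property
  oarsub -t exotic -p "host='yeti-1.grenoble.grid5000.fr'" -I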
Last generated from the Grid'5000 Reference API on 2022-08-23 (commit bf57134755)