Grenoble:Hardware

From Grid5000
Revision as of 12:42, 30 March 2022 by Pringot (talk | contribs)

See also: Network topology for Grenoble

Summary

5 clusters, 54 nodes, 1744 cores, 101.9 TFLOPS

Cluster | Access Condition | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network | Accelerators
dahu | | 2018-03-22 | 32 | 2 x Intel Xeon Gold 6130 | 16 cores/CPU | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path |
drac | exotic job type | 2020-10-05 | 12 | 2 x POWER8NVL 1.0 | 10 cores/CPU | 128 GiB | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100 (16 GiB)
servan | testing queue, exotic job type | 2021-12-15 | 2 | 2 x AMD EPYC 7352 | 24 cores/CPU | 128 GiB | 1 x 1.6 TB SSD + 1 x 1.6 TB SSD | 25 Gbps |
troll | exotic job type | 2019-12-23 | 4 | 2 x Intel Xeon Gold 5218 | 16 cores/CPU | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 25 Gbps (SR-IOV) + 100 Gbps Omni-Path |
yeti | exotic job type | 2018-01-16 | 4 | 4 x Intel Xeon Gold 6130 | 16 cores/CPU | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path |

*: disk is reservable
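As a quick sanity check, the node and core totals in the summary line can be recomputed from the per-cluster figures in the table. A minimal sketch (the per-cluster tuples are copied from the table above; this is not part of any Grid'5000 tooling):

```python
# (nodes, CPUs per node, cores per CPU) for each cluster, from the table above
clusters = {
    "dahu":   (32, 2, 16),
    "drac":   (12, 2, 10),
    "servan": (2, 2, 24),
    "troll":  (4, 2, 16),
    "yeti":   (4, 4, 16),
}

total_nodes = sum(n for n, _, _ in clusters.values())
total_cores = sum(n * cpus * cores for n, cpus, cores in clusters.values())
print(total_nodes, total_cores)  # 54 1744
```

These match the "54 nodes, 1744 cores" figure in the summary.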

Clusters in the default queue

dahu

32 nodes, 64 CPUs, 1024 cores

Reservation example:

fgrenoble$ oarsub -p "cluster='dahu'" -I

Model: Dell PowerEdge C6420
Date of arrival: 2018-03-22
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 192 GiB
Storage:
  • disk0, 240 GB SSD SATA Samsung MZ7KM240HMHQ0D3 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
  • disk1, 480 GB SSD SATA Samsung MZ7KM480HMHQ0D3 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-4)
  • disk2, 4.0 TB HDD SATA Seagate ST4000NM0265-2DC (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-5)
Network:
  • eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
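The by-path device names listed above encode the PCI bus address of the controller each disk hangs off, which is useful when matching a reservable disk to its physical slot. A minimal sketch of extracting that address (the helper name `pci_address` is illustrative, not part of Grid'5000 tooling):

```python
def pci_address(by_path: str) -> str:
    """Extract the PCI bus address from a /dev/disk/by-path name."""
    name = by_path.rsplit("/", 1)[-1]        # e.g. "pci-0000:00:11.5-ata-3"
    if not name.startswith("pci-"):
        raise ValueError(f"not a pci by-path name: {by_path}")
    # the address runs from after the "pci-" prefix up to the next "-"
    return name[len("pci-"):].split("-", 1)[0]

print(pci_address("/dev/disk/by-path/pci-0000:00:11.5-ata-3"))   # 0000:00:11.5
print(pci_address("/dev/disk/by-path/pci-0000:d8:00.0-nvme-1"))  # 0000:d8:00.0
```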

drac

12 nodes, 24 CPUs, 240 cores

Reservation example:

fgrenoble$ oarsub -t exotic -p "cluster='drac'" -I

Access condition: exotic job type
Model: IBM PowerNV S822LC (8335-GTB)
Date of arrival: 2020-10-05
CPU: POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-1) (primary disk)
  • disk1, 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-2)
Network:
  • eth0/enP1p1s0f0, Ethernet, configured rate: 10 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x
  • eth1/enP1p1s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth2/enP1p1s0f2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth3/enP1p1s0f3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth4/enP9p7s0f0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/enP9p7s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
  • ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
GPU: 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB)
Compute capability: 6.0

troll

4 nodes, 8 CPUs, 128 cores

Reservation example:

fgrenoble$ oarsub -t exotic -p "cluster='troll'" -I

Access condition: exotic job type
Model: Dell PowerEdge R640
Date of arrival: 2019-12-23
CPU: Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 384 GiB + 1.5 TiB PMEM
Storage:
  • disk0, 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 1.6 TB SSD NVME Dell Samsung PM1735 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:d8:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti

4 nodes, 16 CPUs, 256 cores, split as follows due to per-node differences

Reservation example:

fgrenoble$ oarsub -t exotic -p "cluster='yeti'" -I

yeti-1 (1 node, 4 CPUs, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 1.6 TB SSD NVME Dell Dell Express Flash PM1725b 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
  • disk5, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti-[2,4] (2 nodes, 8 CPUs, 128 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
  • disk5, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti-3 (1 node, 4 CPUs, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G8R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
  • disk5, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

Clusters in the testing queue

servan

2 nodes, 4 CPUs, 96 cores

Reservation example:

fgrenoble$ oarsub -q testing -t exotic -p "cluster='servan'" -I

Access condition: testing queue, exotic job type
Model: Dell PowerEdge R7525
Date of arrival: 2021-12-15
CPU: AMD EPYC 7352 (Zen 2, 2 CPUs/node, 24 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 1.6 TB SSD NVME Dell Dell Ent NVMe v2 AGN MU U.2 1.6TB (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c3:00.0-nvme-1) (primary disk)
  • disk1, 1.6 TB SSD NVME Dell Dell Ent NVMe v2 AGN MU U.2 1.6TB (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c4:00.0-nvme-1)
Network:
  • eth0/eno33, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller E810-C for SFP, driver: ice
  • eth1/eno34, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
  • eth2/eno35, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
  • eth3/eno36, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment

Last generated from the Grid'5000 Reference API on 2022-03-30 (commit 07ffde5274)