Grenoble:Hardware

Summary

4 clusters, 52 nodes, 1648 cores, 98.4 TFLOPS

| Cluster | Access condition | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network | Accelerators |
|---------|------------------|-----------------|-------|-----|-------|--------|---------|---------|--------------|
| dahu | | 2018-03-22 | 32 | 2 x Intel Xeon Gold 6130 | 16 cores/CPU | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | |
| drac | testing queue, exotic job type | 2020-10-05 | 12 | 2 x POWER8NVL 1.0 | 10 cores/CPU | 128 GiB | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100 |
| troll | | 2019-12-23 | 4 | 2 x Intel Xeon Gold 5218 | 16 cores/CPU | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path | |
| yeti | | 2018-01-16 | 4 | 4 x Intel Xeon Gold 6130 | 16 cores/CPU | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path | |

*: disk is reservable
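
Reservable disks are not allocated with a node by default; they must be requested explicitly as additional OAR resources. A minimal sketch, assuming the resource expression documented on the Grid'5000 Disk reservation page (check that page for the exact syntax):

fgrenoble: oarsub -I -p "cluster='yeti'" -l {"type='disk' or type='default'"}/host=1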

Clusters in default queue

dahu

32 nodes, 64 CPUs, 1024 cores

Reservation example:
fgrenoble: oarsub -p "cluster='dahu'" -I
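
For multi-node or fixed-duration jobs, the usual OAR resource options apply. A sketch (the host count and walltime here are illustrative):

fgrenoble: oarsub -p "cluster='dahu'" -l host=2,walltime=1:30 -I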

Model: Dell PowerEdge C6420
Date of arrival: 2018-03-22
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 192 GiB
Storage:
  • 240 GB SSD SATA Samsung MZ7KM240HMHQ0D3 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
  • 480 GB SSD SATA Samsung MZ7KM480HMHQ0D3 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-4)
  • 4.0 TB HDD SATA Seagate ST4000NM0265-2DC (dev: /dev/sdc, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-5)
Network:
  • eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
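
Once on a reserved node, the state of the Omni-Path interface can be checked with standard tools (a sketch; opainfo assumes Intel's Omni-Path utilities are present in the environment):

dahu: ip -br link show ib0     # kernel view of the ib0 interface
dahu: opainfo                  # Omni-Path port state and link rate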

troll

4 nodes, 8 CPUs, 128 cores

Reservation example:
fgrenoble: oarsub -p "cluster='troll'" -I

Model: Dell PowerEdge R640
Date of arrival: 2019-12-23
CPU: Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 384 GiB + 1.5 TiB PMEM (persistent memory; see the inspection sketch after the network list)
Storage:
  • 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/nvme0n1, by-path: /dev/disk/by-path/pci-0000:d8:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core
  • eth1/eno2, Ethernet, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
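
The 1.5 TiB of PMEM is exposed as non-volatile DIMM regions. A minimal sketch for inspecting it from a reserved node, assuming the ndctl utility is available in the deployed environment:

troll: ndctl list -R -u    # persistent-memory regions, human-readable units
troll: ndctl list -N -u    # namespaces, if any are configured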

yeti

4 nodes, 16 CPUs, 256 cores, split into the two groups below due to hardware differences between nodes

yeti-[1-2,4] (3 nodes, 12 CPUs, 192 cores)
Reservation example:
fgrenoble: oarsub -p "cluster='yeti'" -I

Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • 480 GB SSD SAS Intel SSDSC2KG480G7R (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/sdd*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/nvme0n1, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)

*: the disk block device name (/dev/sd?) may vary in deployed environments; prefer the stable by-path identifier (see the example below).
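
For example, a reservable disk can be addressed through its stable by-path link instead of its /dev/sd? name (a sketch using the first identifier listed above):

yeti: lsblk /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0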

Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti-3 (1 node, 4 CPUs, 64 cores)
Reservation example:
fgrenoble: oarsub -p "cluster='yeti'" -I

Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • 480 GB SSD SAS Intel SSDSC2KG480G8R (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/sdd*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/nvme0n1, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)

*: the disk block device name (/dev/sd?) may vary in deployed environments; prefer the stable by-path identifier.

Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

Clusters in testing queue

drac

12 nodes, 24 CPUs, 240 cores

Access condition: testing queue, exotic job type
Reservation example:
fgrenoble: oarsub -q testing -t exotic -p "cluster='drac'" -I

Model: IBM PowerNV S822LC (8335-GTB)
Date of arrival: 2020-10-05
CPU: POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU)
Memory: 128 GiB
Storage:
  • 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-1) (primary disk)
  • 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-2)
Network:
  • eth0/enP1p1s0f0, Ethernet, configured rate: 10 Gbps, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x
  • eth1/enP1p1s0f1, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth2/enP1p1s0f2, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth3/enP1p1s0f3, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth4/enP9p7s0f0, Ethernet, configured rate: 1 Gbps, model: Broadcom NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/enP9p7s0f1, Ethernet, model: Broadcom NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
  • ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
GPU: 4 x Nvidia Tesla P100-SXM2-16GB
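
Once on a drac node, the four GPUs can be enumerated with NVIDIA's standard tool (a sketch, assuming the NVIDIA driver is present in the deployed environment):

drac: nvidia-smi -L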

Last generated from the Grid'5000 Reference API on 2020-12-16 (commit fda405fcc)