Grenoble:Hardware

See also: Network topology for Grenoble

Summary

5 clusters, 54 nodes, 1744 cores, 101.9 TFLOPS

Cluster | Access Condition               | Date of arrival | Nodes | CPU                      | Cores        | Memory                 | Storage                                       | Network                               | Accelerators
dahu    |                                | 2018-03-22      | 32    | 2 x Intel Xeon Gold 6130 | 16 cores/CPU | 192 GiB                | 240 GB SSD + 480 GB SSD + 4.0 TB HDD          | 10 Gbps + 100 Gbps Omni-Path          |
drac    | exotic job type                | 2020-10-05      | 12    | 2 x Power POWER8NVL 1.0  | 10 cores/CPU | 128 GiB                | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD               | 10 Gbps + 2 x 100 Gbps InfiniBand     | 4 x Nvidia Tesla P100 (16 GiB)
servan  | testing queue, exotic job type | 2021-12-15      | 2     | 2 x AMD EPYC 7352        | 24 cores/CPU | 128 GiB                | 1 x 1.6 TB SSD + 1 x 1.6 TB SSD               | 25 Gbps                               |
troll   | exotic job type                | 2019-12-23      | 4     | 2 x Intel Xeon Gold 5218 | 16 cores/CPU | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD                       | 25 Gbps (SR-IOV) + 100 Gbps Omni-Path |
yeti    | exotic job type                | 2018-01-16      | 4     | 4 x Intel Xeon Gold 6130 | 16 cores/CPU | 768 GiB                | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path          |

*: disk is reservable

Clusters in the default queue

dahu

32 nodes, 64 cpus, 1024 cores (json: https://public-api.grid5000.fr/stable/sites/grenoble/clusters/dahu/nodes.json?pretty=1)

Reservation example:

fgrenoble$ oarsub -p "cluster='dahu'" -I
Model: Dell PowerEdge C6420
Date of arrival: 2018-03-22
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 192 GiB
Storage:
  • disk0, 240 GB SSD SATA Samsung MZ7KM240HMHQ0D3 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
  • disk1, 480 GB SSD SATA Samsung MZ7KM480HMHQ0D3 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-4)
  • disk2, 4.0 TB HDD SATA Seagate ST4000NM0265-2DC (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-5)
Network:
  • eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
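
These hardware details come from the Grid'5000 Reference API, so they can also be queried programmatically. A minimal sketch using curl and jq against the nodes.json endpoint linked above; the .items[].uid and .architecture.nb_cores paths follow the usual Reference API layout, but verify them against the pretty-printed output first:

fgrenoble$ curl -s "https://public-api.grid5000.fr/stable/sites/grenoble/clusters/dahu/nodes.json?pretty=1" \
             | jq -r '.items[] | "\(.uid): \(.architecture.nb_cores) cores"'
# prints one line per node, e.g. "dahu-1: 32 cores"

For larger experiments the standard OAR resource syntax applies as usual, e.g. oarsub -p "cluster='dahu'" -l nodes=4,walltime=2:00:00 -I for four nodes during two hours.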

drac

12 nodes, 24 cpus, 240 cores (json: https://public-api.grid5000.fr/stable/sites/grenoble/clusters/drac/nodes.json?pretty=1)

Reservation example:

fgrenoble$ oarsub -t exotic -p "cluster='drac'" -I
Access condition: exotic job type
Model: IBM PowerNV S822LC (8335-GTB)
Date of arrival: 2020-10-05
CPU: POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-1) (primary disk)
  • disk1, 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-2)
Network:
  • eth0/enP1p1s0f0, Ethernet, configured rate: 10 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x
  • eth1/enP1p1s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth2/enP1p1s0f2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth3/enP1p1s0f3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth4/enP9p7s0f0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/enP9p7s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
  • ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
GPU: 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB)
Compute capability: 6.0
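
Once an interactive job lands on a drac node, the four P100 boards can be checked directly with nvidia-smi (assuming the environment includes the Nvidia driver, as the standard one does on Grid'5000 GPU nodes):

drac$ nvidia-smi --query-gpu=index,name,memory.total --format=csv
# expected: four "Tesla P100-SXM2-16GB" lines, one per GPU

Note that drac is a ppc64le (POWER8) machine: x86-64 binaries and container images will not run on it.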

troll

4 nodes, 8 cpus, 128 cores (json: https://public-api.grid5000.fr/stable/sites/grenoble/clusters/troll/nodes.json?pretty=1)

Reservation example:

fgrenoble$ oarsub -t exotic -p "cluster='troll'" -I
Access condition: exotic job type
Model: Dell PowerEdge R640
Date of arrival: 2019-12-23
CPU: Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 384 GiB + 1.5 TiB PMEM
Storage:
  • disk0, 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 1.6 TB SSD NVME Dell Samsung PM1735 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:d8:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
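
troll's distinguishing feature is the 1.5 TiB of persistent memory (PMEM) per node. A hedged sketch for inspecting it once a node is reserved, assuming the ndctl and ipmctl tools are available on the node (they may need to be installed; root is obtained with sudo-g5k):

troll$ sudo-g5k                           # become root on the reserved node
root@troll$ ndctl list --human            # NVDIMM namespaces, if any are provisioned
root@troll$ ipmctl show -memoryresources  # split between Memory Mode and App Direct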

yeti

4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json: https://public-api.grid5000.fr/stable/sites/grenoble/clusters/yeti/nodes.json?pretty=1)

Reservation example:

fgrenoble$ oarsub -t exotic -p "cluster='yeti'" -I
yeti-1 (1 node, 4 cpus, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 1.6 TB SSD NVME Dell Express Flash PM1725b 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
  • disk5, 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti-[2,4] (2 nodes, 8 cpus, 128 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
  • disk5, 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti-3 (1 node, 4 cpus, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G8R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
  • disk5, 1.6 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
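
The disks marked reservable above are only attached to a job when they are explicitly requested as OAR disk resources; the authoritative syntax is on the Disk_reservation page. A hedged sketch following the pattern documented there (verify the exact resource expression before relying on it):

fgrenoble$ oarsub -t exotic -I -l "{(type='disk' or type='default') and cluster='yeti'}/host=1"
# requests one yeti node together with its reservable disks; without disk
# resources in the request, /dev/disk1 to /dev/disk3 stay detached from the job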

Clusters in the testing queue

servan

2 nodes, 4 cpus, 96 cores (json: https://public-api.grid5000.fr/stable/sites/grenoble/clusters/servan/nodes.json?pretty=1)

Reservation example:

fgrenoble$ oarsub -q testing -t exotic -p "cluster='servan'" -I
Access condition: testing queue, exotic job type
Model: Dell PowerEdge R7525
Date of arrival: 2021-12-15
CPU: AMD EPYC 7352 (Zen 2, 2 CPUs/node, 24 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 1.6 TB SSD NVME Dell Ent NVMe v2 AGN MU U.2 1.6TB (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c3:00.0-nvme-1) (primary disk)
  • disk1, 1.6 TB SSD NVME Dell Ent NVMe v2 AGN MU U.2 1.6TB (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c4:00.0-nvme-1)
Network:
  • eth0/eno33, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller E810-C for SFP, driver: ice
  • eth1/eno34, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
  • eth2/eno35, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
  • eth3/eno36, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment
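
Because servan is still in the testing queue, it is typically used with a deployed image when root access is needed. A minimal sketch, assuming kadeploy3 and an environment name such as debian11-x64-base (list the names actually registered with kaenv3 -l):

fgrenoble$ oarsub -q testing -t exotic -t deploy -p "cluster='servan'" -I
fgrenoble$ kadeploy3 -f $OAR_NODE_FILE -e debian11-x64-base -k
# -f: file listing the reserved nodes, -e: environment to deploy, -k: install your SSH key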

Last generated from the Grid'5000 Reference API on 2022-03-30 (commit 07ffde5274: https://github.com/grid5000/reference-repository/commit/07ffde5274)