Nancy:Hardware

From Grid5000
Revision as of 11:21, 4 March 2024 by Pneyron (talk | contribs)

See also: Network topology for Nancy

Summary

  • 12 clusters
  • 291 nodes
  • 6872 CPU cores
  • 120 GPUs
  • 547840 GPU cores
  • 44.13 TiB RAM
  • 288 SSDs and 220 HDDs on nodes (total: 512.11 TB)
  • 309.6 TFLOPS (excluding GPUs)

Clusters

Cluster | Access condition | Date of arrival | Manufacturing date | Nodes | CPU | Memory | Storage | Network | Accelerators
graffiti | production queue | 2019-06-07 | 2019-05-27 | 13 | 2 x Intel Xeon Silver 4110, 8 cores/CPU, x86_64 | 128 GiB | 479 GB HDD | 10 Gbps | [1-12]: 4 x Nvidia RTX 2080 Ti (11 GiB); 13: 4 x Nvidia Quadro RTX 6000 (23 GiB)
grappe | production queue | 2020-08-20 | 2020-07-09 | 16 | 2 x Intel Xeon Gold 5218R, 20 cores/CPU, x86_64 | 96 GiB | 480 GB SSD + 8.0 TB HDD* | 25 Gbps |
grat | production queue | 2022-09-07 | 2022-06-22 | 1 | 2 x AMD EPYC 7513, 32 cores/CPU, x86_64 | 512 GiB | 3.84 TB SSD + 7 x 3.84 TB SSD | 25 Gbps (SR-IOV) | 8 x Nvidia A100 (40 GiB)
grele | production queue | 2017-06-26 | 2017-06-07 | 14 | 2 x Intel Xeon E5-2650 v4, 12 cores/CPU, x86_64 | 128 GiB | 299 GB HDD + 299 GB HDD | 10 Gbps (SR-IOV) + 100 Gbps Omni-Path | 2 x Nvidia GTX 1080 Ti (11 GiB)
grisou | default queue | 2016-01-04 | 2015-10-23 | 46 | 2 x Intel Xeon E5-2630 v3, 8 cores/CPU, x86_64 | 128 GiB | 600 GB HDD + 600 GB HDD | 1 Gbps + 4 x 10 Gbps (SR-IOV) |
gros | default queue | 2019-09-04 | 2019-07-16 | 124 | 1 x Intel Xeon Gold 5220, 18 cores/CPU, x86_64 | 96 GiB | 480 GB SSD + 960 GB SSD* | 2 x 25 Gbps (SR-IOV) |
grosminet | production queue | 2023-12-05 | 2023-11-30 | 1 | 4 x Intel Xeon Gold 6240L, 18 cores/CPU, x86_64 | 6.0 TiB | 1.6 TB SSD + 7 x 1.6 TB SSD | 25 Gbps (SR-IOV) |
grostiti | production queue | 2024-01-10 | 2015-10-23 | 1 | 4 x Intel Xeon E7-4850 v3, 14 cores/CPU, x86_64 | 1.5 TiB | 599 GB HDD + 4.0 TB HDD | 10 Gbps (SR-IOV) + 100 Gbps Omni-Path |
grouille | exotic job type | 2021-01-13 | 2020-12-07 | 2 | 2 x AMD EPYC 7452, 32 cores/CPU, x86_64 | 128 GiB | 1.92 TB SSD + 960 GB SSD* | 25 Gbps | 2 x Nvidia A100 (40 GiB)
grue | production queue | 2019-11-25 | 2019-11-15 | 5 | 2 x AMD EPYC 7351, 16 cores/CPU, x86_64 | 128 GiB | 479 GB HDD | 10 Gbps | 4 x Nvidia Tesla T4 (15 GiB)
gruss | production queue | 2021-08-26 | 2021-06-24 | 4 | 2 x AMD EPYC 7352, 24 cores/CPU, x86_64 | 256 GiB | 1.92 TB SSD | 25 Gbps | 2 x Nvidia A40 (45 GiB)
grvingt | production queue | 2018-04-11 | 2018-04-01 | 64 | 2 x Intel Xeon Gold 6130, 16 cores/CPU, x86_64 | 192 GiB | 1.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path |

*: disk is reservable
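A reservable disk is not part of a node's default resources: it must be requested explicitly at reservation time as an OAR resource of type disk, alongside the node itself. The authoritative syntax is given in the Grid'5000 Disk reservation tutorial; as a rough, unverified sketch of the documented pattern, reserving a grappe node together with its reservable disk might look like:

fnancy: oarsub -I -l "{(type='disk' or type='default') and cluster='grappe'}/host=1"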


Clusters in the default queue

grisou

46 nodes, 92 cpus, 736 cores (json)

Reservation example:

fnancy: oarsub -p grisou -I
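The -I flag opens an interactive shell on the reserved node; for unattended experiments, oarsub also accepts a resource request and a script to run in batch mode. For example (the script name here is only a placeholder), two grisou nodes for one hour:

fnancy: oarsub -p grisou -l nodes=2,walltime=1:00:00 ./my_experiment.sh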
Model: Dell PowerEdge R630
Manufacturing date: 2015-10-23
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell), x86_64, 2.40GHz, 2 CPUs/node, 8 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth2/enp4s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth3/enp4s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth4/eno3, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb (multi NICs example)
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
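The per-node details above (and the "(json)" links) are served by the Grid'5000 Reference API. From a frontend, the raw JSON description of an individual node can be fetched directly, for instance:

fnancy: curl -s https://api.grid5000.fr/stable/sites/nancy/clusters/grisou/nodes/grisou-1?pretty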

gros

124 nodes, 124 cpus, 2232 cores, split as follows due to differences between nodes (json)

Reservation example:

fnancy: oarsub -p gros -I
gros-[1-26,28-67,69-124] (122 nodes, 122 cpus, 2196 cores)
Model: Dell PowerEdge R640
Manufacturing date: 2019-07-16
Date of arrival: 2019-09-04
CPU: Intel Xeon Gold 5220 (Cascade Lake-SP), x86_64, 2.20GHz, 1 CPU/node, 18 cores/CPU
Memory: 96 GiB
Storage:
  • disk0, 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 960 GB SSD SATA Micron MTFDDAK960TDN (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:1:0) (reservable)
Network:
  • eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled (multi NICs example)

gros-27 (1 node, 1 cpu, 18 cores)
Model: Dell PowerEdge R640
Manufacturing date: 2019-07-16
Date of arrival: 2019-09-04
CPU: Intel Xeon Gold 5220 (Cascade Lake-SP), x86_64, 2.20GHz, 1 CPU/node, 18 cores/CPU
Memory: 96 GiB
Storage:
  • disk0, 480 GB SSD SATA Samsung MZ7KH480HAHQ0D3 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 960 GB SSD SATA Micron MTFDDAK960TDN (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:1:0) (reservable)
Network:
  • eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled (multi NICs example)

gros-68 (1 node, 1 cpu, 18 cores)
Model: Dell PowerEdge R640
Manufacturing date: 2019-07-16
Date of arrival: 2019-09-04
CPU: Intel Xeon Gold 5220 (Cascade Lake-SP), x86_64, 2.20GHz, 1 CPU/node, 18 cores/CPU
Memory: 96 GiB
Storage:
  • disk0, 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 960 GB SSD SATA Intel SSDSC2KG960G8R (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:1:0) (reservable)
Network:
  • eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled (multi NICs example)

grouille

2 nodes, 4 cpus, 128 cores (json)

Reservation example:

fnancy: oarsub -t exotic -p grouille -I
Access condition: exotic job type
Model: Dell PowerEdge R7525
Manufacturing date: 2020-12-07
Date of arrival: 2021-01-13
CPU: AMD EPYC 7452 (Zen 2), x86_64, 2 CPUs/node, 32 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 1.92 TB SSD SAS Toshiba KRM5XVUG1T92 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:1:0) (primary disk)
  • disk1, 960 GB SSD SATA Micron MTFDDAK960TDT (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:2:0) (reservable)
Network:
  • eth0/eno1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth2/eno33, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • eth3/eno34, Ethernet, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core - unavailable for experiment
GPU: 2 x Nvidia A100-PCIE-40GB (40 GiB)
Compute capability: 8.0

Note: This cluster is defined as exotic. Please read the exotic page for more information.

Clusters in the production queue

graffiti

13 nodes, 26 cpus, 208 cores, split as follows due to differences between nodes (json)

Reservation example:

fnancy: oarsub -q production -p graffiti -I

Max walltime per node:

  • graffiti-[1-3]: 24h
  • graffiti-[4-6]: 48h
  • graffiti-[7-13]: 168h
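OAR only places a job on nodes whose walltime limit accommodates the requested walltime, so requesting a shorter walltime widens the set of candidate nodes. For instance, an interactive job that fits the 24h limit of every graffiti node:

fnancy: oarsub -q production -p graffiti -l nodes=1,walltime=24:00:00 -I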
graffiti-[1-12] (12 nodes, 24 cpus, 192 cores)
Access condition: production queue
Model: Dell PowerEdge T640
Manufacturing date: 2019-05-27
Date of arrival: 2019-06-07
CPU: Intel Xeon Silver 4110 (Skylake), x86_64, 2.10GHz, 2 CPUs/node, 8 cores/CPU
Memory: 128 GiB
Storage: disk0, 479 GB HDD SATA Dell PERC H330 Adp (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:19:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/eno1np0, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth1/eno2np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth2/ens4f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth3/ens4f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
GPU: 4 x Nvidia GeForce RTX 2080 Ti (11 GiB)
Compute capability: 7.5

graffiti-13 (1 node, 2 cpus, 16 cores)
Access condition: production queue
Model: Dell PowerEdge T640
Manufacturing date: 2019-05-27
Date of arrival: 2019-06-07
CPU: Intel Xeon Silver 4110 (Skylake), x86_64, 2.10GHz, 2 CPUs/node, 8 cores/CPU
Memory: 128 GiB
Storage: disk0, 479 GB HDD SATA Dell PERC H330 Adp (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:19:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/eno1np0, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth1/eno2np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth2/ens4f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth3/ens4f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
GPU: 4 x Nvidia Quadro RTX 6000 (23 GiB)
Compute capability: 7.5

grappe

16 nodes, 32 cpus, 640 cores (json)

Reservation example:

fnancy: oarsub -q production -p grappe -I

Max walltime per node:

  • grappe-[1-4]: 48h
  • grappe-[5-8]: 96h
  • grappe-[9-16]: 168h
Access condition: production queue
Model: Dell PowerEdge R640
Manufacturing date: 2020-07-09
Date of arrival: 2020-08-20
CPU: Intel Xeon Gold 5218R (Cascade Lake-SP), x86_64, 2.10GHz, 2 CPUs/node, 20 cores/CPU
Memory: 96 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G8R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:1a:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 8.0 TB HDD SAS Seagate ST8000NM0185 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:1a:00.0-scsi-0:0:1:0) (reservable)
Network:
  • eth0/ens1f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens1f1, Ethernet, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e - unavailable for experiment
  • eth2/eno1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth4/eno3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/eno4, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment

grat

1 node, 2 cpus, 64 cores (json)

Reservation example:

fnancy: oarsub -q production -p grat -I

Max walltime per node:

  • grat-1: 168h
Access condition: production queue
Model: HPE Apollo 6500
Manufacturing date: 2022-06-22
Date of arrival: 2022-09-07
CPU: AMD EPYC 7513 (Zen 3), x86_64, 2 CPUs/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 3.84 TB SSD SAS HP VO003840RZWVA (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:64:00.0-sas-0x58ce38ee21ce4fea-lun-0) (primary disk)
  • disk1, 3.84 TB SSD SAS HP VO003840RZWVA (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:64:00.0-sas-0x58ce38ee21ce4fee-lun-0)
  • disk2, 3.84 TB SSD SAS HP VO003840RZWVA (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:64:00.0-sas-0x58ce38ee21ce5002-lun-0)
  • disk3, 3.84 TB SSD SAS HP VO003840RZWVA (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:64:00.0-sas-0x58ce38ee21ce500e-lun-0)
  • disk4, 3.84 TB SSD SAS HP VO003840RZWVA (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:64:00.0-sas-0x58ce38ee21ce5012-lun-0)
  • disk5, 3.84 TB SSD SAS HP VO003840RZWVA (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:64:00.0-sas-0x58ce38ee21ce5016-lun-0)
  • disk6, 3.84 TB SSD SAS HP VO003840RZWVA (dev: /dev/disk6, by-path: /dev/disk/by-path/pci-0000:64:00.0-sas-0x58ce38ee21ce501a-lun-0)
  • disk7, 3.84 TB SSD SAS HP VO003840RZWVA (dev: /dev/disk7, by-path: /dev/disk/by-path/pci-0000:64:00.0-sas-0x58ce38ee21ce502e-lun-0)
Network:
  • eth0/ens21f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens21f1np1, Ethernet, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core - unavailable for experiment
GPU: 8 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0
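Since grat is a single node carrying 8 GPUs, it can also be shared by reserving only some of its GPUs via OAR's gpu resource type (see the Grid'5000 documentation on GPU reservation for the exact semantics). A sketch requesting two of the eight A100s:

fnancy: oarsub -q production -p grat -l gpu=2 -I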

grele

14 nodes, 28 cpus, 336 cores (json)

Reservation example:

fnancy: oarsub -q production -p grele -I

Max walltime per node:

  • grele-[1-3]: 24h
  • grele-[4-6]: 48h
  • grele-[7-14]: 168h
Access condition: production queue
Model: Dell PowerEdge R730
Manufacturing date: 2017-06-07
Date of arrival: 2017-06-26
CPU: Intel Xeon E5-2650 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 12 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 299 GB HDD SAS Dell PERC H730 Mini (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0) (primary disk)
  • disk1, 299 GB HDD SAS Dell PERC H730 Mini (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
GPU: 2 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

grosminet

1 node, 4 cpus, 72 cores (json)

Reservation example:

fnancy: oarsub -q production -p grosminet -I

Max walltime per node:

  • grosminet-1: 24h
Access condition: production queue
Model: HPE ProLiant DL560 Gen10
Manufacturing date: 2023-11-30
Date of arrival: 2023-12-05
CPU: Intel Xeon Gold 6240L (Cascade Lake-SP), x86_64, 2.60GHz, 4 CPUs/node, 18 cores/CPU
Memory: 6.0 TiB
Storage:
  • disk0, 1.6 TB SSD SAS HPE VO001600JWZJQ (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:28:00.0-sas-0x50000f0b02a4b0c2-lun-0) (primary disk)
  • disk1, 1.6 TB SSD SAS HPE VO001600JWZJQ (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:28:00.0-sas-0x50000f0b02a4b0e2-lun-0)
  • disk2, 1.6 TB SSD SAS HPE VO001600JWZJQ (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:28:00.0-sas-0x50000f0b02a4b0f2-lun-0)
  • disk3, 1.6 TB SSD SAS HPE VO001600JWZJQ (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:28:00.0-sas-0x50000f0b02a4b102-lun-0)
  • disk4, 1.6 TB SSD SAS HPE VO001600JWZJQ (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:28:00.0-sas-0x50000f0b02a4b112-lun-0)
  • disk5, 1.6 TB SSD SAS HPE VO001600JWZJQ (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:28:00.0-sas-0x50000f0b02a4b122-lun-0)
  • disk6, 1.6 TB SSD SAS HPE VO001600JWZJQ (dev: /dev/disk6, by-path: /dev/disk/by-path/pci-0000:28:00.0-sas-0x50000f0b02a4b132-lun-0)
  • disk7, 1.6 TB SSD SAS HPE VO001600JWZJQ (dev: /dev/disk7, by-path: /dev/disk/by-path/pci-0000:28:00.0-sas-0x50000f0b02a4b182-lun-0)
Network:
  • eth0/ens1f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller E810-XXV for SFP, driver: ice, SR-IOV enabled - no KaVLAN
  • eth1/ens1f1, Ethernet, model: Intel Ethernet Controller E810-XXV for SFP, driver: ice - unavailable for experiment

grostiti

1 node, 4 cpus, 56 cores (json)

Reservation example:

fnancy: oarsub -q production -p grostiti -I

Max walltime per node:

  • grostiti-1: 168h
Access condition: production queue
Model: Dell PowerEdge R930
Manufacturing date: 2015-10-23
Date of arrival: 2024-01-10
CPU: Intel Xeon E7-4850 v3 (Haswell), x86_64, 2.20GHz, 4 CPUs/node, 14 cores/CPU
Memory: 1.5 TiB
Storage:
  • disk0, 599 GB HDD SAS Dell PERC H730P Adp (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:04:00.0-scsi-0:2:0:0) (primary disk)
  • disk1, 4.0 TB HDD SAS Dell PERC H730P Adp (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:04:00.0-scsi-0:2:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core - unavailable for experiment
  • ib1, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
  • ib2, InfiniBand, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core - unavailable for experiment

grue

5 nodes, 10 cpus, 160 cores (json)

Reservation example:

fnancy: oarsub -q production -p grue -I

Max walltime per node:

  • grue-[1-2]: 24h
  • grue-[3-4]: 48h
  • grue-5: 168h
Access condition: production queue
Model: Dell PowerEdge R7425
Manufacturing date: 2019-11-15
Date of arrival: 2019-11-25
CPU: AMD EPYC 7351 (Zen), x86_64, 2 CPUs/node, 16 cores/CPU
Memory: 128 GiB
Storage: disk0, 479 GB HDD SAS Dell PERC H730P Adp (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:e1:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 4 x Nvidia Tesla T4 (15 GiB)
Compute capability: 7.5

gruss

4 nodes, 8 cpus, 192 cores, split as follows due to differences between nodes (json)

Reservation example:

fnancy: oarsub -q production -p gruss -I

Max walltime per node:

  • gruss-[1-2]: 24h
  • gruss-3: 48h
  • gruss-4: 168h
gruss-1 (1 node, 2 cpus, 48 cores)
Access condition: production queue
Model: Dell PowerEdge R7525
Manufacturing date: 2021-06-24
Date of arrival: 2021-08-26
CPU: AMD EPYC 7352 (Zen 2), x86_64, 2 CPUs/node, 24 cores/CPU
Memory: 256 GiB
Storage: disk0, 1.92 TB SSD SATA Samsung MZ7KH1T9HAJR0D3 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:1:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth2/eno33, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • eth3/eno34, Ethernet, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core - unavailable for experiment
GPU: 2 x Nvidia A40 (45 GiB)
Compute capability: 8.6

gruss-[2-4] (3 nodes, 6 cpus, 144 cores)
Access condition: production queue
Model: Dell PowerEdge R7525
Manufacturing date: 2021-06-24
Date of arrival: 2021-08-26
CPU: AMD EPYC 7352 (Zen 2), x86_64, 2 CPUs/node, 24 cores/CPU
Memory: 256 GiB
Storage: disk0, 1.92 TB SSD SATA Sk Hynix HFS1T9G32FEH-BA1 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:0:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth2/eno33, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • eth3/eno34, Ethernet, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core - unavailable for experiment
GPU: 2 x Nvidia A40 (45 GiB)
Compute capability: 8.6

grvingt

64 nodes, 128 cpus, 2048 cores (json)

Reservation example:

fnancy: oarsub -q production -p grvingt -I

Max walltime per node:

  • grvingt-[1-8]: 4h
  • grvingt-[9-16]: 12h
  • grvingt-[17-64]: 168h
Access condition: production queue
Model: Dell PowerEdge C6420
Manufacturing date: 2018-04-01
Date of arrival: 2018-04-11
CPU: Intel Xeon Gold 6130 (Skylake), x86_64, 2.10GHz, 2 CPUs/node, 16 cores/CPU
Memory: 192 GiB
Storage: disk0, 1.0 TB HDD SATA Seagate ST1000NX0443 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
Network:
  • eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

Last generated from the Grid'5000 Reference API on 2024-03-04 (commit 7f4aeb69ef)