Sophia:Hardware

From Grid5000

See also: Network topology for Sophia

Summary

  • 5 clusters
  • 22 nodes
  • 556 CPU cores
  • 25 GPUs
  • 250,880 GPU cores
  • 6.25 TiB RAM
  • 11 SSDs and 18 HDDs on nodes (total: 59.17 TB)
  • 48.0 TFLOPS (excluding GPUs)

Default queue resources

  • 1 cluster
  • 4 nodes
  • 48 CPU cores
  • 384 GiB RAM
  • 4 HDDs on nodes (total: 1.0 TB)
  • 0.6 TFLOPS (excluding GPUs)

Production queue resources

  • 3 clusters
  • 15 nodes
  • 448 CPU cores
  • 14 GPUs
  • 211,456 GPU cores
  • 5.5 TiB RAM
  • 8 SSDs and 8 HDDs on nodes (total: 49.77 TB)
  • 45.3 TFLOPS (excluding GPUs)

Clusters summary

Default queue resources

Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPU | Memory | Storage | Network | Accelerators
uvb |  | 2011-01-04 | 2011-01-04 | 4 | 2 x Intel Xeon X5670, 6 cores/CPU, x86_64 | 96 GiB | 250 GB HDD | 1 Gbps (SR-IOV) + 40 Gbps InfiniBand |


Production queue resources

Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPU | Memory | Storage | Network | Accelerators
esterel41 | production queue | 2025-01-25 | 2024-03-01 | 1 | 2 x Intel Xeon Gold 6426Y, 16 cores/CPU, x86_64 | 512 GiB | 479 GB SSD + 2.88 TB SSD | 1 Gbps + 56 Gbps InfiniBand | 2 x Nvidia L40 (45 GiB)
mercantour2 | production queue | 2025-01-16 | 2015-09-01 | 8 | 2 x Intel Xeon E5-2650 v2, 8 cores/CPU, x86_64 | 256 GiB | 1.0 TB HDD | 1 Gbps (SR-IOV) + 40 Gbps InfiniBand |
musa | production queue | 2025-01-16 | 2024-12-09 | 6 | 2 x AMD EPYC 9254, 24 cores/CPU, x86_64 | 512 GiB | 6.4 TB SSD | 25 Gbps | 2 x Nvidia H100 NVL (94 GiB)

**: crossed GPUs are not supported by Grid'5000 default environments

Testing queue resources

Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPU | Memory | Storage | Network | Accelerators
esterel10 | testing queue | 2024-12-19 | 2017-11-15 | 3 | 2 x Intel Xeon E5-2630 v4, 10 cores/CPU, x86_64 | 128 GiB | 1.6 TB SSD + 2 x 600 GB HDD | 1 Gbps + 56 Gbps InfiniBand | esterel10-[1-2]: 4 x Nvidia GTX 1080 Ti (11 GiB); esterel10-3: 3 x Nvidia GTX 1080 Ti (11 GiB)

**: crossed GPUs are not supported by Grid'5000 default environments

Clusters in the default queue

uvb

4 nodes, 8 cpus, 48 cores (json)

Reservation example:

fsophia: oarsub -p uvb -I
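
To reserve more than one uvb node or fix the job length explicitly, the node count and walltime can be passed to oarsub. A minimal variant, assuming standard OAR syntax (the 2-node, 2-hour values are illustrative):

fsophia: oarsub -p uvb -l nodes=2,walltime=2:00:00 -I
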
Model: Dell PowerEdge C6100
Manufacturing date: 2011-01-04
Date of arrival: 2011-01-04
CPU: Intel Xeon X5670 (Westmere), x86_64, 2.93GHz, 2 CPUs/node, 6 cores/CPU
Memory: 96 GiB
Storage: disk0, 250 GB HDD SATA Western Digital WDC WD2502ABYS-1 (dev: /dev/disk0) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
  • ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment

Clusters in the production queue

esterel41

1 node, 2 cpus, 32 cores (json)

Reservation example:

fsophia: oarsub -q production -p esterel41 -I
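
esterel41 carries two GPUs, so a job can also request a single GPU instead of the whole node. A possible variant, assuming the standard Grid'5000 gpu resource type (the 1-hour walltime is illustrative):

fsophia: oarsub -q production -p esterel41 -l gpu=1,walltime=1:00:00 -I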

Max walltime per node:

  • esterel41-1: 168h
Access condition: production queue
Model: DL380 Gen11
Manufacturing date: 2024-03-01
Date of arrival: 2025-01-25
CPU: Intel Xeon Gold 6426Y (Sapphire Rapids), x86_64, 2 CPUs/node, 16 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 479 GB SSD SATA HPE MR416i-o Gen11 (dev: /dev/disk0) (primary disk)
  • disk1, 2.88 TB SSD SATA HPE MR416i-o Gen11 (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - no KaVLAN
  • eth1/ens15f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth2/ens15f2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/ens15f3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ibs3, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
GPU: 2 x Nvidia L40 (45 GiB)
Compute capability: 8.9

mercantour2

8 nodes, 16 cpus, 128 cores, split as follows due to differences between nodes (json)

Reservation example:

fsophia: oarsub -q production -p mercantour2 -I
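
For non-interactive use, the same reservation can launch a batch script instead of an interactive shell. A sketch assuming standard OAR syntax, where my_experiment.sh is a hypothetical user script and the walltime stays within the per-node limit listed below:

fsophia: oarsub -q production -p mercantour2 -l nodes=4,walltime=24:00:00 ./my_experiment.sh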

Max walltime per node:

  • mercantour2-[1-8]: 168h
mercantour2-[1-6,8] (7 nodes, 14 cpus, 112 cores)
Access condition: production queue
Model: Dell PowerEdge C6220 II
Manufacturing date: 2015-09-01
Date of arrival: 2025-01-16
CPU: Intel Xeon E5-2650 v2 (Ivy Bridge), x86_64, 2.60GHz, 2 CPUs/node, 8 cores/CPU
Memory: 256 GiB
Storage: disk0, 1.0 TB HDD SATA Seagate ST1000NM0033-9ZM (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
  • ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment

mercantour2-7 (1 node, 2 cpus, 16 cores)
Access condition: production queue
Model: Dell PowerEdge C6220 II
Manufacturing date: 2015-09-01
Date of arrival: 2025-01-16
CPU: Intel Xeon E5-2650 v2 (Ivy Bridge), x86_64, 2.60GHz, 2 CPUs/node, 8 cores/CPU
Memory: 256 GiB
Storage: disk0, 1.0 TB HDD SATA Toshiba TOSHIBA MG03ACA1 (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
  • ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment

musa

6 nodes, 12 cpus, 288 cores (json)

Reservation example:

fsophia: oarsub -q production -p musa -I
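
Each musa node holds two H100 GPUs, so GPU-granularity reservations are possible here as well. A minimal sketch, assuming the standard Grid'5000 gpu resource type (the 4-hour walltime is illustrative):

fsophia: oarsub -q production -p musa -l gpu=1,walltime=4:00:00 -I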

Max walltime per node:

  • musa-[1-6]: 168h
Access condition: production queue
Model: ProLiant DL385 Gen11
Manufacturing date: 2024-12-09
Date of arrival: 2025-01-16
CPU: AMD EPYC 9254 (Zen 4), x86_64, 2 CPUs/node, 24 cores/CPU
Memory: 512 GiB
Storage: disk0, 6.4 TB SSD NVME Samsung MO006400KYDND (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en
  • eth1/ens22f1np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
GPU: 2 x Nvidia H100 NVL (94 GiB)
Compute capability: 9.0

Clusters in the testing queue

esterel10

3 nodes, 6 cpus, 60 cores, split as follows due to differences between nodes (json)

Reservation example:

fsophia: oarsub -q testing -p esterel10 -I
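
Since the esterel10 nodes differ in GPU count (see the per-node breakdown below), a job can request GPUs rather than whole nodes. A possible variant, assuming the standard Grid'5000 gpu resource type:

fsophia: oarsub -q testing -p esterel10 -l gpu=1 -I
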
esterel10-[1-2] (2 nodes, 4 cpus, 40 cores)
Access condition: testing queue
Model: Dell T630
Manufacturing date: 2017-11-15
Date of arrival: 2024-12-19
CPU: Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 1.6 TB SSD SAS Toshiba THNSF81D60CSE (dev: /dev/disk0) (primary disk)
  • disk1, 600 GB HDD SAS Toshiba AL14SEB060NY (dev: /dev/disk1)
  • disk2, 600 GB HDD SAS Toshiba AL14SEB060NY (dev: /dev/disk2)
Network:
  • eth0/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth1/enp1s0f1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

esterel10-3 (1 node, 2 cpus, 20 cores)
Access condition: testing queue
Model: Dell T630
Manufacturing date: 2017-11-15
Date of arrival: 2024-12-19
CPU: Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 1.6 TB SSD SAS Toshiba THNSF81D60CSE (dev: /dev/disk0) (primary disk)
  • disk1, 600 GB HDD SAS Toshiba AL14SEB060NY (dev: /dev/disk1)
  • disk2, 600 GB HDD SAS Toshiba AL14SEB060NY (dev: /dev/disk2)
Network:
  • eth0/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth1/enp1s0f1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 3 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

Last generated from the Grid'5000 Reference API on 2025-02-13 (commit 360740a910)