Sophia:Hardware
See also: Network topology for Sophia
Summary
- 5 clusters
- 22 nodes
- 556 CPU cores
- 25 GPUs
- 250,880 GPU cores
- 6.25 TiB RAM
- 11 SSDs and 18 HDDs on nodes (total: 59.17 TB)
- 48.0 TFLOPS (excluding GPUs)
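As a cross-check, the headline totals can be recomputed from the per-cluster figures listed on this page; a minimal shell sketch (all numbers taken from the tables below):

```shell
# Recompute the summary totals from the per-cluster tables on this page:
# uvb: 4 nodes x 2 CPUs x 6 cores; esterel41: 1 x 2 x 16; mercantour2: 8 x 2 x 8;
# musa: 6 x 2 x 24; esterel10: 3 x 2 x 10.
nodes=$((4 + 1 + 8 + 6 + 3))
cores=$((4*2*6 + 1*2*16 + 8*2*8 + 6*2*24 + 3*2*10))
# GPUs: esterel41 has 2 x L40, musa has 6 nodes x 2 H100, esterel10 has 4+4+3 GTX 1080 Ti
gpus=$((1*2 + 6*2 + 4 + 4 + 3))
echo "clusters: 5, nodes: $nodes, cores: $cores, gpus: $gpus"
```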
Clusters summary
Default queue resources
Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs/node | CPU Name | Cores/CPU | Architecture | Memory | Storage | Network | Accelerators
---|---|---|---|---|---|---|---|---|---|---|---|---
uvb | | 2011-01-04 | 2011-01-04 | 4 | 2 | Intel Xeon X5670 | 6 | x86_64 | 96 GiB | 250 GB HDD | 1 Gbps (SR‑IOV) + 40 Gbps InfiniBand |
**: crossed GPUs are not supported by Grid'5000 default environments
Production queue resources
Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs/node | CPU Name | Cores/CPU | Architecture | Memory | Storage | Network | Accelerators
---|---|---|---|---|---|---|---|---|---|---|---|---
esterel41 | production queue | 2025-01-25 | 2024-03-01 | 1 | 2 | Intel Xeon Gold 6426Y | 16 | x86_64 | 512 GiB | 479 GB SSD + 2.88 TB SSD | 1 Gbps + 56 Gbps InfiniBand | 2 x Nvidia L40 (45 GiB)
mercantour2 | production queue | 2025-01-16 | 2015-09-01 | 8 | 2 | Intel Xeon E5-2650 v2 | 8 | x86_64 | 256 GiB | 1.0 TB HDD | 1 Gbps (SR‑IOV) + 40 Gbps InfiniBand |
musa | production queue | 2025-01-16 | 2024-12-09 | 6 | 2 | AMD EPYC 9254 | 24 | x86_64 | 512 GiB | 6.4 TB SSD | 25 Gbps | 2 x Nvidia H100 NVL (94 GiB)
**: crossed GPUs are not supported by Grid'5000 default environments
Testing queue resources
Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs/node | CPU Name | Cores/CPU | Architecture | Memory | Storage | Network | Accelerators
---|---|---|---|---|---|---|---|---|---|---|---|---
esterel10 | testing queue | 2024-12-19 | 2017-11-15 | 3 | 2 | Intel Xeon E5-2630 v4 | 10 | x86_64 | 128 GiB | 1.6 TB SSD + 2 x 600 GB HDD | 1 Gbps + 56 Gbps InfiniBand | [1-2]: 4 x Nvidia GTX 1080 Ti (11 GiB); 3: 3 x Nvidia GTX 1080 Ti (11 GiB)
**: crossed GPUs are not supported by Grid'5000 default environments
Clusters in the default queue
uvb
4 nodes, 8 cpus, 48 cores (json)
Reservation example:
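A minimal sketch of an interactive reservation on uvb (default queue), assuming OAR's `-p` property syntax and submission from the Sophia frontend; the exact flags on the original page may differ:

```shell
# Interactive job on one uvb node (default queue); hypothetical but standard OAR syntax
oarsub -p "cluster='uvb'" -I
```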
Model: | Dell PowerEdge C6100 |
Manufacturing date: | 2011-01-04 |
Date of arrival: | 2011-01-04 |
CPU: | Intel Xeon X5670 (Westmere), x86_64, 2.93GHz, 2 CPUs/node, 6 cores/CPU |
Memory: | 96 GiB |
Storage: | disk0, 250 GB HDD SATA Western Digital WDC WD2502ABYS-1 (dev: /dev/disk0 ) (primary disk) |
Network: | 1 Gbps (SR‑IOV) + 40 Gbps InfiniBand |
Clusters in the production queue
esterel41
1 node, 2 cpus, 32 cores (json)
Reservation example:
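A hedged sketch of an interactive reservation on esterel41, assuming OAR's `-p` property syntax and the `production` queue:

```shell
# Interactive job on the esterel41 node (production queue); flags are an assumption
oarsub -q production -p "cluster='esterel41'" -I
```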
Max walltime per node:
- esterel41-1: 168h
Access condition: | production queue |
Model: | DL380 Gen11 |
Manufacturing date: | 2024-03-01 |
Date of arrival: | 2025-01-25 |
CPU: | Intel Xeon Gold 6426Y (Sapphire Rapids), x86_64, 2 CPUs/node, 16 cores/CPU |
Memory: | 512 GiB |
Storage: | 479 GB SSD + 2.88 TB SSD |
Network: | 1 Gbps + 56 Gbps InfiniBand |
GPU: | 2 x Nvidia L40 (45 GiB) Compute capability: 8.9 |
mercantour2
8 nodes, 16 cpus, 128 cores, split as follows due to differences between nodes (json)
Reservation example:
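A hedged sketch of an interactive reservation on mercantour2, assuming OAR's `-p` property syntax and the `production` queue:

```shell
# Interactive job on one mercantour2 node (production queue); flags are an assumption
oarsub -q production -p "cluster='mercantour2'" -I
```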
Max walltime per node:
- mercantour2-[1-8]: 168h
- mercantour2-[1-6,8] (7 nodes, 14 cpus, 112 cores)
Access condition: | production queue |
Model: | Dell PowerEdge C6220 II |
Manufacturing date: | 2015-09-01 |
Date of arrival: | 2025-01-16 |
CPU: | Intel Xeon E5-2650 v2 (Ivy Bridge), x86_64, 2.60GHz, 2 CPUs/node, 8 cores/CPU |
Memory: | 256 GiB |
Storage: | disk0, 1.0 TB HDD SATA Seagate ST1000NM0033-9ZM (dev: /dev/disk0 ) (primary disk) |
Network: | 1 Gbps (SR‑IOV) + 40 Gbps InfiniBand |
- mercantour2-7 (1 node, 2 cpus, 16 cores)
Access condition: | production queue |
Model: | Dell PowerEdge C6220 II |
Manufacturing date: | 2015-09-01 |
Date of arrival: | 2025-01-16 |
CPU: | Intel Xeon E5-2650 v2 (Ivy Bridge), x86_64, 2.60GHz, 2 CPUs/node, 8 cores/CPU |
Memory: | 256 GiB |
Storage: | disk0, 1.0 TB HDD SATA Toshiba TOSHIBA MG03ACA1 (dev: /dev/disk0 ) (primary disk) |
Network: | 1 Gbps (SR‑IOV) + 40 Gbps InfiniBand |
musa
6 nodes, 12 cpus, 288 cores (json)
Reservation example:
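A hedged sketch of an interactive reservation on musa, assuming OAR's `-p` property syntax and the `production` queue:

```shell
# Interactive job on one musa node (production queue); flags are an assumption
oarsub -q production -p "cluster='musa'" -I
```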
Max walltime per node:
- musa-[1-6]: 168h
Access condition: | production queue |
Model: | ProLiant DL385 Gen11 |
Manufacturing date: | 2024-12-09 |
Date of arrival: | 2025-01-16 |
CPU: | AMD EPYC 9254 (Zen 4), x86_64, 2 CPUs/node, 24 cores/CPU |
Memory: | 512 GiB |
Storage: | disk0, 6.4 TB SSD NVME Samsung MO006400KYDND (dev: /dev/disk0 ) (primary disk) |
Network: | 25 Gbps |
GPU: | 2 x Nvidia H100 NVL (94 GiB) Compute capability: 9.0 |
Clusters in the testing queue
esterel10
3 nodes, 6 cpus, 60 cores, split as follows due to differences between nodes (json)
Reservation example:
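A hedged sketch of an interactive reservation on esterel10, assuming OAR's `-p` property syntax and the `testing` queue:

```shell
# Interactive job on one esterel10 node (testing queue); flags are an assumption
oarsub -q testing -p "cluster='esterel10'" -I
```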
- esterel10-[1-2] (2 nodes, 4 cpus, 40 cores)
Access condition: | testing queue |
Model: | Dell T630 |
Manufacturing date: | 2017-11-15 |
Date of arrival: | 2024-12-19 |
CPU: | Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU |
Memory: | 128 GiB |
Storage: | 1.6 TB SSD + 2 x 600 GB HDD |
Network: | 1 Gbps + 56 Gbps InfiniBand |
GPU: | 4 x Nvidia GeForce GTX 1080 Ti (11 GiB) Compute capability: 6.1 |
- esterel10-3 (1 node, 2 cpus, 20 cores)
Access condition: | testing queue |
Model: | Dell T630 |
Manufacturing date: | 2017-11-15 |
Date of arrival: | 2024-12-19 |
CPU: | Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU |
Memory: | 128 GiB |
Storage: | 1.6 TB SSD + 2 x 600 GB HDD |
Network: | 1 Gbps + 56 Gbps InfiniBand |
GPU: | 3 x Nvidia GeForce GTX 1080 Ti (11 GiB) Compute capability: 6.1 |
Last generated from the Grid'5000 Reference API on 2025-02-13 (commit 360740a910)
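The "(json)" links and the footer above refer to the Grid'5000 Reference API. A hedged sketch of fetching a cluster's node descriptions with curl (assumes the public `api.grid5000.fr` stable endpoint and valid Grid'5000 credentials; path and query parameter are taken from the API's documented conventions, not from this page):

```shell
# Hypothetical query: node descriptions for the musa cluster at Sophia
curl -u "$USER" "https://api.grid5000.fr/stable/sites/sophia/clusters/musa/nodes?pretty"
```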