Grenoble:Hardware
See also: Network topology for Grenoble
Summary
- 7 clusters
- 61 nodes
- 2096 CPU cores
- 60 GPUs
- 390144 GPU cores
- 13.81 TiB RAM + 6.0 TiB PMEM
- 96 SSDs and 68 HDDs on nodes (total: 242.45 TB)
- 145.8 TFLOPS (excluding GPUs)
Clusters summary
Default queue resources
| Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs/node | CPU model | Cores | Architecture | Memory | Storage | Network | Accelerators |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| dahu | | 2018-03-22 | 2017-12-12 | 32 | 2 | Intel Xeon Gold 6130 | 16 cores/CPU | x86_64 | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | |
| drac | exotic job type | 2020-10-05 | 2016-10-17 | 12 | 2 | IBM POWER8NVL 1.0 | 10 cores/CPU | ppc64le | 128 GiB | 1.0 TB HDD + 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100 (16 GiB) |
| servan | exotic job type | 2021-12-15 | 2021-12-10 | 2 | 2 | AMD EPYC 7352 | 24 cores/CPU | x86_64 | 128 GiB | 1.6 TB SSD + 1.6 TB SSD | 25 Gbps + 2 x 100 Gbps FPGA/Ethernet | Xilinx Alveo U200 |
| troll | exotic job type | 2019-12-23 | 2019-11-21 | 4 | 2 | Intel Xeon Gold 5218 | 16 cores/CPU | x86_64 | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 25 Gbps (SR-IOV) + 100 Gbps Omni-Path | |
| yeti | exotic job type | 2018-01-16 | 2017-12-26 | 4 | 4 | Intel Xeon Gold 6130 | 16 cores/CPU | x86_64 | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path | |
*: disk is reservable
**: crossed GPUs are not supported by Grid'5000 default environments
***: OPA (Omni-Path Architecture) is currently not supported on the Debian 12 environment
Testing queue resources
| Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs/node | CPU model | Cores | Architecture | Memory | Storage | Network | Accelerators |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| kinovis | testing queue | 2025-02-10 | 2024-06-26 | 6 | 2 | Intel Xeon Gold 6442Y | 24 cores/CPU | x86_64 | 256 GiB | 1.92 TB SSD | 2 x 25 Gbps + 100 Gbps | 2 x Nvidia Tesla L40S (45 GiB) |
| nessie | testing queue | 2024-11-08 | 2024-08-19 | 1 | 2 | Intel Xeon Gold 6430 | 32 cores/CPU | x86_64 | 64 GiB | 1.92 TB SSD + 480 GB SSD | 25 Gbps | |
*: disk is reservable
**: crossed GPUs are not supported by Grid'5000 default environments
***: OPA (Omni-Path Architecture) is currently not supported on the Debian 12 environment
Clusters in the default queue
dahu
32 nodes, 64 cpus, 1024 cores, split as follows due to differences between nodes (json)
Reservation example:
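A minimal sketch of such a command, assuming the standard oarsub syntax on a Grid'5000 frontend (the exact options may differ):

    # request one dahu node in interactive mode (default queue)
    oarsub -p "cluster='dahu'" -I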
- dahu-[1,4-32] (30 nodes, 60 cpus, 960 cores)
Model: | Dell PowerEdge C6420 |
Manufacturing date: | 2017-12-12 |
Date of arrival: | 2018-03-22 |
CPU: | Intel Xeon Gold 6130 (Skylake-SP), x86_64, 2.10GHz, 2 CPUs/node, 16 cores/CPU |
Memory: | 192 GiB |
Storage: | |
Network: | |
- dahu-2 (1 node, 2 cpus, 32 cores)
Model: | Dell PowerEdge C6420 |
Manufacturing date: | 2017-12-12 |
Date of arrival: | 2018-03-22 |
CPU: | Intel Xeon Gold 6130 (Skylake-SP), x86_64, 2.10GHz, 2 CPUs/node, 16 cores/CPU |
Memory: | 192 GiB |
Storage: | |
Network: | |
- dahu-3 (1 node, 2 cpus, 32 cores)
Model: | Dell PowerEdge C6420 |
Manufacturing date: | 2017-12-12 |
Date of arrival: | 2018-03-22 |
CPU: | Intel Xeon Gold 6130 (Skylake-SP), x86_64, 2.10GHz, 2 CPUs/node, 16 cores/CPU |
Memory: | 192 GiB |
Storage: | |
Network: | |
drac
12 nodes, 24 cpus, 240 cores (json)
Reservation example:
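A minimal sketch, assuming the exotic job type required by the access condition below and the standard oarsub syntax:

    # request one drac node in interactive mode; exotic clusters need -t exotic
    oarsub -t exotic -p "cluster='drac'" -I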
Access condition: | exotic job type |
Model: | IBM PowerNV S822LC (8335-GTB) |
Manufacturing date: | 2016-10-17 |
Date of arrival: | 2020-10-05 |
CPU: | IBM POWER8NVL 1.0 (POWER8), ppc64le, 2 CPUs/node, 10 cores/CPU |
Memory: | 128 GiB |
Storage: | |
Network: | |
GPU: | 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB) Compute capability: 6.0 |
Note: This cluster is defined as exotic. Please read the exotic page for more information.
servan
2 nodes, 4 cpus, 96 cores (json)
Reservation example:
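A minimal sketch, assuming the exotic job type and the standard oarsub syntax:

    # request one servan node in interactive mode (exotic job type required)
    oarsub -t exotic -p "cluster='servan'" -I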
Access condition: | exotic job type |
Model: | Dell PowerEdge R7525 |
Manufacturing date: | 2021-12-10 |
Date of arrival: | 2021-12-15 |
CPU: | AMD EPYC 7352 (Zen 2), x86_64, 2 CPUs/node, 24 cores/CPU |
Memory: | 128 GiB |
Storage: | |
Network: | |
FPGA: | Xilinx Alveo U200 |
Note: This cluster is defined as exotic. Please read the exotic page for more information.
troll
4 nodes, 8 cpus, 128 cores (json)
Reservation example:
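A minimal sketch, assuming the exotic job type and the standard oarsub syntax:

    # request one troll node in interactive mode (exotic job type required)
    oarsub -t exotic -p "cluster='troll'" -I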
Access condition: | exotic job type |
Model: | Dell PowerEdge R640 |
Manufacturing date: | 2019-11-21 |
Date of arrival: | 2019-12-23 |
CPU: | Intel Xeon Gold 5218 (Cascade Lake-SP), x86_64, 2.30GHz, 2 CPUs/node, 16 cores/CPU |
Memory: | 384 GiB + 1.5 TiB PMEM |
Storage: | |
Network: | |
Note: This cluster is defined as exotic. Please read the exotic page for more information.
yeti
4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)
Reservation example:
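A minimal sketch, assuming the exotic job type and the standard oarsub syntax:

    # request one yeti node in interactive mode (exotic job type required)
    oarsub -t exotic -p "cluster='yeti'" -I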
- yeti-1 (1 node, 4 cpus, 64 cores)
Access condition: | exotic job type |
Model: | Dell PowerEdge R940 |
Manufacturing date: | 2017-12-26 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake-SP), x86_64, 2.10GHz, 4 CPUs/node, 16 cores/CPU |
Memory: | 768 GiB |
Storage: | |
Network: | |
- yeti-[2,4] (2 nodes, 8 cpus, 128 cores)
Access condition: | exotic job type |
Model: | Dell PowerEdge R940 |
Manufacturing date: | 2017-12-26 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake-SP), x86_64, 2.10GHz, 4 CPUs/node, 16 cores/CPU |
Memory: | 768 GiB |
Storage: | |
Network: | |
- yeti-3 (1 node, 4 cpus, 64 cores)
Access condition: | exotic job type |
Model: | Dell PowerEdge R940 |
Manufacturing date: | 2017-12-26 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake-SP), x86_64, 2.10GHz, 4 CPUs/node, 16 cores/CPU |
Memory: | 768 GiB |
Storage: | |
Network: | |
Note: This cluster is defined as exotic. Please read the exotic page for more information.
Clusters in the testing queue
kinovis
6 nodes, 12 cpus, 288 cores (json)
Reservation example:
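A minimal sketch, assuming the testing queue indicated by the access condition below and the standard oarsub syntax:

    # request one kinovis node in interactive mode from the testing queue
    oarsub -q testing -p "cluster='kinovis'" -I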
Access condition: | testing queue |
Model: | HPE Proliant DL380 Gen11 |
Manufacturing date: | 2024-06-26 |
Date of arrival: | 2025-02-10 |
CPU: | Intel Xeon Gold 6442Y (Sapphire Rapids), x86_64, 2 CPUs/node, 24 cores/CPU |
Memory: | 256 GiB |
Storage: | disk0, 1.92 TB SSD SATA HPE MR416i-o Gen11 (dev: /dev/disk0) (primary disk) |
Network: | eth0/enp1s0f0np0, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en; eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en; eth2/ens4np0, Ethernet, configured rate: 100 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core |
GPU: | 2 x Nvidia L40S (45 GiB) Compute capability: 8.9 |
nessie
1 node, 2 cpus, 64 cores (json)
Reservation example:
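A minimal sketch, assuming the testing queue and the standard oarsub syntax:

    # request the nessie node in interactive mode from the testing queue
    oarsub -q testing -p "cluster='nessie'" -I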
Access condition: | testing queue |
Model: | HPE ProLiant DL385 Gen10+ v2 |
Manufacturing date: | 2024-08-19 |
Date of arrival: | 2024-11-08 |
CPU: | Intel Xeon Gold 6430 (Sapphire Rapids), x86_64, 2 CPUs/node, 32 cores/CPU |
Memory: | 64 GiB |
Storage: | |
Network: | |
Last generated from the Grid'5000 Reference API on 2025-02-26 (commit b63c99645d)