Grenoble:Hardware
Revision as of 22:01, 30 March 2022
See also: Network topology for Grenoble
Summary
- 5 clusters
- 54 nodes
- 1744 CPU cores
- 48 GPUs
- 12.25 TiB RAM + 6.0 TiB PMEM
- 88 SSDs and 68 HDDs on nodes (total: 228.53 TB)
- 101.9 TFLOPS (excluding GPUs)
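The aggregate figures above can be cross-checked against the per-cluster table that follows. The sketch below recomputes the node, core, and GPU totals from the per-cluster node counts, CPUs per node, cores per CPU, and GPUs per node listed on this page:

```python
# Cross-check of the summary totals from the per-cluster figures on this page:
# (nodes, CPUs per node, cores per CPU, GPUs per node).
clusters = {
    "dahu":   (32, 2, 16, 0),
    "drac":   (12, 2, 10, 4),
    "servan": (2,  2, 24, 0),
    "troll":  (4,  2, 16, 0),
    "yeti":   (4,  4, 16, 0),
}

nodes = sum(n for n, _, _, _ in clusters.values())
cores = sum(n * cpus * c for n, cpus, c, _ in clusters.values())
gpus  = sum(n * g for n, _, _, g in clusters.values())

print(nodes, cores, gpus)  # → 54 1744 48
```

The result matches the summary: 54 nodes, 1744 CPU cores, and 48 GPUs (all on drac).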
Clusters
Cluster | Access Condition | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network | Accelerators
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
dahu | | 2018-03-22 | 32 | 2 x Intel Xeon Gold 6130 | 16 cores/CPU | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | |
drac | exotic job type | 2020-10-05 | 12 | 2 x Power POWER8NVL 1.0 | 10 cores/CPU | 128 GiB | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100 (16 GiB)
servan | testing queue, exotic job type | 2021-12-15 | 2 | 2 x AMD EPYC 7352 | 24 cores/CPU | 128 GiB | 1 x 1.6 TB SSD + 1 x 1.6 TB SSD | 25 Gbps | |
troll | exotic job type | 2019-12-23 | 4 | 2 x Intel Xeon Gold 5218 | 16 cores/CPU | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 25 Gbps (SR-IOV) + 100 Gbps Omni-Path | |
yeti | exotic job type | 2018-01-16 | 4 | 4 x Intel Xeon Gold 6130 | 16 cores/CPU | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path | |
*: disk is reservable
Clusters in the default queue
dahu
32 nodes, 64 cpus, 1024 cores (json)
Reservation example:
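The reservation command itself was not captured in this revision. On Grid'5000, an interactive reservation of a dahu node would typically use OAR's property filter, along these lines (a sketch, not the page's original command; run from the Grenoble frontend):

```shell
# Reserve one dahu node interactively (dahu is in the default queue).
oarsub -p "cluster='dahu'" -I
```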
Model: Dell PowerEdge C6420
Date of arrival: 2018-03-22
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 192 GiB
Storage: 240 GB SSD + 480 GB SSD + 4.0 TB HDD
Network: 10 Gbps + 100 Gbps Omni-Path
drac
12 nodes, 24 cpus, 240 cores (json)
Reservation example:
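The example command is missing from this capture. Since drac requires the exotic job type, an interactive reservation would typically look like the following OAR sketch (run from the Grenoble frontend; exact flags may differ from the original page):

```shell
# drac requires the exotic job type; reserve one node interactively.
oarsub -t exotic -p "cluster='drac'" -I
```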
Access condition: exotic job type
Model: IBM PowerNV S822LC (8335-GTB)
Date of arrival: 2020-10-05
CPU: POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU)
Memory: 128 GiB
Storage: 1 x 1.0 TB HDD + 1 x 1.0 TB HDD
Network: 10 Gbps + 2 x 100 Gbps InfiniBand
GPU: 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB), compute capability 6.0
troll
4 nodes, 8 cpus, 128 cores (json)
Reservation example:
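The reservation example was lost in this capture. Given troll's exotic access condition, an interactive reservation would typically be submitted as follows (a hedged OAR sketch, run from the Grenoble frontend):

```shell
# troll requires the exotic job type; reserve one node interactively.
oarsub -t exotic -p "cluster='troll'" -I
```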
Access condition: exotic job type
Model: Dell PowerEdge R640
Date of arrival: 2019-12-23
CPU: Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 384 GiB + 1.5 TiB PMEM
Storage: 480 GB SSD + 1.6 TB SSD
Network: 25 Gbps (SR-IOV) + 100 Gbps Omni-Path
yeti
4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)
Reservation example:
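The example command was not captured here. As yeti also requires the exotic job type, an interactive reservation would typically look like this OAR sketch (run from the Grenoble frontend; not the page's original command):

```shell
# yeti requires the exotic job type; reserve one node interactively.
oarsub -t exotic -p "cluster='yeti'" -I
```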
- yeti-1 (1 node, 4 cpus, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage: 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD
Network: 10 Gbps + 100 Gbps Omni-Path
- yeti-[2,4] (2 nodes, 8 cpus, 128 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage: 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD
Network: 10 Gbps + 100 Gbps Omni-Path
- yeti-3 (1 node, 4 cpus, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage: 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD
Network: 10 Gbps + 100 Gbps Omni-Path
Clusters in the testing queue
servan
2 nodes, 4 cpus, 96 cores (json)
Reservation example:
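The example command is missing from this capture. Since servan sits in the testing queue and requires the exotic job type, an interactive reservation would typically combine both flags, along these lines (a sketch run from the Grenoble frontend, not the page's original command):

```shell
# servan is in the testing queue and requires the exotic job type.
oarsub -q testing -t exotic -p "cluster='servan'" -I
```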
Access condition: testing queue, exotic job type
Model: Dell PowerEdge R7525
Date of arrival: 2021-12-15
CPU: AMD EPYC 7352 (Zen 2, 2 CPUs/node, 24 cores/CPU)
Memory: 128 GiB
Storage: 1 x 1.6 TB SSD + 1 x 1.6 TB SSD
Network: 25 Gbps
Last generated from the Grid'5000 Reference API on 2022-03-30 (commit bb81389cf0, https://github.com/grid5000/reference-repository/commit/bb81389cf0)