Grenoble:Hardware
Revision as of 09:15, 16 March 2023
See also: Network topology for Grenoble
Summary
- 5 clusters
- 54 nodes
- 1744 CPU cores
- 48 GPUs
- 12.25 TiB RAM + 6.0 TiB PMEM
- 88 SSDs and 68 HDDs on nodes (total: 228.53 TB)
- 101.9 TFLOPS (excluding GPUs)
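The node and core totals above can be cross-checked against the per-cluster table below; a quick shell arithmetic sanity check, with the per-cluster figures copied from that table:

```shell
# nodes * CPUs/node * cores/CPU, per cluster (dahu, drac, servan, troll, yeti)
cores=$(( 32*2*16 + 12*2*10 + 2*2*24 + 4*2*16 + 4*4*16 ))
nodes=$(( 32 + 12 + 2 + 4 + 4 ))
gpus=$(( 12*4 ))   # drac: 4 GPUs per node, 12 nodes
echo "$nodes nodes, $cores cores, $gpus GPUs"   # 54 nodes, 1744 cores, 48 GPUs
```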
Clusters
| Cluster | Access condition | Date of arrival | Nodes | CPUs/node | CPU model | Cores/CPU | Architecture | Memory | Storage | Network | Accelerators |
|---|---|---|---|---|---|---|---|---|---|---|---|
| dahu | | 2018-03-22 | 32 | 2 | Intel Xeon Gold 6130 | 16 | x86_64 | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | |
| drac | exotic job type | 2020-10-05 | 12 | 2 | POWER8NVL 1.0 | 10 | ppc64le | 128 GiB | 1.0 TB HDD + 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100 (16 GiB) |
| servan | exotic job type | 2021-12-15 | 2 | 2 | AMD EPYC 7352 | 24 | x86_64 | 128 GiB | 1.6 TB SSD + 1.6 TB SSD | 25 Gbps + 2 x 100 Gbps FPGA/Ethernet | Xilinx Alveo U200 |
| troll | exotic job type | 2019-12-23 | 4 | 2 | Intel Xeon Gold 5218 | 16 | x86_64 | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 25 Gbps (SR-IOV) + 100 Gbps Omni-Path | |
| yeti | exotic job type | 2018-01-16 | 4 | 4 | Intel Xeon Gold 6130 | 16 | x86_64 | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path | |
*: disk is reservable
**: GPUs shown crossed out are not supported by Grid'5000 default environments
Clusters in the default queue
dahu
32 nodes, 64 cpus, 1024 cores (json)
Reservation example:
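A plausible command for this cluster, assuming the standard OAR syntax used on Grid'5000 frontends:

```shell
# Interactive job on one dahu node (default queue)
oarsub -p "cluster='dahu'" -I
```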
Model: | Dell PowerEdge C6420 |
Date of arrival: | 2018-03-22 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU) |
Memory: | 192 GiB |
Storage: | 240 GB SSD + 480 GB SSD + 4.0 TB HDD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
drac
12 nodes, 24 cpus, 240 cores (json)
Reservation example:
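A plausible command, assuming the standard OAR syntax; as an exotic cluster, drac must be requested explicitly:

```shell
# Interactive job on one drac node; the exotic job type is required
oarsub -t exotic -p "cluster='drac'" -I
```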
Access condition: | exotic job type |
Model: | IBM PowerNV S822LC (8335-GTB) |
Date of arrival: | 2020-10-05 |
CPU: | POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU) |
Memory: | 128 GiB |
Storage: | 1.0 TB HDD + 1.0 TB HDD |
Network: | 10 Gbps + 2 x 100 Gbps InfiniBand |
GPU: | 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB) Compute capability: 6.0 |
servan
2 nodes, 4 cpus, 96 cores (json)
Reservation example:
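A plausible command, assuming the standard OAR syntax (servan is an exotic cluster):

```shell
# Interactive job on one servan node
oarsub -t exotic -p "cluster='servan'" -I
```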
Access condition: | exotic job type |
Model: | Dell PowerEdge R7525 |
Date of arrival: | 2021-12-15 |
CPU: | AMD EPYC 7352 (Zen 2, 2 CPUs/node, 24 cores/CPU) |
Memory: | 128 GiB |
Storage: | 1.6 TB SSD + 1.6 TB SSD |
Network: | 25 Gbps + 2 x 100 Gbps FPGA/Ethernet |
FPGA: | Xilinx Alveo U200 |
troll
4 nodes, 8 cpus, 128 cores (json)
Reservation example:
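A plausible command, assuming the standard OAR syntax (troll is an exotic cluster):

```shell
# Interactive job on one troll node
oarsub -t exotic -p "cluster='troll'" -I
```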
Access condition: | exotic job type |
Model: | Dell PowerEdge R640 |
Date of arrival: | 2019-12-23 |
CPU: | Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU) |
Memory: | 384 GiB + 1.5 TiB PMEM |
Storage: | 480 GB SSD + 1.6 TB SSD |
Network: | 25 Gbps (SR-IOV) + 100 Gbps Omni-Path |
yeti
4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)
Reservation example:
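A plausible command, assuming the standard OAR syntax (yeti is an exotic cluster):

```shell
# Interactive job on one yeti node
oarsub -t exotic -p "cluster='yeti'" -I
```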
- yeti-1 (1 node, 4 cpus, 64 cores)
Access condition: | exotic job type |
Model: | Dell PowerEdge R940 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
Memory: | 768 GiB |
Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
- yeti-[2,4] (2 nodes, 8 cpus, 128 cores)
Access condition: | exotic job type |
Model: | Dell PowerEdge R940 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
Memory: | 768 GiB |
Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
- yeti-3 (1 node, 4 cpus, 64 cores)
Access condition: | exotic job type |
Model: | Dell PowerEdge R940 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
Memory: | 768 GiB |
Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
Last generated from the Grid'5000 Reference API on 2023-03-16 (commit 49ff749e0a)