Grenoble:Hardware
Revision as of 09:23, 28 September 2022
See also: Network topology for Grenoble
Summary
- 5 clusters
- 54 nodes
- 1744 CPU cores
- 48 GPUs
- 12.25 TiB RAM + 6.0 TiB PMEM
- 88 SSDs and 68 HDDs on nodes (total: 228.53 TB)
- 101.9 TFLOPS (excluding GPUs)
Clusters
| Cluster | Access Condition | Date of arrival | Nodes | CPUs/node | CPU model | Cores | Architecture | Memory | Storage | Network | Accelerators |
|---|---|---|---|---|---|---|---|---|---|---|---|
| dahu | | 2018-03-22 | 32 | 2 | Intel Xeon Gold 6130 | 16 cores/CPU | x86_64 | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | |
| drac | exotic job type | 2020-10-05 | 12 | 2 | POWER8NVL 1.0 | 10 cores/CPU | ppc64le | 128 GiB | 1.0 TB HDD + 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100 (16 GiB) |
| servan | exotic job type | 2021-12-15 | 2 | 2 | AMD EPYC 7352 | 24 cores/CPU | x86_64 | 128 GiB | 1.6 TB SSD + 1.6 TB SSD | 25 Gbps + 2 x 100 Gbps FPGA/Ethernet | Xilinx Alveo U200 |
| troll | exotic job type | 2019-12-23 | 4 | 2 | Intel Xeon Gold 5218 | 16 cores/CPU | x86_64 | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 25 Gbps (SR-IOV) + 100 Gbps Omni-Path | |
| yeti | exotic job type | 2018-01-16 | 4 | 4 | Intel Xeon Gold 6130 | 16 cores/CPU | x86_64 | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path | |
*: disk is reservable
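The summary figures can be sanity-checked against the cluster table. A small shell sketch (plain arithmetic, nothing Grid'5000-specific) recomputes nodes, cores, and GPUs from the per-cluster columns:

```shell
# Nodes per cluster: dahu 32, drac 12, servan 2, troll 4, yeti 4
nodes=$((32 + 12 + 2 + 4 + 4))
# Cores = nodes x CPUs/node x cores/CPU, summed over the five clusters
cores=$((32*2*16 + 12*2*10 + 2*2*24 + 4*2*16 + 4*4*16))
# GPUs: only drac has them, 4 x Tesla P100 per node
gpus=$((12 * 4))
echo "$nodes nodes, $cores cores, $gpus GPUs"
# prints: 54 nodes, 1744 cores, 48 GPUs
```

These totals match the 54 nodes, 1744 CPU cores, and 48 GPUs listed in the summary above.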
Clusters in the default queue
dahu
32 nodes, 64 cpus, 1024 cores (json)
Reservation example:
Model: Dell PowerEdge C6420
Date of arrival: 2018-03-22
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 192 GiB
Storage: 240 GB SSD + 480 GB SSD + 4.0 TB HDD
Network: 10 Gbps + 100 Gbps Omni-Path
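The reservation example itself did not survive extraction. A plausible command, assuming the standard OAR syntax used on Grid'5000 frontends (`oarsub` and the `-p` property filter are the usual mechanism, but check the site documentation for current options):

```shell
# Interactive job on one dahu node, run from the Grenoble frontend
oarsub -p "cluster='dahu'" -I
```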
drac
12 nodes, 24 cpus, 240 cores (json)
Reservation example:
Access condition: exotic job type
Model: IBM PowerNV S822LC (8335-GTB)
Date of arrival: 2020-10-05
CPU: POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU)
Memory: 128 GiB
Storage: 1.0 TB HDD + 1.0 TB HDD
Network: 10 Gbps + 2 x 100 Gbps InfiniBand
GPU: 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB), compute capability 6.0
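A sketch of the lost reservation example for drac, assuming the usual OAR invocation for exotic resources (verify against the site's Getting Started page):

```shell
# drac is ppc64le and exotic: the job type must be requested explicitly
oarsub -t exotic -p "cluster='drac'" -I
```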
servan
2 nodes, 4 cpus, 96 cores (json)
Reservation example:
Access condition: exotic job type
Model: Dell PowerEdge R7525
Date of arrival: 2021-12-15
CPU: AMD EPYC 7352 (Zen 2, 2 CPUs/node, 24 cores/CPU)
Memory: 128 GiB
Storage: 1.6 TB SSD + 1.6 TB SSD
Network: 25 Gbps + 2 x 100 Gbps FPGA/Ethernet
FPGA: Xilinx Alveo U200
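The reservation example for servan was lost in extraction; a hedged sketch, assuming standard OAR syntax (the exact form may differ on the frontend):

```shell
# servan nodes carry Alveo U200 FPGAs and require the exotic job type
oarsub -t exotic -p "cluster='servan'" -I
```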
troll
4 nodes, 8 cpus, 128 cores (json)
Reservation example:
Access condition: exotic job type
Model: Dell PowerEdge R640
Date of arrival: 2019-12-23
CPU: Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 384 GiB + 1.5 TiB PMEM
Storage: 480 GB SSD + 1.6 TB SSD
Network: 25 Gbps (SR-IOV) + 100 Gbps Omni-Path
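A plausible stand-in for troll's missing reservation example, again assuming the common OAR pattern for exotic clusters:

```shell
# troll's PMEM-equipped nodes sit behind the exotic job type
oarsub -t exotic -p "cluster='troll'" -I
```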
yeti
4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)
Reservation example:
- yeti-1 (1 node, 4 cpus, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
Network:
- yeti-[2,4] (2 nodes, 8 cpus, 128 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
Network:
- yeti-3 (1 node, 4 cpus, 64 cores)
Access condition: exotic job type
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
Network:
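A sketch of a yeti reservation, assuming standard OAR syntax; to target one of the splits above, a host property filter can be added (exact host-name syntax varies by site, so consult the frontend documentation):

```shell
# Any yeti node; the cluster's HDDs marked * are separately reservable
# via the disk reservation mechanism (see the Grid'5000 docs)
oarsub -t exotic -p "cluster='yeti'" -I
```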
Last generated from the Grid'5000 Reference API on 2022-09-28 (commit eec26642dd)