Grenoble:Hardware
From Grid5000
Last generated from the Grid'5000 Reference API on 2022-05-24 (commit e3c52eea14: https://github.com/grid5000/reference-repository/commit/e3c52eea14)
Revision as of 17:16, 24 May 2022
See also: Network topology for Grenoble
Summary
- 5 clusters
- 54 nodes
- 1744 CPU cores
- 48 GPUs
- 12.25 TiB RAM + 6.0 TiB PMEM
- 88 SSDs and 68 HDDs on nodes (total: 228.53 TB)
- 101.9 TFLOPS (excluding GPUs)
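The aggregate figures above follow directly from the per-cluster table below; a quick cross-check of the node, core, GPU, and RAM totals (values transcribed from that table):

```python
# Cross-check of the Summary figures against the per-cluster table.
# Per-cluster values transcribed from the "Clusters" table on this page.
clusters = {
    "dahu":   {"nodes": 32, "cpus_per_node": 2, "cores_per_cpu": 16, "gpus_per_node": 0, "mem_gib": 192},
    "drac":   {"nodes": 12, "cpus_per_node": 2, "cores_per_cpu": 10, "gpus_per_node": 4, "mem_gib": 128},
    "servan": {"nodes": 2,  "cpus_per_node": 2, "cores_per_cpu": 24, "gpus_per_node": 0, "mem_gib": 128},
    "troll":  {"nodes": 4,  "cpus_per_node": 2, "cores_per_cpu": 16, "gpus_per_node": 0, "mem_gib": 384},
    "yeti":   {"nodes": 4,  "cpus_per_node": 4, "cores_per_cpu": 16, "gpus_per_node": 0, "mem_gib": 768},
}

total_nodes = sum(c["nodes"] for c in clusters.values())
total_cores = sum(c["nodes"] * c["cpus_per_node"] * c["cores_per_cpu"] for c in clusters.values())
total_gpus = sum(c["nodes"] * c["gpus_per_node"] for c in clusters.values())
total_ram_tib = sum(c["nodes"] * c["mem_gib"] for c in clusters.values()) / 1024

print(total_nodes, total_cores, total_gpus, total_ram_tib)  # 54 1744 48 12.25
```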
Clusters
Cluster | Access Condition | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network | Accelerators |
---|---|---|---|---|---|---|---|---|---|
dahu | 2018-03-22 | 32 | 2 x Intel Xeon Gold 6130 | 16 cores/CPU | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | ||
drac | exotic job type | 2020-10-05 | 12 | 2 x Power POWER8NVL 1.0 | 10 cores/CPU | 128 GiB | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100 (16 GiB) |
servan | testing queue, exotic job type | 2021-12-15 | 2 | 2 x AMD EPYC 7352 | 24 cores/CPU | 128 GiB | 1 x 1.6 TB SSD + 1 x 1.6 TB SSD | 25 Gbps + 2 x 100 Gbps Ethernet-FPGA | Xilinx Alveo U200 |
troll | exotic job type | 2019-12-23 | 4 | 2 x Intel Xeon Gold 5218 | 16 cores/CPU | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 25 Gbps (SR‑IOV) + 100 Gbps Omni-Path | |
yeti | exotic job type | 2018-01-16 | 4 | 4 x Intel Xeon Gold 6130 | 16 cores/CPU | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path |
*: disk is reservable
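The access conditions in the table map onto oarsub options in the reservation examples below: a non-default queue becomes `-q <queue>`, the exotic job type becomes `-t exotic`, and the cluster is selected with `-p <cluster>`. A small illustrative helper (not part of Grid'5000 tooling) that assembles such a command line:

```python
# Illustrative helper, not part of Grid'5000 tooling: builds the oarsub
# command line for an interactive job, following the flag pattern used in
# the reservation examples on this page:
#   -q <queue>   non-default queue (e.g. testing)
#   -t exotic    exotic job type
#   -p <cluster> cluster selection
#   -I           interactive job
def oarsub_command(cluster, queue=None, exotic=False):
    parts = ["oarsub"]
    if queue:
        parts += ["-q", queue]
    if exotic:
        parts += ["-t", "exotic"]
    parts += ["-p", cluster, "-I"]
    return " ".join(parts)

print(oarsub_command("dahu"))                                  # oarsub -p dahu -I
print(oarsub_command("servan", queue="testing", exotic=True))  # oarsub -q testing -t exotic -p servan -I
```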
Clusters in the default queue
dahu
32 nodes, 64 cpus, 1024 cores (json)
Reservation example:
fgrenoble$ oarsub -p dahu -I
Model: | Dell PowerEdge C6420 |
Date of arrival: | 2018-03-22 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU) |
Memory: | 192 GiB |
Storage: | 240 GB SSD + 480 GB SSD + 4.0 TB HDD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
drac
12 nodes, 24 cpus, 240 cores (json)
Reservation example:
fgrenoble$ oarsub -t exotic -p drac -I
Access condition: | exotic job type |
Model: | IBM PowerNV S822LC (8335-GTB) |
Date of arrival: | 2020-10-05 |
CPU: | POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU) |
Memory: | 128 GiB |
Storage: | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD |
Network: | 10 Gbps + 2 x 100 Gbps InfiniBand |
GPU: | 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB) Compute capability: 6.0 |
troll
4 nodes, 8 cpus, 128 cores (json)
Reservation example:
fgrenoble$ oarsub -t exotic -p troll -I
Access condition: | exotic job type |
Model: | Dell PowerEdge R640 |
Date of arrival: | 2019-12-23 |
CPU: | Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU) |
Memory: | 384 GiB + 1.5 TiB PMEM |
Storage: | 480 GB SSD + 1.6 TB SSD |
Network: | 25 Gbps (SR‑IOV) + 100 Gbps Omni-Path |
yeti
4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)
Reservation example:
fgrenoble$ oarsub -t exotic -p yeti -I
- yeti-1 (1 node, 4 cpus, 64 cores)
Access condition: | exotic job type |
Model: | Dell PowerEdge R940 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
Memory: | 768 GiB |
Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
- yeti-[2,4] (2 nodes, 8 cpus, 128 cores)
Access condition: | exotic job type |
Model: | Dell PowerEdge R940 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
Memory: | 768 GiB |
Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
- yeti-3 (1 node, 4 cpus, 64 cores)
Access condition: | exotic job type |
Model: | Dell PowerEdge R940 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
Memory: | 768 GiB |
Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
Clusters in the testing queue
servan
2 nodes, 4 cpus, 96 cores (json)
Reservation example:
fgrenoble$ oarsub -q testing -t exotic -p servan -I
Access condition: | testing queue, exotic job type |
Model: | Dell PowerEdge R7525 |
Date of arrival: | 2021-12-15 |
CPU: | AMD EPYC 7352 (Zen 2, 2 CPUs/node, 24 cores/CPU) |
Memory: | 128 GiB |
Storage: | 1 x 1.6 TB SSD + 1 x 1.6 TB SSD |
Network: | 25 Gbps + 2 x 100 Gbps Ethernet-FPGA (2 x Xilinx Alveo U200 QSFP28, driver: xrt) |
FPGA: | Xilinx Alveo U200 |