Grenoble:Hardware
<div class="sitelink">Hardware: [[Hardware|Global]] | [[Grenoble:Hardware|Grenoble]] | [[Lille:Hardware|Lille]] | [[Luxembourg:Hardware|Luxembourg]] | [[Lyon:Hardware|Lyon]] | [[Nancy:Hardware|Nancy]] | [[Nantes:Hardware|Nantes]] | [[Rennes:Hardware|Rennes]] | [[Sophia:Hardware|Sophia]]</div>
'''See also:''' [[Grenoble:Network|Network topology for Grenoble]]
= Summary =
* 5 clusters
* 54 nodes
* 1744 CPU cores
* 48 GPUs
* 12.25 TiB RAM + 6.0 TiB PMEM
* 88 SSDs and 68 HDDs on nodes (total: 228.53 TB)
* 101.9 TFLOPS (excluding GPUs)
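The headline totals above can be cross-checked against the per-cluster figures in the Clusters table. A minimal sketch (the numbers are transcribed by hand from this page, not fetched from the Reference API):

```python
# Cross-check of the summary totals, using per-cluster figures from this page:
# cluster -> (nodes, CPUs per node, cores per CPU)
clusters = {
    "dahu":   (32, 2, 16),
    "drac":   (12, 2, 10),
    "servan": (2,  2, 24),
    "troll":  (4,  2, 16),
    "yeti":   (4,  4, 16),
}

nodes = sum(n for n, _, _ in clusters.values())
cores = sum(n * cpus * cpc for n, cpus, cpc in clusters.values())
gpus = 12 * 4  # drac: 12 nodes x 4 Nvidia Tesla P100 each

print(nodes, cores, gpus)  # 54 1744 48
```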
= Clusters =
{|class="wikitable sortable"
|-
! Cluster !! Access Condition !! Date of arrival !! Nodes !! CPU !! Cores !! Memory !! Storage !! Network !! Accelerators
|-
|[[#dahu|dahu]]|| ||2018-03-22||32||2 x Intel Xeon Gold 6130||16 cores/CPU||192 GiB||240 GB SSD + 480 GB SSD + 4.0 TB HDD||10 Gbps + 100 Gbps Omni-Path||
|-
|[[#drac|drac]]||<b>[[Getting_Started#Selecting_specific_resources|exotic]]</b> job type||2020-10-05||12||2 x Power POWER8NVL 1.0||10 cores/CPU||128 GiB||data-sort-value="1862"|<b>1 x 1.0 TB HDD</b> + 1 x 1.0 TB HDD||data-sort-value="210000"|10 Gbps + 2 x 100 Gbps InfiniBand||4 x Nvidia Tesla P100 (16 GiB)
|-
|[[#servan|servan]]||<b>testing</b> queue,<br/><b>[[Getting_Started#Selecting_specific_resources|exotic]]</b> job type||2021-12-15||2||2 x AMD EPYC 7352||24 cores/CPU||128 GiB||data-sort-value="2980"|<b>1 x 1.6 TB SSD</b> + 1 x 1.6 TB SSD||data-sort-value="225000"|25 Gbps + 2 x 100 Gbps Ethernet-FPGA||Xilinx Alveo U200
|-
|[[#troll|troll]]||<b>[[Getting_Started#Selecting_specific_resources|exotic]]</b> job type||2019-12-23||4||2 x Intel Xeon Gold 5218||16 cores/CPU||384 GiB + 1.5 TiB [[PMEM]]||data-sort-value="1937"|<b>480 GB SSD</b> + 1.6 TB SSD||data-sort-value="125000"|25 Gbps (SR‑IOV) + 100 Gbps Omni-Path||
|-
|[[#yeti|yeti]]||<b>[[Getting_Started#Selecting_specific_resources|exotic]]</b> job type||2018-01-16||4||4 x Intel Xeon Gold 6130||16 cores/CPU||768 GiB||480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD||10 Gbps + 100 Gbps Omni-Path||
|}
''*: disk is [[Disk_reservation|reservable]]''
= Clusters in the [https://intranet.grid5000.fr/oar/Grenoble/drawgantt-svg/ default queue] =
== [https://intranet.grid5000.fr/oar/Grenoble/drawgantt-svg/?filter=dahu%20only dahu] ==
'''32 nodes, 64 cpus, 1024 cores''' ([https://public-api.grid5000.fr/stable/sites/grenoble/clusters/dahu/nodes.json?pretty=1 json])
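The per-cluster ''json'' links on this page all follow the same Reference API URL pattern. A small helper can rebuild them; the pattern is inferred from the links on this page, not from an official client:

```python
# Rebuild the Reference API URL used by the "json" links on this page.
# URL pattern inferred from those links; not an official Grid'5000 client.
def nodes_json_url(site: str, cluster: str) -> str:
    return (
        "https://public-api.grid5000.fr/stable"
        f"/sites/{site}/clusters/{cluster}/nodes.json?pretty=1"
    )

print(nodes_json_url("grenoble", "dahu"))
# https://public-api.grid5000.fr/stable/sites/grenoble/clusters/dahu/nodes.json?pretty=1
```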
'''Reservation example:'''
{{Term|location=fgrenoble|cmd=<code class="command">oarsub</code> <code class="env">-p dahu</code> <code>-I</code>}}
{|
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Model:'''
| Dell PowerEdge C6420<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Date of arrival:'''
| 2018-03-22<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''CPU:'''
| Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Memory:'''
| 192 GiB<br/>
|-
|}
== [https://intranet.grid5000.fr/oar/Grenoble/drawgantt-svg/?filter=drac%20only drac] ==
'''12 nodes, 24 cpus, 240 cores''' ([https://public-api.grid5000.fr/stable/sites/grenoble/clusters/drac/nodes.json?pretty=1 json])
'''Reservation example:'''
{{Term|location=fgrenoble|cmd=<code class="command">oarsub</code> <code class="replace">-t exotic</code> <code class="env">-p drac</code> <code>-I</code>}}
{|
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Access condition:'''
| exotic job type<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Model:'''
| IBM PowerNV S822LC (8335-GTB)<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Date of arrival:'''
| 2020-10-05<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''CPU:'''
| POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU)<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Memory:'''
| 128 GiB<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Network:'''
| 
* eth0/enP1p1s0f0, Ethernet, configured rate: 10 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x<br />
* <span style="color:grey">eth1/enP1p1s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment</span><br />
* <span style="color:grey">eth2/enP1p1s0f2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment</span><br />
* <span style="color:grey">eth3/enP1p1s0f3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment</span><br />
* <span style="color:grey">eth4/enP9p7s0f0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment</span><br />
* <span style="color:grey">eth5/enP9p7s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment</span><br />
* ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core<br />
* ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''GPU:'''
| 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB)<br/>Compute capability: 6.0<br/>
|-
|}
== [https://intranet.grid5000.fr/oar/Grenoble/drawgantt-svg/?filter=troll%20only troll] ==
'''4 nodes, 8 cpus, 128 cores''' ([https://public-api.grid5000.fr/stable/sites/grenoble/clusters/troll/nodes.json?pretty=1 json])
'''Reservation example:'''
{{Term|location=fgrenoble|cmd=<code class="command">oarsub</code> <code class="replace">-t exotic</code> <code class="env">-p troll</code> <code>-I</code>}}
{|
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Access condition:'''
| exotic job type<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Model:'''
| Dell PowerEdge R640<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Date of arrival:'''
| 2019-12-23<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''CPU:'''
| Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU)<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Memory:'''
| 384 GiB + 1.5 TiB [[PMEM]]<br/>
|-
|}
== [https://intranet.grid5000.fr/oar/Grenoble/drawgantt-svg/?filter=yeti%20only yeti] ==
'''4 nodes, 16 cpus, 256 cores,''' split as follows due to differences between nodes ([https://public-api.grid5000.fr/stable/sites/grenoble/clusters/yeti/nodes.json?pretty=1 json])
'''Reservation example:'''
{{Term|location=fgrenoble|cmd=<code class="command">oarsub</code> <code class="replace">-t exotic</code> <code class="env">-p yeti</code> <code>-I</code>}}
; yeti-1 (1 node, 4 cpus, 64 cores)
{|
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Access condition:'''
| exotic job type<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Model:'''
| Dell PowerEdge R940<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Date of arrival:'''
| 2018-01-16<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''CPU:'''
| Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Memory:'''
| 768 GiB<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Storage:'''
| 
* disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: <code class="file">/dev/disk0</code>, by-path: <code class="file">/dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0</code>)  (primary disk)<br />
* disk1, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: <code class="file">/dev/disk1</code>[[Disk_reservation|*]], by-path: <code class="file">/dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0</code>) [[Disk_reservation|(reservable)]] <br />
* disk2, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: <code class="file">/dev/disk2</code>[[Disk_reservation|*]], by-path: <code class="file">/dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0</code>) [[Disk_reservation|(reservable)]] <br />
* disk3, 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: <code class="file">/dev/disk3</code>[[Disk_reservation|*]], by-path: <code class="file">/dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0</code>) [[Disk_reservation|(reservable)]] <br />
* disk4, 1.6 TB SSD NVME Dell Dell Express Flash PM1725b 1.6TB AIC (dev: <code class="file">/dev/disk4</code>, by-path: <code class="file">/dev/disk/by-path/pci-0000:59:00.0-nvme-1</code>)  <br />
* disk5, 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: <code class="file">/dev/disk5</code>, by-path: <code class="file">/dev/disk/by-path/pci-0000:6d:00.0-nvme-1</code>)  <br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Network:'''
| 
* eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e<br />
* <span style="color:grey">eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment</span><br />
* <span style="color:grey">eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment</span><br />
* <span style="color:grey">eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment</span><br />
* ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1<br/>
|-
|}
<hr style="height:10pt; visibility:hidden;" />
; yeti-[2,<wbr>4] (2 nodes, 8 cpus, 128 cores)
{|
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Access condition:'''
| exotic job type<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Model:'''
| Dell PowerEdge R940<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Date of arrival:'''
| 2018-01-16<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''CPU:'''
| Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Memory:'''
| 768 GiB<br/>
|-
|}
<hr style="height:10pt; visibility:hidden;" />
; yeti-3 (1 node, 4 cpus, 64 cores)
{|
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Access condition:'''
| exotic job type<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Model:'''
| Dell PowerEdge R940<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Date of arrival:'''
| 2018-01-16<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''CPU:'''
| Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Memory:'''
| 768 GiB<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Network:'''
| 
* ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1<br/>
|-
|}
= Clusters in the testing queue =
== [https://intranet.grid5000.fr/oar/Grenoble/drawgantt-svg/?filter=servan%20only servan] ==
'''2 nodes, 4 cpus, 96 cores''' ([https://public-api.grid5000.fr/stable/sites/grenoble/clusters/servan/nodes.json?pretty=1 json])
'''Reservation example:'''
{{Term|location=fgrenoble|cmd=<code class="command">oarsub</code> <code class="replace">-q testing</code> <code class="replace">-t exotic</code> <code class="env">-p servan</code> <code>-I</code>}}
{|
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Access condition:'''
| testing queue, exotic job type<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Model:'''
| Dell PowerEdge R7525<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Date of arrival:'''
| 2021-12-15<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''CPU:'''
| AMD EPYC 7352 (Zen 2, 2 CPUs/node, 24 cores/CPU)<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Memory:'''
| 128 GiB<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Storage:'''
| 
* disk0, 1.6 TB SSD NVME Dell Dell Ent NVMe v2 AGN MU U.2 1.6TB (dev: <code class="file">/dev/disk0</code>, by-path: <code class="file">/dev/disk/by-path/pci-0000:c3:00.0-nvme-1</code>)  (primary disk)<br />
* disk1, 1.6 TB SSD NVME Dell Dell Ent NVMe v2 AGN MU U.2 1.6TB (dev: <code class="file">/dev/disk1</code>, by-path: <code class="file">/dev/disk/by-path/pci-0000:c4:00.0-nvme-1</code>)  <br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Network:'''
| 
* eth0/eno33, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller E810-C for SFP, driver: ice<br />
* <span style="color:grey">eth1/eno34, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment</span><br />
* <span style="color:grey">eth2/eno35, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment</span><br />
* <span style="color:grey">eth3/eno36, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment</span><br />
* fpga0, Ethernet-FPGA, configured rate: 100 Gbps, model: Xilinx Alveo U200 QSFP28 (100GbE), driver: xrt<br />
* fpga1, Ethernet-FPGA, configured rate: 100 Gbps, model: Xilinx Alveo U200 QSFP28 (100GbE), driver: xrt<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''FPGA:'''
| Xilinx Alveo U200<br/>
|-
|}''<small>Last generated from the Grid'5000 Reference API on 2022-05-24 ([https://github.com/grid5000/reference-repository/commit/e3c52eea14 commit e3c52eea14])</small>''
Revision as of 16:16, 24 May 2022