Grenoble:Hardware
Revision as of 15:18, 6 November 2020
Summary
4 clusters, 52 nodes, 1648 cores, 98.4 TFLOPS
| Cluster | Access Condition | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network |
|---|---|---|---|---|---|---|---|---|
| dahu | | 2018-03-22 | 32 | 2 x Intel Xeon Gold 6130 | 16 cores/CPU | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path |
| drac | testing queue, exotic job type | 2020-10-05 | 12 | 2 x POWER8NVL 1.0 (pvr 004c 0100) | 10 cores/CPU | 128 GiB | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD | 1 Gbps |
| troll | | 2019-12-23 | 4 | 2 x Intel Xeon Gold 5218 | 16 cores/CPU | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path |
| yeti | | 2018-01-16 | 4 | 4 x Intel Xeon Gold 6130 | 16 cores/CPU | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path |
*: disk is reservable
Cluster details
dahu
32 nodes, 64 cpus, 1024 cores (json)
| Model: | Dell PowerEdge C6420 |
| Date of arrival: | 2018-03-22 |
| CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU) |
| Memory: | 192 GiB |
| Storage: | 240 GB SSD + 480 GB SSD + 4.0 TB HDD |
| Network: | 10 Gbps + 100 Gbps Omni-Path |
drac (testing queue, exotic job type)
12 nodes, 24 cpus, 240 cores, split as follows due to differences between nodes (json)
- drac-[1-4] (4 nodes, 8 cpus, 80 cores)
| Model: | IBM PowerNV S822LC (8335-GTB) |
| Date of arrival: | 2020-10-05 |
| CPU: | POWER8NVL 1.0 (pvr 004c 0100) (POWER8NVL, 2 CPUs/node, 10 cores/CPU) |
| Memory: | 128 GiB |
| Storage: | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD |
| Network: | 1 Gbps (additional Ethernet and InfiniBand ConnectX-4 interfaces are present but unavailable for experiment) |
- drac-[5-12] (8 nodes, 16 cpus, 160 cores)
| Model: | IBM PowerNV S822LC (8335-GTB) |
| Date of arrival: | 2020-10-05 |
| CPU: | POWER8NVL 1.0 (pvr 004c 0100) (POWER8NVL, 2 CPUs/node, 10 cores/CPU) |
| Memory: | 128 GiB |
| Storage: | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD |
| Network: | 1 Gbps (additional Ethernet and InfiniBand ConnectX-4 interfaces are present but unavailable for experiment) |
troll
4 nodes, 8 cpus, 128 cores (json)
| Model: | Dell PowerEdge R640 |
| Date of arrival: | 2019-12-23 |
| CPU: | Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU) |
| Memory: | 384 GiB + 1.5 TiB PMEM |
| Storage: | 480 GB SSD + 1.6 TB SSD |
| Network: | 10 Gbps + 100 Gbps Omni-Path |
yeti
4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)
- yeti-[1-2,4] (3 nodes, 12 cpus, 192 cores)
| Model: | Dell PowerEdge R940 |
| Date of arrival: | 2018-01-16 |
| CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
| Memory: | 768 GiB |
| Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 1.6 TB SSD |
| Network: | 10 Gbps + 100 Gbps Omni-Path |

*: the disk block device name /dev/sd? may vary in deployed environments; prefer referring to the by-path identifier
- yeti-3 (1 node, 4 cpus, 64 cores)
| Model: | Dell PowerEdge R940 |
| Date of arrival: | 2018-01-16 |
| CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
| Memory: | 768 GiB |
| Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 1.6 TB SSD |
| Network: | 10 Gbps + 100 Gbps Omni-Path |

*: the disk block device name /dev/sd? may vary in deployed environments; prefer referring to the by-path identifier
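The by-path advice above works because /dev/disk/by-path/ contains stable symlinks (derived from the PCI/SCSI topology) that resolve to whatever /dev/sd? name the kernel happened to assign at boot. A minimal sketch of that resolution step; the by-path name below is hypothetical, and a temporary directory stands in for /dev so the snippet runs anywhere:

```python
import os
import tempfile

def resolve_by_path(link: str) -> str:
    """Return the kernel block-device name a by-path symlink points to."""
    return os.path.basename(os.path.realpath(link))

# On a real node one would iterate over /dev/disk/by-path/; here we build
# a stand-in symlink in a temp directory to demonstrate the resolution.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "sdb")  # stand-in for /dev/sdb
    open(target, "w").close()
    link = os.path.join(d, "pci-0000:18:00.0-scsi-0:0:1:0")  # hypothetical by-path name
    os.symlink(target, link)
    print(resolve_by_path(link))  # -> sdb
```

Scripts that hardcode the by-path name keep working even if the kernel reorders sda/sdb/... between deployments.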
Last generated from the Grid'5000 Reference API on 2020-11-06 (commit 1c4fd6704)
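This page is generated from the Grid'5000 Reference API, which describes each cluster as JSON. As a sketch of working with such data, the summary line "4 clusters, 52 nodes, 1648 cores" can be recomputed; the abridged records below are transcribed by hand from the tables above, not a live API response, and a real response carries many more fields:

```python
# Abridged per-cluster records, transcribed from the summary table above.
clusters = {
    "dahu":  {"nodes": 32, "cpus_per_node": 2, "cores_per_cpu": 16},
    "drac":  {"nodes": 12, "cpus_per_node": 2, "cores_per_cpu": 10},
    "troll": {"nodes": 4,  "cpus_per_node": 2, "cores_per_cpu": 16},
    "yeti":  {"nodes": 4,  "cpus_per_node": 4, "cores_per_cpu": 16},
}

# Totals: nodes are summed directly; cores are nodes x CPUs/node x cores/CPU.
nodes = sum(c["nodes"] for c in clusters.values())
cores = sum(c["nodes"] * c["cpus_per_node"] * c["cores_per_cpu"]
            for c in clusters.values())
print(f"{len(clusters)} clusters, {nodes} nodes, {cores} cores")
# -> 4 clusters, 52 nodes, 1648 cores
```

The computed totals match the Summary section, which is a useful sanity check when the per-cluster tables are edited.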