Grenoble:Hardware
Revision as of 14:13, 3 September 2019
Summary
2 clusters, 36 nodes, 1280 cores, 17.1 TFLOPS
| Cluster | Queue | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network |
|---|---|---|---|---|---|---|---|---|
| dahu | default | 2018-03-22 | 32 | 2 x Intel Xeon Gold 6130 | 16 cores/CPU | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path |
| yeti | default | 2018-01-16 | 4 | 4 x Intel Xeon Gold 6130 | 16 cores/CPU | 768 GiB | 480 GB SSD + 2 x 1.6 TB SSD + 3 x 2.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path |
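The headline figure (36 nodes, 1280 cores) follows from the per-cluster rows above. A minimal sanity-check sketch, using only the node, CPU, and core counts taken from the table:

```python
# Cross-check the summary totals from the per-cluster specs in the table above.
clusters = {
    # name: (nodes, cpus_per_node, cores_per_cpu)
    "dahu": (32, 2, 16),  # 2 x Intel Xeon Gold 6130, 16 cores/CPU
    "yeti": (4, 4, 16),   # 4 x Intel Xeon Gold 6130, 16 cores/CPU
}

total_nodes = sum(n for n, _, _ in clusters.values())
total_cores = sum(n * cpus * cores for n, cpus, cores in clusters.values())

print(total_nodes, total_cores)  # 36 1280
```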
Cluster details
dahu
32 nodes, 64 cpus, 1024 cores (json)
| Model: | Dell PowerEdge C6420 |
|---|---|
| Date of arrival: | 2018-03-22 |
| CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU) |
| Memory: | 192 GiB |
| Storage: | 240 GB SSD + 480 GB SSD + 4.0 TB HDD |
| Network: | 10 Gbps + 100 Gbps Omni-Path |
yeti
4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)
- yeti-[1-2,4] (3 nodes, 12 cpus, 192 cores)
| Model: | Dell PowerEdge R940 |
|---|---|
| Date of arrival: | 2018-01-16 |
| CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
| Memory: | 768 GiB |
| Storage: | 480 GB SSD + 2 x 1.6 TB SSD + 3 x 2.0 TB HDD |
| Network: | 10 Gbps + ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1 |
- yeti-3 (1 node, 4 cpus, 64 cores)
| Model: | Dell PowerEdge R940 |
|---|---|
| Date of arrival: | 2018-01-16 |
| CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
| Memory: | 768 GiB |

Storage:
* 1.6 TB SSD NVMe Dell Express Flash NVMe PM1725 1.6TB AIC (driver: nvme, path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
* 1.6 TB SSD NVMe Dell Express Flash NVMe PM1725 1.6TB AIC (driver: nvme, path: /dev/disk/by-path/pci-0000:6e:00.0-nvme-1)
* 480 GB SSD SAS Intel SSDSC2KG480G8R (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0)
* 2.0 TB HDD SAS Seagate ST2000NX0463 (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0)
* 2.0 TB HDD SAS Seagate ST2000NX0463 (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0)
* 2.0 TB HDD SAS Seagate ST2000NX0463 (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0)

Network:
* eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
* eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
* eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
* eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
* ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
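The "(json)" links above point at the Grid'5000 Reference API, which describes every node of a cluster as a JSON document. A minimal sketch of recomputing a cluster's node/cpu/core counts from such a response might look like the following; the field names (`items`, `architecture`, `nb_procs`, `nb_cores`) and the hand-made sample payload are assumptions for illustration, not the API's actual schema.

```python
# Hypothetical, simplified shape of a Reference API nodes.json response.
# Field names ("items", "architecture", "nb_procs", "nb_cores") are assumed;
# consult the Reference API itself for the real schema.
sample_response = {
    "items": [
        {"uid": f"yeti-{i}", "architecture": {"nb_procs": 4, "nb_cores": 64}}
        for i in range(1, 5)  # yeti-1 .. yeti-4
    ]
}

def cluster_totals(response):
    """Sum nodes, CPUs, and cores over every node record in the response."""
    nodes = response["items"]
    return (
        len(nodes),
        sum(n["architecture"]["nb_procs"] for n in nodes),
        sum(n["architecture"]["nb_cores"] for n in nodes),
    )

print(cluster_totals(sample_response))  # (4, 16, 256)
```

The totals for the sample match the yeti header above: 4 nodes, 16 cpus, 256 cores.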
Last generated from the Grid'5000 Reference API on 2019-09-03 (commit bfe66c15e)