Grenoble:Hardware

From Grid5000
Revision as of 15:28, 29 November 2018 by Ddelabroye (talk | contribs)


Summary

2 clusters, 36 nodes, 1280 cores, 17.1 TFLOPS

Cluster | Queue   | Date of arrival | Nodes | CPU                                     | Memory | Storage                                          | Network
--------|---------|-----------------|-------|-----------------------------------------|--------|--------------------------------------------------|------------------------------
dahu    | default | 2018-03-22      | 32    | 2 x Intel Xeon Gold 6130 (16 cores/CPU) | 192 GB | 223 GB SSD + 447 GB SSD + 3.639 TB HDD           | 10 Gbps + 100 Gbps Omni-Path
yeti    | testing | 2018-01-16      | 4     | 4 x Intel Xeon Gold 6130 (16 cores/CPU) | 768 GB | 446 GB SSD + 2 x 1.455 TB SSD + 3 x 1.819 TB HDD | 10 Gbps + 100 Gbps Omni-Path
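The totals in the summary line follow directly from the per-cluster figures; a short cross-check (numbers taken from the table above):

```python
# Cross-check of the summary totals (36 nodes, 1280 cores)
# using the per-cluster figures from the table above.
clusters = {
    "dahu": {"nodes": 32, "cpus_per_node": 2, "cores_per_cpu": 16},
    "yeti": {"nodes": 4, "cpus_per_node": 4, "cores_per_cpu": 16},
}

total_nodes = sum(c["nodes"] for c in clusters.values())
total_cores = sum(
    c["nodes"] * c["cpus_per_node"] * c["cores_per_cpu"]
    for c in clusters.values()
)

print(total_nodes, total_cores)  # → 36 1280
```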

Cluster details

dahu

32 nodes, 64 CPUs, 1024 cores

Model: Dell PowerEdge C6420
Date of arrival: 2018-03-22
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 192 GB
Storage:
  • 223 GB SSD SATA MZ7KM240HMHQ0D3 (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:00:11.5-ata-3)
  • 447 GB SSD SATA MZ7KM480HMHQ0D3 (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:00:11.5-ata-4)
  • 3.639 TB HDD SATA ST4000NM0265-2DC (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:00:11.5-ata-5)
Network:
  • eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
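The by-path names above are stable identifiers for the physical slots; on a reserved dahu node you can map them to kernel device names (`/dev/sda`, etc.). A minimal sketch; `DISK_DIR` is a hypothetical override (not a Grid'5000 convention) so the loop can be exercised on machines without these disks:

```shell
# Resolve the stable by-path disk names listed above to kernel devices.
# DISK_DIR defaults to the real /dev/disk/by-path on a dahu node;
# it is only an illustrative override for running elsewhere.
DISK_DIR=${DISK_DIR:-/dev/disk/by-path}
for name in pci-0000:00:11.5-ata-3 pci-0000:00:11.5-ata-4 pci-0000:00:11.5-ata-5; do
  link="$DISK_DIR/$name"
  if [ -e "$link" ]; then
    # readlink -f follows the symlink to the kernel device node
    printf '%s -> %s\n' "$name" "$(readlink -f "$link")"
  else
    printf '%s (not present on this machine)\n' "$name"
  fi
done
```

On the node itself, `lsblk -d -o NAME,SIZE,MODEL` gives the same inventory from the kernel's point of view.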

yeti (testing queue)

4 nodes, 16 CPUs, 256 cores

Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GB
Storage:
  • 1.455 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (driver: nvme, path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)
  • 1.455 TB SSD NVME Dell Express Flash NVMe PM1725 1.6TB AIC (driver: nvme, path: /dev/disk/by-path/pci-0000:6e:00.0-nvme-1)
  • 446 GB SSD SAS PERC H740P Adp (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:2:0:0)
  • 1.819 TB HDD SAS PERC H740P Adp (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:2:1:0)
  • 1.819 TB HDD SAS PERC H740P Adp (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:2:2:0)
  • 1.819 TB HDD SAS PERC H740P Adp (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:2:3:0)
Network:
  • eth0/eno113, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno114, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno115, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno116, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

Last generated from the Grid'5000 Reference API on 2018-11-29 (commit d37c7e221)