Grenoble:Hardware

From Grid5000

Revision as of 11:55, 15 April 2021

Summary

4 clusters, 52 nodes, 1648 cores, 98.4 TFLOPS

Cluster | Access Condition | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network | Accelerators
dahu | | 2018-03-22 | 32 | 2 x Intel Xeon Gold 6130 | 16 cores/CPU | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path |
drac | exotic job type | 2020-10-05 | 12 | 2 x POWER8NVL 1.0 | 10 cores/CPU | 128 GiB | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100
troll | exotic job type | 2019-12-23 | 4 | 2 x Intel Xeon Gold 5218 | 16 cores/CPU | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 25 Gbps + 100 Gbps Omni-Path |
yeti | | 2018-01-16 | 4 | 4 x Intel Xeon Gold 6130 | 16 cores/CPU | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path |

*: disk is reservable
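The summary totals can be cross-checked against the per-cluster rows above; a quick sketch of the arithmetic:

```python
# Cross-check "52 nodes, 1648 cores" from the per-cluster figures in the table.
clusters = {
    "dahu":  {"nodes": 32, "cpus_per_node": 2, "cores_per_cpu": 16},
    "drac":  {"nodes": 12, "cpus_per_node": 2, "cores_per_cpu": 10},
    "troll": {"nodes": 4,  "cpus_per_node": 2, "cores_per_cpu": 16},
    "yeti":  {"nodes": 4,  "cpus_per_node": 4, "cores_per_cpu": 16},
}

nodes = sum(c["nodes"] for c in clusters.values())
cores = sum(c["nodes"] * c["cpus_per_node"] * c["cores_per_cpu"]
            for c in clusters.values())
print(nodes, cores)  # 52 1648
```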

Clusters in default queue

dahu

32 nodes, 64 cpus, 1024 cores (json)

Reservation example:

fgrenoble$ oarsub -p "cluster='dahu'" -I
Model: Dell PowerEdge C6420
Date of arrival: 2018-03-22
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 192 GiB
Storage:
  • 240 GB SSD SATA Samsung MZ7KM240HMHQ0D3 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
  • 480 GB SSD SATA Samsung MZ7KM480HMHQ0D3 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-4)
  • 4.0 TB HDD SATA Seagate ST4000NM0265-2DC (dev: /dev/sdc, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-5)
Network:
  • eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
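Each cluster's node list is also available machine-readably from the Reference API (the json links above). The sketch below fetches and summarizes such a collection; note that the response field names used here ("items", "uid") are assumptions about the payload shape, so verify them against the live API output:

```python
import json
import urllib.request

API = "https://public-api.grid5000.fr/stable/sites/{site}/clusters/{cluster}/nodes.json"

def summarize(nodes_json):
    """Extract node uids from a Reference API nodes collection.

    The "items"/"uid" field names are assumptions about the response
    shape; check them against the actual JSON before relying on this.
    """
    return sorted(node["uid"] for node in nodes_json.get("items", []))

def fetch(site="grenoble", cluster="dahu"):
    # Live query against the public API (only works with network access).
    with urllib.request.urlopen(API.format(site=site, cluster=cluster)) as r:
        return summarize(json.load(r))

# Offline illustration with a hand-made payload of the assumed shape:
sample = {"items": [{"uid": "dahu-2"}, {"uid": "dahu-1"}]}
print(summarize(sample))  # ['dahu-1', 'dahu-2']
```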

drac

12 nodes, 24 cpus, 240 cores (json)

Reservation example:

fgrenoble$ oarsub -t exotic -p "cluster='drac'" -I
Access condition: exotic job type
Model: IBM PowerNV S822LC (8335-GTB)
Date of arrival: 2020-10-05
CPU: POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU)
Memory: 128 GiB
Storage:
  • 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-1) (primary disk)
  • 1.0 TB HDD SATA Seagate ST1000NX0313 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0009:04:00.0-ata-2)
Network:
  • eth0/enP1p1s0f0, Ethernet, configured rate: 10 Gbps, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x
  • eth1/enP1p1s0f1, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth2/enP1p1s0f2, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth3/enP1p1s0f3, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth4/enP9p7s0f0, Ethernet, configured rate: 1 Gbps, model: Broadcom NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/enP9p7s0f1, Ethernet, model: Broadcom NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
  • ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core
GPU: 4 x Nvidia Tesla P100-SXM2-16GB

troll

4 nodes, 8 cpus, 128 cores (json)

Reservation example:

fgrenoble$ oarsub -t exotic -p "cluster='troll'" -I
Access condition: exotic job type
Model: Dell PowerEdge R640
Date of arrival: 2019-12-23
CPU: Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 384 GiB + 1.5 TiB PMEM
Storage:
  • 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • 1.6 TB SSD NVME Dell Dell Ent NVMe AGN MU AIC 1.6TB (dev: /dev/nvme0n1, by-path: /dev/disk/by-path/pci-0000:d8:00.0-nvme-1)
Network:
  • eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core
  • eth1/eno2, Ethernet, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti

4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)

Reservation example:

fgrenoble$ oarsub -p "cluster='yeti'" -I
yeti-[1-2,4] (3 nodes, 12 cpus, 192 cores)
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/sdd*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/nvme0n1, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
  • 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/nvme1n1, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)

*: the disk block device name /dev/sd? may vary in deployed environments, prefer referring to the by-path identifier

Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

yeti-3 (1 node, 4 cpus, 64 cores)
Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU)
Memory: 768 GiB
Storage:
  • 480 GB SSD SATA Intel SSDSC2KG480G8R (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • 2.0 TB HDD SAS Seagate ST2000NX0463 (dev: /dev/sdd*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/nvme0n1, by-path: /dev/disk/by-path/pci-0000:59:00.0-nvme-1)
  • 1.6 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (dev: /dev/nvme1n1, by-path: /dev/disk/by-path/pci-0000:6d:00.0-nvme-1)

*: the disk block device name /dev/sd? may vary in deployed environments, prefer referring to the by-path identifier

Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
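The by-path caveat in the storage lists above can be handled in scripts by resolving the stable symlink at run time rather than hard-coding a /dev/sdX name. A minimal sketch, demonstrated on a throwaway symlink since the real /dev/disk/by-path entries only exist on the nodes:

```python
import os
import tempfile

def resolve_disk(by_path):
    """Return the current kernel device path behind a by-path symlink.

    On a node, by_path would be e.g.
    /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0 and the result
    something like /dev/sdb -- but which letter is not guaranteed
    across deployed environments, which is why by-path is preferred.
    """
    return os.path.realpath(by_path)

# Illustration with a temporary symlink standing in for a by-path entry:
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "sdb")  # stand-in for /dev/sdb
    open(target, "w").close()
    link = os.path.join(d, "pci-0000.18.00.0-scsi-0.0.1.0")
    os.symlink(target, link)
    print(os.path.basename(resolve_disk(link)))  # sdb
```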

Last generated from the Grid'5000 Reference API on 2021-04-15 (commit 434756fbbe)