Lyon:Hardware

From Grid5000

Revision as of 11:48, 21 December 2021

See also: Network topology for Lyon

Summary

9 clusters, 74 nodes, 1590 cores, 44.8 TFLOPS

Cluster | Access Condition | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network | Accelerators
gemini | exotic job type | 2019-09-01 | 2 | 2 x Intel Xeon E5-2698 v4 | 20 cores/CPU | 512 GiB | 480 GB SSD + 4 x 1.92 TB SSD* | 10 Gbps (SR‑IOV) + 3 x 100 Gbps InfiniBand | 8 x Nvidia Tesla V100 (32 GiB)
hercule | | 2012-10-02 | 4 | 2 x Intel Xeon E5-2620 | 6 cores/CPU | 32 GiB | 1 x 2.0 TB HDD + 2 x 2.0 TB HDD | 10 Gbps (SR‑IOV) |
neowise | exotic job type | 2021-05-17 | 10 | AMD EPYC 7642 | 48 cores/CPU | 512 GiB | 1.92 TB SSD | 2 x 10 Gbps (SR‑IOV) + 2 x 100 Gbps InfiniBand | 8 x AMD MI50 (32 GiB)
nova | | 2016-12-01 | 23 | 2 x Intel Xeon E5-2620 v4 | 8 cores/CPU | 64 GiB | 598 GB HDD | 10 Gbps (SR‑IOV) |
orion | | 2012-09-14 | 4 | 2 x Intel Xeon E5-2630 | 6 cores/CPU | 32 GiB | 299 GB HDD | 10 Gbps (SR‑IOV) | Nvidia Tesla M2075 (5 GiB)
pyxis | exotic job type | 2020-01-06 | 4 | 2 x ARM ThunderX2 99xx | 32 cores/CPU | 256 GiB | 1 x 250 GB SSD + 1 x 250 GB SSD | 10 Gbps (SR‑IOV) + 100 Gbps InfiniBand |
sagittaire | | 2006-07-01 | 13 | 2 x AMD Opteron 250 | 1 core/CPU | 2 GiB | 73 GB HDD | 1 Gbps |
sirius | testing queue | 2021-11-18 | 1 | 2 x AMD EPYC 7742 | 64 cores/CPU | 1.0 TiB | 1 x 1.92 TB SSD + 1 x 1.92 TB SSD + 4 x 3.84 TB SSD | 1 Gbps + 3 x 10 Gbps InfiniBand |
taurus | | 2012-09-14 | 13 | 2 x Intel Xeon E5-2630 | 6 cores/CPU | 32 GiB | 299 GB HDD | 10 Gbps (SR‑IOV) |

*: disk is reservable
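As a sanity check, the headline figures in the summary can be recomputed from the per-cluster rows (nodes × CPUs/node × cores/CPU); the sketch below hard-codes the table values rather than querying the API:

```python
# Cross-check of the summary totals from the per-cluster figures above.
# Tuples are (nodes, cpus_per_node, cores_per_cpu) as listed in the table;
# sirius (testing queue) is included in the 9-cluster headline count.
clusters = {
    "gemini":     (2, 2, 20),
    "hercule":    (4, 2, 6),
    "neowise":    (10, 1, 48),
    "nova":       (23, 2, 8),
    "orion":      (4, 2, 6),
    "pyxis":      (4, 2, 32),
    "sagittaire": (13, 2, 1),
    "sirius":     (1, 2, 64),
    "taurus":     (13, 2, 6),
}

nodes = sum(n for n, _, _ in clusters.values())
cores = sum(n * c * k for n, c, k in clusters.values())
print(len(clusters), nodes, cores)  # 9 74 1590
```

The totals match the summary line: 9 clusters, 74 nodes, 1590 cores.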

Clusters in default queue

gemini

2 nodes, 4 cpus, 80 cores (json)
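The "(json)" links point at the Grid'5000 reference API; node and core counts like the ones above can be derived from that payload. The sketch below assumes the response has an `items` list with an `architecture.nb_cores` field (per-node core count) — verify against a real payload before relying on it:

```python
# Sketch: derive node/core counts from a Grid'5000 reference-API
# nodes.json payload. The "items" / "architecture.nb_cores" field names
# are assumptions about the API schema.
import json
from urllib.request import urlopen

def totals(payload: dict) -> tuple[int, int]:
    """Return (node_count, total_core_count) for one cluster payload."""
    items = payload.get("items", [])
    return len(items), sum(n["architecture"]["nb_cores"] for n in items)

def fetch_totals(cluster: str, site: str = "lyon") -> tuple[int, int]:
    """Fetch a cluster's nodes.json (URL pattern from this page's links)."""
    url = (f"https://public-api.grid5000.fr/stable/sites/{site}"
           f"/clusters/{cluster}/nodes.json")
    with urlopen(url) as resp:
        return totals(json.load(resp))

# Offline example: a two-node payload shaped like gemini (40 cores/node).
sample = {"items": [{"architecture": {"nb_cores": 40}},
                    {"architecture": {"nb_cores": 40}}]}
print(totals(sample))  # (2, 80)
```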

Reservation example:

flyon$ oarsub -t exotic -p "cluster='gemini'" -I
Access condition: exotic job type
Model: Nvidia DGX-1
Date of arrival: 2019-09-01
CPU: Intel Xeon E5-2698 v4 (Broadwell, 2.20GHz, 2 CPUs/node, 20 cores/CPU)
Memory: 512 GiB
Storage:
  • disk0, 480 GB SSD SATA Samsung SAMSUNG MZ7KM480 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:14:0) (primary disk)
  • disk1, 1.92 TB SSD SATA Samsung SAMSUNG MZ7LM1T9 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:15:0) (reservable)
  • disk2, 1.92 TB SSD SATA Samsung SAMSUNG MZ7LM1T9 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:16:0) (reservable)
  • disk3, 1.92 TB SSD SATA Samsung SAMSUNG MZ7LM1T9 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:17:0) (reservable)
  • disk4, 1.92 TB SSD SATA Samsung SAMSUNG MZ7LM1T9 (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:18:0) (reservable)
Network:
  • eth0/enp1s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller 10-Gigabit X540-AT2, driver: ixgbe, SR-IOV enabled
  • eth1/enp1s0f1, Ethernet, model: Intel Ethernet Controller 10-Gigabit X540-AT2, driver: ixgbe - unavailable for experiment
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • ib2, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • ib3, InfiniBand, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core - unavailable for experiment
GPU: 8 x Nvidia Tesla V100-SXM2-32GB (32 GiB)
Compute capability: 7.0

hercule

4 nodes, 8 cpus, 48 cores (json)

Reservation example:

flyon$ oarsub -p "cluster='hercule'" -I
Model: Dell PowerEdge C6220
Date of arrival: 2012-10-02
CPU: Intel Xeon E5-2620 (Sandy Bridge, 2.00GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 32 GiB
Storage:
  • disk0, 2.0 TB HDD SATA Western Digital WDC WD2003FYYS-1 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1) (primary disk)
  • disk1, 2.0 TB HDD SATA Western Digital WDC WD2003FYYS-1 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-2)
  • disk2, 2.0 TB HDD SATA Western Digital WDC WD2003FYYS-1 (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-3)
Network:
  • eth0/enp130s0f0, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth1/enp130s0f1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment

neowise

10 nodes, 10 cpus, 480 cores (json)

Reservation example:

flyon$ oarsub -t exotic -p "cluster='neowise'" -I
Access condition: exotic job type
Model: AMD-Penguin Computing
Date of arrival: 2021-05-17
CPU: AMD EPYC 7642 (Zen 2, 1 CPU/node, 48 cores/CPU)
Memory: 512 GiB
Storage: disk0, 1.92 TB SSD NVME Samsung SAMSUNG MZ1LB1T9HALS-00007 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:82:00.0-nvme-1) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled - no KaVLAN
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
  • ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
GPU: 8 x AMD Radeon Instinct MI50 32GB (32 GiB)

nova

23 nodes, 46 cpus, 368 cores (json)

Reservation example:

flyon$ oarsub -p "cluster='nova'" -I
Model: Dell PowerEdge R430
Date of arrival: 2016-12-01
CPU: Intel Xeon E5-2620 v4 (Broadwell, 2.10GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 64 GiB
Storage: disk0, 598 GB HDD RAID-0 (2 disks) Dell PERC H330 Mini (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/enp5s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled
  • eth1/enp5s0f1, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth2/eno1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth4/eno3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/eno4, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment

orion

4 nodes, 8 cpus, 48 cores (json)

Reservation example:

flyon$ oarsub -p "cluster='orion'" -I
Model: Dell PowerEdge R720
Date of arrival: 2012-09-14
CPU: Intel Xeon E5-2630 (Sandy Bridge, 2.30GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 32 GiB
Storage: disk0, 299 GB HDD RAID-0 (1 disk) Dell PERC H710 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/enp68s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled
  • eth1/enp68s0f1, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: Nvidia Tesla M2075 (5 GiB)
Compute capability: 2.0

pyxis

4 nodes, 8 cpus, 256 cores (json)

Reservation example:

flyon$ oarsub -t exotic -p "cluster='pyxis'" -I
Access condition: exotic job type
Model: R181-T92-00
Date of arrival: 2020-01-06
CPU: ThunderX2 99xx (Vulcan, 2 CPUs/node, 32 cores/CPU)
Memory: 256 GiB
Storage:
  • disk0, 250 GB SSD SATA Samsung Samsung SSD 860 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:0f:00.0-sas-phy2-lun-0) (primary disk)
  • disk1, 250 GB SSD SATA Samsung Samsung SSD 860 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:0f:00.0-sas-phy3-lun-0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: QLogic Corp. FastLinQ QL41000 Series 10/25/40/50GbE Controller, driver: qede, SR-IOV enabled
  • eth1/eno2, Ethernet, model: QLogic Corp. FastLinQ QL41000 Series 10/25/40/50GbE Controller, driver: qede - unavailable for experiment
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core

sagittaire

13 nodes, 26 cpus, 26 cores, split as follows due to differences between nodes (json)

Reservation example:

flyon$ oarsub -p "cluster='sagittaire'" -I
sagittaire-[11-12] (2 nodes, 4 cpus, 4 cores)
Model: Sun Fire V20z
Date of arrival: 2006-07-01
CPU: AMD Opteron 250 (K8, 2 CPUs/node, 1 core/CPU)
Memory: 2 GiB
Storage: disk0, 73 GB HDD SCSI Seagate ST373307LC (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:02:04.0-scsi-0:0:1:0) (primary disk)
Network:
  • eth0/enp2s2f0, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5704 Gigabit Ethernet, driver: tg3 - unavailable for experiment
  • eth1/enp2s2f1, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5704 Gigabit Ethernet, driver: tg3

sagittaire-[2-6,13-16,18] (10 nodes, 20 cpus, 20 cores)
Model: Sun Fire V20z
Date of arrival: 2006-07-01
CPU: AMD Opteron 250 (K8, 2 CPUs/node, 1 core/CPU)
Memory: 2 GiB
Storage: disk0, 73 GB HDD SCSI Seagate ST373207LC (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:02:04.0-scsi-0:0:1:0) (primary disk)
Network:
  • eth0/enp2s2f0, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5704 Gigabit Ethernet, driver: tg3 - unavailable for experiment
  • eth1/enp2s2f1, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5704 Gigabit Ethernet, driver: tg3

sagittaire-17 (1 node, 2 cpus, 2 cores)
Model: Sun Fire V20z
Date of arrival: 2006-07-01
CPU: AMD Opteron 250 (K8, 2 CPUs/node, 1 core/CPU)
Memory: 2 GiB
Storage: disk0, 73 GB HDD SCSI Fujitsu MAT3073NC (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:02:04.0-scsi-0:0:1:0) (primary disk)
Network:
  • eth0/enp2s2f0, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5704 Gigabit Ethernet, driver: tg3 - unavailable for experiment
  • eth1/enp2s2f1, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5704 Gigabit Ethernet, driver: tg3
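The bracketed node lists above, such as sagittaire-[2-6,13-16,18], are a compact node-set notation. A minimal expander is sketched below; it is a hypothetical helper, not part of the Grid'5000 tooling (tools like ClusterShell's nodeset provide the full-featured version):

```python
import re

def expand_nodeset(spec: str) -> list[str]:
    """Expand 'name-[2-6,13-16,18]' into individual hostnames.

    Minimal sketch for the bracketed ranges used on this page; plain
    names such as 'sagittaire-17' are returned unchanged.
    """
    m = re.fullmatch(r"(.+)-\[([\d,\-]+)\]", spec)
    if not m:
        return [spec]
    prefix, ranges = m.groups()
    hosts = []
    for part in ranges.split(","):
        lo, _, hi = part.partition("-")           # '2-6' -> ('2', '6'); '18' -> ('18', '')
        for i in range(int(lo), int(hi or lo) + 1):
            hosts.append(f"{prefix}-{i}")
    return hosts

print(expand_nodeset("sagittaire-[2-6,13-16,18]"))
# ['sagittaire-2', ..., 'sagittaire-6', 'sagittaire-13', ..., 'sagittaire-16', 'sagittaire-18']
```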

taurus

13 nodes, 26 cpus, 156 cores (json)

Reservation example:

flyon$ oarsub -p "cluster='taurus'" -I
Model: Dell PowerEdge R720
Date of arrival: 2012-09-14
CPU: Intel Xeon E5-2630 (Sandy Bridge, 2.30GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 32 GiB
Storage: disk0, 299 GB HDD RAID-0 (1 disk) Dell PERC H710 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/enp68s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled
  • eth1/enp68s0f1, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment

Clusters in testing queue

sirius

1 node, 2 cpus, 128 cores (json)

Reservation example:

flyon$ oarsub -q testing -p "cluster='sirius'" -I
Access condition: testing queue
Model: NVidia DGX A100
Date of arrival: 2021-11-18
CPU: AMD EPYC 7742 (Zen 2, 2 CPUs/node, 64 cores/CPU)
Memory: 1.0 TiB
Storage:
  • disk0, 1.92 TB SSD NVME Samsung SAMSUNG MZ1LB1T9HALS-00007 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:22:00.0-nvme-1) (primary disk)
  • disk1, 1.92 TB SSD NVME Samsung SAMSUNG MZ1LB1T9HALS-00007 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:23:00.0-nvme-1)
  • disk2, 3.84 TB SSD NVME Kioxia KCM6DRUL3T84 (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:09:00.0-nvme-1)
  • disk3, 3.84 TB SSD NVME Kioxia KCM6DRUL3T84 (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:52:00.0-nvme-1)
  • disk4, 3.84 TB SSD NVME Kioxia KCM6DRUL3T84 (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:ca:00.0-nvme-1)
  • disk5, 3.84 TB SSD NVME Kioxia KCM6DRUL3T84 (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:8a:00.0-nvme-1)
Network:
  • eth0, Ethernet, configured rate: 1 Gbps, model: Intel I210 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1, Ethernet, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • eth2, Ethernet, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib0, InfiniBand, configured rate: 10 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
  • ib1, InfiniBand, configured rate: 10 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
  • ib2, InfiniBand, configured rate: 10 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
  • ib3, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib4, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib5, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib6, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib7, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
Last generated from the Grid'5000 Reference API on 2021-12-21 (commit 19dded246f)