Lyon:Hardware


See also: Network topology for Lyon

Summary

8 clusters, 74 nodes, 1464 cores, 40.5 TFLOPS

Cluster    | Access Condition               | Date of arrival | Nodes | CPU                       | Cores        | Memory  | Storage                         | Network                                        | Accelerators
gemini     | exotic job type                | 2019-09-01      | 2     | 2 x Intel Xeon E5-2698 v4 | 20 cores/CPU | 512 GiB | 480 GB SSD + 4 x 1.92 TB SSD*   | 10 Gbps (SR-IOV) + 3 x 100 Gbps InfiniBand     | 8 x Nvidia Tesla V100
hercule    |                                | 2012-10-02      | 4     | 2 x Intel Xeon E5-2620    | 6 cores/CPU  | 32 GiB  | 1 x 2.0 TB HDD + 2 x 2.0 TB HDD | 10 Gbps (SR-IOV)                               |
neowise    | testing queue, exotic job type | 2021-05-17      | 10    | AMD EPYC 7642             | 48 cores/CPU | 512 GiB | 1.92 TB SSD                     | 2 x 10 Gbps (SR-IOV) + 2 x 200 Gbps InfiniBand | 8 x AMD MI50
nova       |                                | 2016-12-01      | 23    | 2 x Intel Xeon E5-2620 v4 | 8 cores/CPU  | 64 GiB  | 598 GB HDD                      | 10 Gbps (SR-IOV)                               |
orion      |                                | 2012-09-14      | 4     | 2 x Intel Xeon E5-2630    | 6 cores/CPU  | 32 GiB  | 598 GB HDD                      | 10 Gbps (SR-IOV)                               | Nvidia Tesla M2075
pyxis      | exotic job type                | 2020-01-06      | 4     | 2 x ThunderX2 99xx        | 32 cores/CPU | 256 GiB | 1 x 250 GB SSD + 1 x 250 GB SSD | 10 Gbps (SR-IOV) + 100 Gbps InfiniBand         |
sagittaire |                                | 2006-07-01      | 14    | 2 x AMD Opteron 250       | 1 core/CPU   | 2 GiB   | 73 GB HDD                       | 1 Gbps                                         |
taurus     |                                | 2012-09-14      | 13    | 2 x Intel Xeon E5-2630    | 6 cores/CPU  | 32 GiB  | 598 GB HDD                      | 10 Gbps (SR-IOV)                               |

*: disk is reservable
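
Reservable disks are provisioned as additional OAR resources of type disk and must be requested explicitly along with the node. One possible reservation command for gemini, sketched from OAR's resource-hierarchy syntax (the exact resource expression and property names are those documented on the Grid'5000 Disk reservation page and may differ):

flyon: oarsub -I -t exotic -p "cluster='gemini'" -l {"type='disk' or type='default'"}/host=1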

Clusters in default queue

gemini

2 nodes, 4 cpus, 80 cores (json)

Reservation example:

flyon: oarsub -t exotic -p "cluster='gemini'" -I
Access condition: exotic job type
Model: Nvidia DGX-1
Date of arrival: 2019-09-01
CPU: Intel Xeon E5-2698 v4 (Broadwell, 2.20GHz, 2 CPUs/node, 20 cores/CPU)
Memory: 512 GiB
Storage:
  • disk0, 480 GB SSD SATA Samsung SAMSUNG MZ7KM480 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:14:0) (primary disk)
  • disk1, 1.92 TB SSD SATA Samsung SAMSUNG MZ7LM1T9 (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:15:0) (reservable)
  • disk2, 1.92 TB SSD SATA Samsung SAMSUNG MZ7LM1T9 (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:16:0) (reservable)
  • disk3, 1.92 TB SSD SATA Samsung SAMSUNG MZ7LM1T9 (dev: /dev/sdd*, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:17:0) (reservable)
  • disk4, 1.92 TB SSD SATA Samsung SAMSUNG MZ7LM1T9 (dev: /dev/sde*, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:18:0) (reservable)

*: the disk block device name (/dev/sd?) may vary in deployed environments; prefer referring to the by-path identifier
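
For example, a reserved secondary disk is best formatted and mounted through its stable by-path name rather than /dev/sdb. A minimal sketch, assuming a job running on gemini-1 with disk1 reserved (sudo-g5k grants root privileges on the node):

gemini-1: sudo-g5k mkfs.ext4 /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:15:0
gemini-1: sudo-g5k mkdir -p /mnt/disk1
gemini-1: sudo-g5k mount /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:15:0 /mnt/disk1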

Network:
  • eth0/enp1s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller 10-Gigabit X540-AT2, driver: ixgbe, SR-IOV enabled
  • eth1/enp1s0f1, Ethernet, model: Intel Ethernet Controller 10-Gigabit X540-AT2, driver: ixgbe - unavailable for experiment
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • ib2, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • ib3, InfiniBand, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core - unavailable for experiment
GPU: 8 x Nvidia Tesla V100-SXM2-32GB
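
Individual GPUs can also be requested as OAR resources instead of reserving the whole node; a possible form, assuming the gpu resource hierarchy is exposed for this cluster as on other Grid'5000 GPU clusters:

flyon: oarsub -t exotic -p "cluster='gemini'" -l gpu=2 -I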

hercule

4 nodes, 8 cpus, 48 cores (json)

Reservation example:

flyon: oarsub -p "cluster='hercule'" -I
Model: Dell PowerEdge C6220
Date of arrival: 2012-10-02
CPU: Intel Xeon E5-2620 (Sandy Bridge, 2.00GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 32 GiB
Storage:
  • disk0, 2.0 TB HDD SATA Western Digital WDC WD2003FYYS-1 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1) (primary disk)
  • disk1, 2.0 TB HDD SATA Western Digital WDC WD2003FYYS-1 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-2)
  • disk2, 2.0 TB HDD SATA Western Digital WDC WD2003FYYS-1 (dev: /dev/sdc, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-3)
Network:
  • eth0/enp130s0f0, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth1/enp130s0f1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment

nova

23 nodes, 46 cpus, 368 cores (json)

Reservation example:

flyon: oarsub -p "cluster='nova'" -I
Model: Dell PowerEdge R430
Date of arrival: 2016-12-01
CPU: Intel Xeon E5-2620 v4 (Broadwell, 2.10GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 64 GiB
Storage: disk0, 598 GB HDD RAID-0 (2 disks) Dell PERC H330 Mini (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/enp5s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled
  • eth1/enp5s0f1, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth2/eno1, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/eno2, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth4/eno3, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/eno4, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment

orion

4 nodes, 8 cpus, 48 cores (json)

Reservation example:

flyon: oarsub -p "cluster='orion'" -I
Model: Dell PowerEdge R720
Date of arrival: 2012-09-14
CPU: Intel Xeon E5-2630 (Sandy Bridge, 2.30GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 32 GiB
Storage: disk0, 598 GB HDD RAID-0 (2 disks) Dell PERC H710 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/enp68s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled
  • eth1/enp68s0f1, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: Nvidia Tesla M2075

pyxis

4 nodes, 8 cpus, 256 cores (json)

Reservation example:

flyon: oarsub -t exotic -p "cluster='pyxis'" -I
Access condition: exotic job type
Model: R181-T92-00
Date of arrival: 2020-01-06
CPU: ThunderX2 99xx (Vulcan, 2 CPUs/node, 32 cores/CPU)
Memory: 256 GiB
Storage:
  • disk0, 250 GB SSD SATA Samsung Samsung SSD 860 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:0f:00.0-sas-phy2-lun-0) (primary disk)
  • disk1, 250 GB SSD SATA Samsung Samsung SSD 860 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:0f:00.0-sas-phy3-lun-0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: QLogic Corp. FastLinQ QL41000 Series 10/25/40/50GbE Controller, driver: qede, SR-IOV enabled
  • eth1/eno2, Ethernet, model: QLogic Corp. FastLinQ QL41000 Series 10/25/40/50GbE Controller, driver: qede - unavailable for experiment
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core

sagittaire

14 nodes, 28 cpus, 28 cores, split as follows due to differences between nodes (json)

Reservation example:

flyon: oarsub -p "cluster='sagittaire'" -I
sagittaire-[11-12] (2 nodes, 4 cpus, 4 cores)
Model: Sun Fire V20z
Date of arrival: 2006-07-01
CPU: AMD Opteron 250 (K8, 2 CPUs/node, 1 core/CPU)
Memory: 2 GiB
Storage: disk0, 73 GB HDD SCSI Seagate ST373307LC (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:02:04.0-scsi-0:0:1:0) (primary disk)
Network:
  • eth0/enp2s2f0, Ethernet, model: Broadcom NetXtreme BCM5704 Gigabit Ethernet, driver: tg3 - unavailable for experiment
  • eth1/enp2s2f1, Ethernet, configured rate: 1 Gbps, model: Broadcom NetXtreme BCM5704 Gigabit Ethernet, driver: tg3

sagittaire-[2-6,13-16,18,20] (11 nodes, 22 cpus, 22 cores)
Model: Sun Fire V20z
Date of arrival: 2006-07-01
CPU: AMD Opteron 250 (K8, 2 CPUs/node, 1 core/CPU)
Memory: 2 GiB
Storage: disk0, 73 GB HDD SCSI Seagate ST373207LC (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:02:04.0-scsi-0:0:1:0) (primary disk)
Network:
  • eth0/enp2s2f0, Ethernet, model: Broadcom NetXtreme BCM5704 Gigabit Ethernet, driver: tg3 - unavailable for experiment
  • eth1/enp2s2f1, Ethernet, configured rate: 1 Gbps, model: Broadcom NetXtreme BCM5704 Gigabit Ethernet, driver: tg3

sagittaire-17 (1 node, 2 cpus, 2 cores)
Model: Sun Fire V20z
Date of arrival: 2006-07-01
CPU: AMD Opteron 250 (K8, 2 CPUs/node, 1 core/CPU)
Memory: 2 GiB
Storage: disk0, 73 GB HDD SCSI Fujitsu MAT3073NC (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:02:04.0-scsi-0:0:1:0) (primary disk)
Network:
  • eth0/enp2s2f0, Ethernet, model: Broadcom NetXtreme BCM5704 Gigabit Ethernet, driver: tg3 - unavailable for experiment
  • eth1/enp2s2f1, Ethernet, configured rate: 1 Gbps, model: Broadcom NetXtreme BCM5704 Gigabit Ethernet, driver: tg3

taurus

13 nodes, 26 cpus, 156 cores (json)

Reservation example:

flyon: oarsub -p "cluster='taurus'" -I
Model: Dell PowerEdge R720
Date of arrival: 2012-09-14
CPU: Intel Xeon E5-2630 (Sandy Bridge, 2.30GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 32 GiB
Storage: disk0, 598 GB HDD RAID-0 (2 disks) Dell PERC H710 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/enp68s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled
  • eth1/enp68s0f1, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment

Clusters in testing queue

neowise

10 nodes, 10 cpus, 480 cores (json)

Reservation example:

flyon: oarsub -q testing -t exotic -p "cluster='neowise'" -I
Access condition: testing queue, exotic job type
Model: AMD-Penguin Computing
Date of arrival: 2021-05-17
CPU: AMD EPYC 7642 (Zen 2, 1 CPU/node, 48 cores/CPU)
Memory: 512 GiB
Storage: disk0, 1.92 TB SSD NVME Samsung SAMSUNG MZ1LB1T9HALS-00007 (dev: /dev/nvme0n1, by-path: /dev/disk/by-path/pci-0000:82:00.0-nvme-1) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled - no KaVLAN
  • ib0, InfiniBand, configured rate: 200 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
  • ib1, InfiniBand, configured rate: 200 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
GPU: 8 x AMD Radeon Instinct MI50 32GB

Last generated from the Grid'5000 Reference API on 2021-08-31 (commit a5c6e6cff2)
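
The node descriptions above can also be fetched directly from the Reference API; a minimal sketch, assuming the stable API branch and the usual sites/clusters/nodes path layout (access from inside Grid'5000; credentials are required from outside):

flyon: curl -s https://api.grid5000.fr/stable/sites/lyon/clusters/gemini/nodes/gemini-1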