Lille:Hardware

From Grid5000

See also: Network topology for Lille

Summary

4 clusters, 39 nodes, 972 cores, 39.6 TFLOPS

  • chetemi — arrived 2016-12-01; 15 nodes; 2 x Intel Xeon E5-2630 v4 (10 cores/CPU); 256 GiB RAM; storage: nodes [1-9,11-15]: 2 x 300 GB HDD, node 10: 600 GB HDD + 300 GB HDD; network: 2 x 10 Gbps
  • chiclet — arrived 2018-08-06; 8 nodes; 2 x AMD EPYC 7301 (16 cores/CPU); 128 GiB RAM; storage: 480 GB SSD + 2 x 4.0 TB HDD*; network: 2 x 25 Gbps
  • chifflet — arrived 2016-12-01; 8 nodes; 2 x Intel Xeon E5-2680 v4 (14 cores/CPU); 768 GiB RAM; storage: 400 GB SSD + 400 GB SSD* + 2 x 4.0 TB HDD*; network: 2 x 10 Gbps; accelerators: 2 x Nvidia GTX 1080 Ti
  • chifflot — arrived 2018-08-01; 8 nodes; 2 x Intel Xeon Gold 6126 (12 cores/CPU); 192 GiB RAM; storage: 480 GB SSD + 480 GB SSD* + 4 x 4.0 TB HDD*; network: 2 x 25 Gbps; accelerators: nodes [1-6]: 2 x Nvidia Tesla P100, nodes [7-8]: 2 x Nvidia Tesla V100

*: disk is reservable
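Reservable disks are not part of a default job; they must be requested explicitly through OAR's resource hierarchy. A minimal sketch of such a reservation, assuming the disk-reservation syntax documented on the Grid'5000 wiki (the exact property expression should be checked against the site documentation before use):

    flille$ oarsub -I -l "{(type='disk' or type='default') and cluster='chiclet'}/host=1"

This asks for one chiclet node together with its reservable disks; the disks then appear on the node but remain unformatted until you partition them yourself.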

Clusters in default queue

chetemi

15 nodes, 30 CPUs, 300 cores, split as follows due to differences between nodes (json)

Reservation example:

flille$ oarsub -p "cluster='chetemi'" -I

chetemi-[1-9,11-15] (14 nodes, 28 CPUs, 280 cores)
Model: Dell PowerEdge R630
Date of arrival: 2016-12-01
CPU: Intel Xeon E5-2630 v4 (Broadwell, 2.20GHz, 2 CPUs/node, 10 cores/CPU)
Memory: 256 GiB
Storage:
  • disk0, 300 GB HDD SAS Seagate ST300MP0005 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 300 GB HDD SAS Seagate ST300MP0005 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe (multi NICs example)
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment

chetemi-10 (1 node, 2 CPUs, 20 cores)
Model: Dell PowerEdge R630
Date of arrival: 2016-12-01
CPU: Intel Xeon E5-2630 v4 (Broadwell, 2.20GHz, 2 CPUs/node, 10 cores/CPU)
Memory: 256 GiB
Storage:
  • disk0, 600 GB HDD SAS Seagate ST600MP0036 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 300 GB HDD SAS Seagate ST300MP0005 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe (multi NICs example)
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment

chiclet

8 nodes, 16 CPUs, 256 cores (json)

Reservation example:

flille$ oarsub -p "cluster='chiclet'" -I

Model: Dell PowerEdge R7425
Date of arrival: 2018-08-06
CPU: AMD EPYC 7301 (Zen, 2 CPUs/node, 16 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 480 GB SSD SAS Toshiba PX05SVB048Y (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:61:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:61:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:61:00.0-scsi-0:0:2:0) (reservable)

*: the disk block device name /dev/sd? may vary in deployed environments, prefer referring to the by-path identifier
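Because /dev/sdX names can be assigned in a different order after deployment, scripts that target a reservable disk should resolve the stable by-path symlink at run time rather than hard-coding the device name. A minimal sketch; the throwaway symlink under /tmp stands in for a real /dev/disk/by-path entry so the example is self-contained:

```shell
# readlink -f canonicalizes a symlink chain to the device node it
# currently points at. On a node you would pass the real entry, e.g.
#   readlink -f /dev/disk/by-path/pci-0000:61:00.0-scsi-0:0:1:0
# Here a demo symlink plays the role of the by-path entry.
mkdir -p /tmp/bypath-demo
ln -sf /dev/null /tmp/bypath-demo/pci-demo-scsi-0:0:1:0
readlink -f /tmp/bypath-demo/pci-demo-scsi-0:0:1:0   # prints /dev/null
```

The resolved name can then be used for partitioning or mkfs, while logs and configuration keep the stable by-path form.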

Network:
  • eth0/enp98s0f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/enp98s0f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/eno2, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth4/eno3, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/eno4, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment

chifflet

8 nodes, 16 CPUs, 224 cores (json)

Reservation example:

flille$ oarsub -p "cluster='chifflet'" -I

Model: Dell PowerEdge R730
Date of arrival: 2016-12-01
CPU: Intel Xeon E5-2680 v4 (Broadwell, 2.40GHz, 2 CPUs/node, 14 cores/CPU)
Memory: 768 GiB
Storage:
  • disk0, 400 GB SSD SAS Toshiba PX04SMB040 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 400 GB SSD SAS Toshiba PX04SMB040 (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SATA Toshiba MG04ACA4 (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SATA Toshiba MG04ACA4 (dev: /dev/sdd*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:3:0) (reservable)

*: the disk block device name /dev/sd? may vary in deployed environments, prefer referring to the by-path identifier

Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe (multi NICs example)
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia GeForce GTX 1080 Ti

chifflot

8 nodes, 16 CPUs, 192 cores, split as follows due to differences between nodes (json)

Reservation example:

flille$ oarsub -p "cluster='chifflot'" -I

chifflot-[1,4-6] (4 nodes, 8 CPUs, 96 cores)
Model: Dell PowerEdge R740
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake, 2.60GHz, 2 CPUs/node, 12 cores/CPU)
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/sdd*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/sde*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/sdf*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)

*: the disk block device name /dev/sd? may vary in deployed environments, prefer referring to the by-path identifier

Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla P100-PCIE-16GB

chifflot-2 (1 node, 2 CPUs, 24 cores)
Model: Dell PowerEdge R740
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake, 2.60GHz, 2 CPUs/node, 12 cores/CPU)
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/sdd*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/sde*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/sdf*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)

*: the disk block device name /dev/sd? may vary in deployed environments, prefer referring to the by-path identifier

Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla P100-PCIE-16GB

chifflot-3 (1 node, 2 CPUs, 24 cores)
Model: Dell PowerEdge R740
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake, 2.60GHz, 2 CPUs/node, 12 cores/CPU)
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/sdd*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/sde*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/sdf*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)

*: the disk block device name /dev/sd? may vary in deployed environments, prefer referring to the by-path identifier

Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla P100-PCIE-16GB

chifflot-[7-8] (2 nodes, 4 CPUs, 48 cores)
Model: Dell PowerEdge R740
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake, 2.60GHz, 2 CPUs/node, 12 cores/CPU)
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Hitachi HUS726040ALS210 (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Hitachi HUS726040ALS210 (dev: /dev/sdd*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Hitachi HUS726040ALS210 (dev: /dev/sde*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Hitachi HUS726040ALS210 (dev: /dev/sdf*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)

*: the disk block device name /dev/sd? may vary in deployed environments, prefer referring to the by-path identifier

Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla V100-PCIE-32GB
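Since chifflot mixes Tesla P100 nodes ([1-6]) and Tesla V100 nodes ([7-8]), a plain cluster reservation may land on either. A sketch of targeting one model with an OAR property filter, assuming a gpu_model node property as exposed in the Reference API (verify the exact property name and value, e.g. with oarnodes, before relying on it):

    flille$ oarsub -p "cluster='chifflot' and gpu_model like 'Tesla V100%'" -I

Alternatively, the specific hosts can be named directly in the -p expression, which avoids any assumption about GPU-related properties.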
Last generated from the Grid'5000 Reference API on 2021-07-02 (commit ba8c1211c6)