See also: Network topology for Nancy
Summary
14 clusters, 375 nodes, 7900 cores, 327.2 TFLOPS
Cluster | Access condition | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network | Accelerators
graffiti | production queue | 2019-06-07 | 13 | 2 x Intel Xeon Silver 4110 | 8 cores/CPU | 128 GiB | 479 GB HDD | 10 Gbps | [1-12]: 4 x Nvidia RTX 2080 Ti; 13: 4 x Nvidia Quadro RTX 6000
graoully | production queue | 2016-01-04 | 16 | 2 x Intel Xeon E5-2630 v3 | 8 cores/CPU | 128 GiB | 1 x 600 GB HDD + 1 x 600 GB HDD | 10 Gbps (SR-IOV) + 56 Gbps InfiniBand |
graphique | production queue | 2015-05-12 | 5 | 2 x Intel Xeon E5-2620 v3 | 6 cores/CPU | 64 GiB | 299 GB HDD | 10 Gbps + 56 Gbps InfiniBand | 2 x Nvidia GTX 980
graphite | | 2013-12-05 | 4 | 2 x Intel Xeon E5-2650 | 8 cores/CPU | 256 GiB | 1 x 300 GB SSD + 1 x 300 GB SSD | 10 Gbps (SR-IOV) + 56 Gbps InfiniBand | Intel Xeon Phi 7120P
grappe | production queue | 2020-08-20 | 16 | 2 x Intel Xeon Gold 5218R | 20 cores/CPU | 96 GiB | 480 GB SSD + 8.0 TB HDD* | 25 Gbps |
grcinq | production queue | 2013-04-09 | 47 | 2 x Intel Xeon E5-2650 | 8 cores/CPU | 64 GiB | 1.0 TB HDD | 1 Gbps (SR-IOV) + 56 Gbps InfiniBand |
grele | production queue | 2017-06-26 | 14 | 2 x Intel Xeon E5-2650 v4 | 12 cores/CPU | 128 GiB | 1 x 299 GB HDD + 1 x 299 GB HDD | 10 Gbps (SR-IOV) + 100 Gbps Omni-Path | 2 x Nvidia GTX 1080 Ti
grimani | production queue | 2016-08-30 | 6 | 2 x Intel Xeon E5-2603 v3 | 6 cores/CPU | 64 GiB | 1.0 TB HDD | 10 Gbps (SR-IOV) + 100 Gbps Omni-Path | 2 x Nvidia Tesla K40m
grimoire | | 2016-01-22 | 8 | 2 x Intel Xeon E5-2630 v3 | 8 cores/CPU | 128 GiB | 600 GB HDD + 4 x 600 GB HDD* + 200 GB SSD* | 4 x 10 Gbps (SR-IOV) + 56 Gbps InfiniBand |
grisou | | 2016-01-04 | 51 | 2 x Intel Xeon E5-2630 v3 | 8 cores/CPU | 128 GiB | 1 x 600 GB HDD + 1 x 600 GB HDD | [1-48]: 1 Gbps + 4 x 10 Gbps (SR-IOV); 49: 4 x 10 Gbps (SR-IOV); [50-51]: 4 x 10 Gbps (SR-IOV) + 56 Gbps InfiniBand |
gros | | 2019-09-04 | 124 | Intel Xeon Gold 5220 | 18 cores/CPU | 96 GiB | 480 GB SSD + 960 GB SSD* | 2 x 25 Gbps (SR-IOV) |
grouille | exotic job type | 2021-01-13 | 2 | 2 x AMD EPYC 7452 | 32 cores/CPU | 128 GiB | 1.92 TB SSD + 960 GB SSD* | 25 Gbps | 2 x Nvidia A100
grue | production queue | 2019-11-25 | 5 | 2 x AMD EPYC 7351 | 16 cores/CPU | 128 GiB | 479 GB HDD | 10 Gbps | 4 x Nvidia Tesla T4
grvingt | production queue | 2018-04-11 | 64 | 2 x Intel Xeon Gold 6130 | 16 cores/CPU | 192 GiB | 1.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path |
*: disk is reservable
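The headline totals can be cross-checked against the per-cluster rows above. A minimal sketch (cluster data transcribed from the summary table; CPUs per node and cores per CPU taken from the CPU and Cores columns):

```python
# (nodes, CPUs per node, cores per CPU) per cluster, transcribed from the table.
clusters = {
    "graffiti": (13, 2, 8),  "graoully": (16, 2, 8),   "graphique": (5, 2, 6),
    "graphite": (4, 2, 8),   "grappe":   (16, 2, 20),  "grcinq":    (47, 2, 8),
    "grele":    (14, 2, 12), "grimani":  (6, 2, 6),    "grimoire":  (8, 2, 8),
    "grisou":   (51, 2, 8),  "gros":     (124, 1, 18), "grouille":  (2, 2, 32),
    "grue":     (5, 2, 16),  "grvingt":  (64, 2, 16),
}

nodes = sum(n for n, _, _ in clusters.values())
cores = sum(n * cpus * c for n, cpus, c in clusters.values())
print(len(clusters), nodes, cores)  # 14 375 7900 — matches the summary line
```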
Clusters in default queue
graphite
4 nodes, 8 cpus, 64 cores (json)
Reservation example (fnancy): oarsub -p "cluster='graphite'" -I
Model: Dell PowerEdge R720
Date of arrival: 2013-12-05
CPU: Intel Xeon E5-2650 (Sandy Bridge, 2.00GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 256 GiB
Storage:
- disk0, 300 GB SSD SATA Intel INTEL SSDSC2BB30 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 300 GB SSD SATA Intel INTEL SSDSC2BB30 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:1:0)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
- eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
- eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
Xeon Phi: Intel Xeon Phi 7120P
grimoire
8 nodes, 16 cpus, 128 cores (json)
Reservation example (fnancy): oarsub -p "cluster='grimoire'" -I
Model: Dell PowerEdge R630
Date of arrival: 2016-01-22
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
- disk0, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0) (reservable)
- disk2, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:2:0) (reservable)
- disk3, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdd*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:3:0) (reservable)
- disk4, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sde*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:4:0) (reservable)
- disk5, 200 GB SSD SAS Toshiba PX02SSF020 (dev: /dev/sdf*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:5:0) (reservable)
*: the disk block device name /dev/sd? may vary in deployed environments; prefer the by-path identifier
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
- eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled (multi NICs example)
- eth2/enp129s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
- eth3/enp129s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
- eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
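The storage notes above advise identifying disks by their /dev/disk/by-path entry, because the /dev/sd? name a disk receives can change between deployments. The by-path entries are symlinks to whichever block device is current, so resolving the symlink recovers today's device name. A small sketch of that resolution, using a temporary directory to stand in for /dev/disk/by-path (assumption: this is run off-node, so the real paths do not exist):

```python
import os
import tempfile

# On a node, /dev/disk/by-path/* are stable symlinks to the current /dev/sd?
# device; os.path.realpath follows the link to the device actually in use.
# A temp dir stands in for /dev/disk/by-path here (illustrative only).
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "sdb")  # stand-in for /dev/sdb
    open(target, "w").close()
    link = os.path.join(d, "pci-0000:03:00.0-scsi-0:0:1:0")
    os.symlink(target, link)
    device = os.path.realpath(link)  # which /dev/sd? the path maps to today
    print(os.path.basename(device))  # sdb
```

On a real node the same one-liner is simply `os.path.realpath("/dev/disk/by-path/pci-...")` (or `readlink -f` in a shell).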
grisou
51 nodes, 102 cpus, 816 cores, split as follows due to differences between nodes (json)
Reservation example (fnancy): oarsub -p "cluster='grisou'" -I
- grisou-[1-48] (48 nodes, 96 cpus, 768 cores)
Model: Dell PowerEdge R630
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
- disk0, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
- eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled (multi NICs example)
- eth2/enp4s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
- eth3/enp4s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
- eth4/eno3, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb (multi NICs example)
- eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- grisou-49 (1 node, 2 cpus, 16 cores)
Model: Dell PowerEdge R630
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
- disk0, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
- eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled (multi NICs example)
- eth2/enp4s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
- eth3/enp4s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
- eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- grisou-[50-51] (2 nodes, 4 cpus, 32 cores)
Model: Dell PowerEdge R630
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
- disk0, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
- eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled (multi NICs example)
- eth2/enp129s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
- eth3/enp129s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
- eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
gros
124 nodes, 124 cpus, 2232 cores, split as follows due to differences between nodes (json)
Reservation example (fnancy): oarsub -p "cluster='gros'" -I
- gros-[1-67,69-124] (123 nodes, 123 cpus, 2214 cores)
Model: Dell PowerEdge R640
Date of arrival: 2019-09-04
CPU: Intel Xeon Gold 5220 (Cascade Lake-SP, 2.20GHz, 1 CPU/node, 18 cores/CPU)
Memory: 96 GiB
Storage:
- disk0, 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 960 GB SSD SATA Micron MTFDDAK960TDN (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:1:0) (reservable)
*: the disk block device name /dev/sd? may vary in deployed environments; prefer the by-path identifier
Network:
- eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
- eth1/eno2, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled (multi NICs example)
- gros-68 (1 node, 1 cpu, 18 cores)
Model: Dell PowerEdge R640
Date of arrival: 2019-09-04
CPU: Intel Xeon Gold 5220 (Cascade Lake-SP, 2.20GHz, 1 CPU/node, 18 cores/CPU)
Memory: 96 GiB
Storage:
- disk0, 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 960 GB SSD SATA Intel SSDSC2KG960G8R (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:1:0) (reservable)
*: the disk block device name /dev/sd? may vary in deployed environments; prefer the by-path identifier
Network:
- eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
- eth1/eno2, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled (multi NICs example)
grouille
2 nodes, 4 cpus, 128 cores (json)
Reservation example (fnancy): oarsub -t exotic -p "cluster='grouille'" -I
Access condition: exotic job type
Model: Dell PowerEdge R7525
Date of arrival: 2021-01-13
CPU: AMD EPYC 7452 (Zen 2, 2 CPUs/node, 32 cores/CPU)
Memory: 128 GiB
Storage:
- disk0, 1.92 TB SSD SAS Toshiba KRM5XVUG1T92 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:1:0) (primary disk)
- disk1, 960 GB SSD SATA Micron MTFDDAK960TDT (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:2:0) (reservable)
*: the disk block device name /dev/sd? may vary in deployed environments; prefer the by-path identifier
Network:
- eth0/eno1, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- eth1/eno2, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- eth2/eno33, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
- eth3/eno34, Ethernet, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core - unavailable for experiment
GPU: 2 x Nvidia A100-PCIE-40GB
Clusters in production queue
graffiti
13 nodes, 26 cpus, 208 cores, split as follows due to differences between nodes (json)
Reservation example (fnancy): oarsub -q production -p "cluster='graffiti'" -I
Max walltime per node:
- graffiti-[1-3]: 24h
- graffiti-[4-6]: 48h
- graffiti-[7-13]: 168h
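These caps mean a long job must land on the higher-numbered nodes. A small helper (hypothetical, for illustration; the cap data is transcribed from the graffiti list above) that returns which node numbers can accept a requested walltime:

```python
# Walltime caps in hours for graffiti nodes, transcribed from the list above.
WALLTIME_CAPS = {range(1, 4): 24, range(4, 7): 48, range(7, 14): 168}

def eligible_nodes(hours: int) -> list[int]:
    """Graffiti node numbers whose walltime cap allows a job of `hours` hours."""
    return sorted(n for r, cap in WALLTIME_CAPS.items() if cap >= hours for n in r)

print(eligible_nodes(72))  # only the 168h nodes: [7, 8, 9, 10, 11, 12, 13]
print(eligible_nodes(24))  # every node, 1 through 13
```

The resulting list could then feed an OAR `-p` host filter; the exact property expression to use is left to the OAR documentation.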
- graffiti-[1-12] (12 nodes, 24 cpus, 192 cores)
Access condition: production queue
Model: Dell PowerEdge T640
Date of arrival: 2019-06-07
CPU: Intel Xeon Silver 4110 (Skylake, 2.10GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
- disk0, 479 GB HDD SATA Dell PERC H330 Adp (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:19:00.0-scsi-0:2:0:0) (primary disk)
Network:
- eth0/eno1np0, Ethernet, model: Broadcom BCM57416 NetXtreme-E 10GBase-T RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
- eth1/eno2np1, Ethernet, model: Broadcom BCM57416 NetXtreme-E 10GBase-T RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
- eth2/ens4f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth3/ens4f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
GPU: 4 x Nvidia GeForce RTX 2080 Ti
- graffiti-13 (1 node, 2 cpus, 16 cores)
Access condition: production queue
Model: Dell PowerEdge T640
Date of arrival: 2019-06-07
CPU: Intel Xeon Silver 4110 (Skylake, 2.10GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
- disk0, 479 GB HDD SATA Dell PERC H330 Adp (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:19:00.0-scsi-0:2:0:0) (primary disk)
Network:
- eth0/eno1np0, Ethernet, model: Broadcom BCM57416 NetXtreme-E 10GBase-T RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
- eth1/eno2np1, Ethernet, model: Broadcom BCM57416 NetXtreme-E 10GBase-T RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
- eth2/ens4f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth3/ens4f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
GPU: 4 x Nvidia Quadro RTX 6000
graoully
16 nodes, 32 cpus, 256 cores (json)
Reservation example (fnancy): oarsub -q production -p "cluster='graoully'" -I
Max walltime per node:
- graoully-[1-2]: 4h
- graoully-[3-4]: 12h
- graoully-[5-16]: 168h
Access condition: production queue
Model: Dell PowerEdge R630
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
- disk0, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
- eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
- eth2/enp129s0f0, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
- eth3/enp129s0f1, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
- eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
graphique
5 nodes, 10 cpus, 60 cores (json)
Reservation example (fnancy): oarsub -q production -p "cluster='graphique'" -I
Max walltime per node:
- graphique-2: 48h
- graphique-[3-6]: 168h
Access condition: production queue
Model: Dell PowerEdge R720
Date of arrival: 2015-05-12
CPU: Intel Xeon E5-2620 v3 (Haswell, 2.40GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 64 GiB
Storage:
- disk0, 299 GB HDD RAID-1 (2 disks) Dell PERC H330 Mini (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0) (primary disk)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x
- eth1/eno2, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
- eth2/eno3, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
- eth3/eno4, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
- ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 2 x Nvidia GeForce GTX 980
grappe
16 nodes, 32 cpus, 640 cores (json)
Reservation example (fnancy): oarsub -q production -p "cluster='grappe'" -I
Max walltime per node:
- grappe-[1-4]: 48h
- grappe-[5-8]: 96h
- grappe-[9-16]: 168h
Access condition: production queue
Model: Dell PowerEdge R640
Date of arrival: 2020-08-20
CPU: Intel Xeon Gold 5218R (Cascade Lake-SP, 2.10GHz, 2 CPUs/node, 20 cores/CPU)
Memory: 96 GiB
Storage:
- disk0, 480 GB SSD SATA Intel SSDSC2KG480G8R (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:1a:00.0-scsi-0:0:0:0) (primary disk)
- disk1, 8.0 TB HDD SAS Seagate ST8000NM0185 (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:1a:00.0-scsi-0:0:1:0) (reservable)
*: the disk block device name /dev/sd? may vary in deployed environments; prefer the by-path identifier
Network:
- eth0/ens1f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
- eth1/ens1f1, Ethernet, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e - unavailable for experiment
- eth2/eno1, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- eth3/eno2, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- eth4/eno3, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- eth5/eno4, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
grcinq
47 nodes, 94 cpus, 752 cores, split as follows due to differences between nodes (json)
Reservation example (fnancy): oarsub -q production -p "cluster='grcinq'" -I
Max walltime per node:
- grcinq-[1-8]: 4h
- grcinq-[9-16]: 12h
- grcinq-[17-47]: 168h
- grcinq-[1,5,8,18,30,46] (6 nodes, 12 cpus, 96 cores)
Access condition: production queue
Model: Dell PowerEdge C6220
Date of arrival: 2013-04-09
CPU: Intel Xeon E5-2650 (Sandy Bridge, 2.00GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 64 GiB
Storage:
- disk0, 1.0 TB HDD SATA Seagate ST1000NM0033-9ZM (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1) (primary disk)
Network:
- eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled
- eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
- grcinq-[2-4,6-7,9-17,19-29,31-45,47] (41 nodes, 82 cpus, 656 cores)
Access condition: production queue
Model: Dell PowerEdge C6220
Date of arrival: 2013-04-09
CPU: Intel Xeon E5-2650 (Sandy Bridge, 2.00GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 64 GiB
Storage:
- disk0, 1.0 TB HDD SATA Western Digital WDC WD1003FBYX-1 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1) (primary disk)
Network:
- eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled
- eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
grele
14 nodes, 28 cpus, 336 cores (json)
Reservation example (fnancy): oarsub -q production -p "cluster='grele'" -I
Max walltime per node:
- grele-[1-3]: 24h
- grele-[4-6]: 48h
- grele-[7-14]: 168h
Access condition: production queue
Model: Dell PowerEdge R730
Date of arrival: 2017-06-26
CPU: Intel Xeon E5-2650 v4 (Broadwell, 2.20GHz, 2 CPUs/node, 12 cores/CPU)
Memory: 128 GiB
Storage:
- disk0, 299 GB HDD SAS Dell PERC H730 Mini (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0) (primary disk)
- disk1, 299 GB HDD SAS Dell PERC H730 Mini (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:1:0)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
- eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
- eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
GPU: 2 x Nvidia GeForce GTX 1080 Ti
grimani
6 nodes, 12 cpus, 72 cores (json)
Reservation example (fnancy): oarsub -q production -p "cluster='grimani'" -I
Max walltime per node:
- grimani-1: 24h
- grimani-2: 48h
- grimani-[3-6]: 168h
Access condition: production queue
Model: Dell PowerEdge R730
Date of arrival: 2016-08-30
CPU: Intel Xeon E5-2603 v3 (Haswell, 1.60GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 64 GiB
Storage:
- disk0, 1.0 TB HDD SATA Seagate ST1000NX0423 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
- eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
- eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
GPU: 2 x Nvidia Tesla K40m
grue
5 nodes, 10 cpus, 160 cores (json)
Reservation example (fnancy): oarsub -q production -p "cluster='grue'" -I
Max walltime per node:
- grue-[1-2]: 24h
- grue-[3-4]: 48h
- grue-5: 168h
Access condition: production queue
Model: Dell PowerEdge R7425
Date of arrival: 2019-11-25
CPU: AMD EPYC 7351 (Zen, 2 CPUs/node, 16 cores/CPU)
Memory: 128 GiB
Storage:
- disk0, 479 GB HDD SAS Dell PERC H730P Adp (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:e1:00.0-scsi-0:2:0:0) (primary disk)
Network:
- eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 4 x Nvidia Tesla T4
grvingt
64 nodes, 128 cpus, 2048 cores (json)
Reservation example (fnancy): oarsub -q production -p "cluster='grvingt'" -I
Max walltime per node:
- grvingt-[1-8]: 4h
- grvingt-[9-16]: 12h
- grvingt-[17-64]: 168h
Access condition: production queue
Model: Dell PowerEdge C6420
Date of arrival: 2018-04-11
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 192 GiB
Storage:
- disk0, 1.0 TB HDD SATA Seagate ST1000NX0443 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
Network:
- eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
- eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
- ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
Last generated from the Grid'5000 Reference API on 2021-09-25 (commit 0b96d83e15)