Sophia:Hardware

From Grid5000

See also: Network topology for Sophia

Summary

  • 26 clusters
  • 67 nodes
  • 1868 CPU cores
  • 109 GPUs
  • 595968 GPU cores
  • 18.2 TiB RAM
  • 17 SSDs and 85 HDDs on nodes (total: 224.47 TB)
  • 114.0 TFLOPS (excluding GPUs)

Default queue resources

  • 1 cluster
  • 4 nodes
  • 48 CPU cores
  • 384 GiB RAM
  • 4 HDDs on nodes (total: 1.0 TB)
  • 0.6 TFLOPS (excluding GPUs)

Abaca queue resources

  • 23 clusters
  • 58 nodes
  • 1532 CPU cores
  • 93 GPUs
  • 522240 GPU cores
  • 15.33 TiB RAM
  • 16 SSDs and 71 HDDs on nodes (total: 203.08 TB)
  • 93.3 TFLOPS (excluding GPUs)
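
As a quick cross-check of the summary figures, subtracting the default and abaca queues from the site-wide totals yields the share held by the testing queue (a sketch; the per-queue numbers are copied from the lists above):

```python
# Site-wide totals and per-queue figures, as listed above.
site = {"nodes": 67, "cores": 1868, "gpus": 109}
default_q = {"nodes": 4, "cores": 48, "gpus": 0}
abaca_q = {"nodes": 58, "cores": 1532, "gpus": 93}

# Whatever is not in the default or abaca queues sits in the testing queue.
testing_q = {k: site[k] - default_q[k] - abaca_q[k] for k in site}
print(testing_q)  # {'nodes': 5, 'cores': 288, 'gpus': 16}
```

The remainder matches the testing-queue clusters (esterel31: 4 nodes with 52 cores and 4 GPUs each; mercantour4: 1 node with 80 cores).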

Clusters summary

Default queue resources

Cluster | Access condition | Date of arrival | Manufacturing date | Nodes | CPU (count x model, cores/CPU, architecture) | Memory | Storage | Network | Accelerators
uvb | | 2011-01-04 | 2011-01-04 | 4 | 2 x Intel Xeon X5670, 6 cores/CPU, x86_64 | 96 GiB | 250 GB HDD | 1 Gbps (SR-IOV) + 40 Gbps InfiniBand |

**: crossed GPUs are not supported by Grid'5000 default environments

Abaca queue resources

Cluster | Access condition | Date of arrival | Manufacturing date | Nodes | CPU (count x model, cores/CPU, architecture) | Memory | Storage | Network | Accelerators
esterel1 | abaca queue | 2025-03-13 | 2016-11-02 | 1 | 2 x Intel Xeon E5-2623 v4, 4 cores/CPU, x86_64 | 32 GiB | 399 GB HDD + 399 GB HDD | 1 Gbps + 40 Gbps InfiniBand |
esterel2 | abaca queue | 2025-03-12 | 2016-05-04 | 1 | 2 x Intel Xeon E5-2620 v3, 6 cores/CPU, x86_64 | 128 GiB | 2.0 TB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia GTX 1080 Ti (11 GiB)
esterel3 | abaca queue | 2025-03-25 | 2016-06-06 | 1 | 2 x Intel Xeon E5-2630 v4, 10 cores/CPU, x86_64 | 48 GiB | 2.0 TB HDD + 399 GB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia GTX TITAN X (12 GiB)
esterel4 | abaca queue | 2025-03-20 | 2016-06-08 | 1 | 2 x Intel Xeon E5-2630 v4, 10 cores/CPU, x86_64 | 128 GiB | 2.0 TB HDD + 1.6 TB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia GTX TITAN X (12 GiB)
esterel5 | abaca queue | 2025-02-25 | 2016-06-08 | 2 | 2 x Intel Xeon E5-2630 v4, 10 cores/CPU, x86_64 | 128 GiB | 2.0 TB HDD + 1.6 TB HDD | 1 Gbps + 40 Gbps InfiniBand | 3 x Nvidia GTX 1080 (8 GiB)
esterel6 | abaca queue | 2025-02-26 | 2017-04-18 | 1 | 2 x Intel Xeon E5-2650 v4, 12 cores/CPU, x86_64 | 64 GiB | 2.0 TB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia GTX 1080 Ti (11 GiB)
esterel7 | abaca queue | 2025-03-06 | 2017-05-23 | 2 | 2 x Intel Xeon E5-2620 v4, 8 cores/CPU, x86_64 | 128 GiB | 999 GB HDD + 399 GB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia GTX 1080 Ti (11 GiB)
esterel10 | abaca queue | 2024-12-19 | 2017-11-15 | 3 | 2 x Intel Xeon E5-2630 v4, 10 cores/CPU, x86_64 | 128 GiB | 1.6 TB SSD + 2 x 600 GB HDD | 1 Gbps + 56 Gbps InfiniBand | [1-2]: 4 x Nvidia GTX 1080 Ti (11 GiB); 3: 3 x Nvidia GTX 1080 Ti (11 GiB)
esterel11 | abaca queue | 2025-04-30 | 2017-11-14 | 2 | 2 x Intel Xeon E5-2620 v4, 8 cores/CPU, x86_64 | 128 GiB | 599 GB HDD + 479 GB HDD | 1 Gbps + 40 Gbps InfiniBand | 1: 4 x Nvidia GTX 1080 Ti (11 GiB); 2: 3 x Nvidia GTX 1080 Ti (11 GiB)
esterel12 | abaca queue | 2025-04-11 | 2018-03-02 | 1 | 2 x Intel Xeon E5-2630 v4, 10 cores/CPU, x86_64 | 64 GiB | 599 GB HDD + 1.6 TB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia GTX 1080 Ti (11 GiB)
esterel22 | abaca queue | 2025-04-09 | 2019-10-07 | 1 | 2 x Intel Xeon Gold 6240, 18 cores/CPU, x86_64 | 384 GiB | 599 GB HDD + 3.84 TB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia Quadro RTX 6000 (23 GiB)
esterel24 | abaca queue | 2025-03-27 | 2020-09-27 | 1 | 2 x Intel Xeon Gold 6240R, 24 cores/CPU, x86_64 | 384 GiB | 599 GB HDD + 4.8 TB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia Quadro RTX 8000 (45 GiB)
esterel26 | abaca queue | 2025-03-25 | 2020-10-30 | 1 | 2 x Intel Xeon Silver 4216, 16 cores/CPU, x86_64 | 384 GiB | 599 GB HDD + 3.84 TB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia Quadro RTX 8000 (45 GiB)
esterel27 | abaca queue | 2025-03-17 | 2019-01-01 | 1 | 2 x Intel Xeon Gold 5115, 10 cores/CPU, x86_64 | 256 GiB | 511 GB HDD + 4.09 TB HDD | 1 Gbps (SR-IOV) + 40 Gbps InfiniBand | 8 x Nvidia GTX 1080 Ti (11 GiB)
esterel32 | abaca queue | 2025-03-20 | 2020-08-09 | 1 | 2 x Intel Xeon Gold 6238R, 28 cores/CPU, x86_64 | 768 GiB | 479 GB HDD + 7.68 TB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia Quadro RTX 8000 (45 GiB)
esterel33 | abaca queue | 2025-04-28 | 2021-05-13 | 1 | 2 x AMD EPYC 7282, 16 cores/CPU, x86_64 | 256 GiB | 479 GB SSD + 2.88 TB SSD | 1 Gbps + 40 Gbps InfiniBand | 3 x Nvidia A40 (45 GiB)
esterel41 | abaca queue | 2025-01-25 | 2024-03-01 | 1 | 2 x Intel Xeon Gold 6426Y, 16 cores/CPU, x86_64 | 512 GiB | 479 GB SSD + 2.88 TB SSD | 1 Gbps + 56 Gbps InfiniBand | 2 x Nvidia L40 (45 GiB)
mercantour1 | abaca queue | 2014-04-25 | 2014-04-25 | 16 | 2 x Intel Xeon E5-2680 v2, 10 cores/CPU, x86_64 | 192 GiB | 2.0 TB HDD | 1 Gbps (SR-IOV) + 40 Gbps InfiniBand |
mercantour2 | abaca queue | 2025-01-16 | 2015-09-01 | 8 | 2 x Intel Xeon E5-2650 v2, 8 cores/CPU, x86_64 | 256 GiB | 1.0 TB HDD | 1 Gbps (SR-IOV) + 40 Gbps InfiniBand |
mercantour5 | abaca queue | 2025-02-24 | 2019-07-30 | 4 | 2 x Intel Xeon Gold 6240, 18 cores/CPU, x86_64 | 384 GiB | 599 GB HDD + 959 GB HDD | 1 Gbps + 40 Gbps InfiniBand |
mercantour6 | abaca queue | 2025-02-27 | 2020-10-05 | 1 | 2 x AMD EPYC 7542, 32 cores/CPU, x86_64 | 1.0 TiB | 239 GB SSD + 1.92 TB SSD | 1 Gbps + 40 Gbps InfiniBand |
mercantour7 | abaca queue | 2025-03-20 | 2020-11-13 | 1 | 2 x AMD EPYC 7502, 32 cores/CPU, x86_64 | 384 GiB | 959 GB SSD + 48.01 TB HDD | 1 Gbps + 40 Gbps InfiniBand |
musa | abaca queue | 2025-01-16 | 2024-12-09 | 6 | 2 x AMD EPYC 9254, 24 cores/CPU, x86_64 | 512 GiB | 6.4 TB SSD | 25 Gbps | 2 x Nvidia Tesla H100 (94 GiB)
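
The 93-GPU total for the abaca queue follows from the rows above; a small tally multiplies per-node GPU counts by node counts, with the heterogeneous esterel10 and esterel11 nodes summed explicitly:

```python
# (cluster, node count, GPUs per node) for the homogeneous GPU clusters above.
uniform = [
    ("esterel2", 1, 4), ("esterel3", 1, 4), ("esterel4", 1, 4),
    ("esterel5", 2, 3), ("esterel6", 1, 4), ("esterel7", 2, 4),
    ("esterel12", 1, 4), ("esterel22", 1, 4), ("esterel24", 1, 4),
    ("esterel26", 1, 4), ("esterel27", 1, 8), ("esterel32", 1, 4),
    ("esterel33", 1, 3), ("esterel41", 1, 2), ("musa", 6, 2),
]
# esterel10 has 4 + 4 + 3 GPUs across its three nodes; esterel11 has 4 + 3.
mixed = {"esterel10": [4, 4, 3], "esterel11": [4, 3]}

total = sum(n * g for _, n, g in uniform) + sum(map(sum, mixed.values()))
print(total)  # 93
```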


Testing queue resources

Cluster | Access condition | Date of arrival | Manufacturing date | Nodes | CPU (count x model, cores/CPU, architecture) | Memory | Storage | Network | Accelerators
esterel31 | testing queue | 2025-04-16 | 2020-08-10 | 4 | 2 x Intel Xeon Gold 6230R, 26 cores/CPU, x86_64 | 384 GiB | 479 GB HDD + 3.84 TB HDD | 10 Gbps + 40 Gbps InfiniBand | 4 x Nvidia Quadro RTX 8000 (45 GiB)
mercantour4 | testing queue | 2025-03-14 | 2017-11-29 | 1 | 4 x Intel Xeon Gold 6148, 20 cores/CPU, x86_64 | 1.0 TiB | 1.92 TB SSD + 2 x 599 GB HDD | 1 Gbps + 20 Gbps InfiniBand |


Clusters in the default queue

uvb

4 nodes, 8 cpus, 48 cores (json)

Reservation example:

fsophia: oarsub -q default -p uvb -I
Model: Dell PowerEdge C6100
Manufacturing date: 2011-01-04
Date of arrival: 2011-01-04
CPU: Intel Xeon X5670 (Westmere), x86_64, 2.93GHz, 2 CPUs/node, 6 cores/CPU
Memory: 96 GiB
Storage: disk0, 250 GB HDD SATA Western Digital WDC WD2502ABYS-1 (dev: /dev/disk0) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
  • ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment
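
The 0.6 TFLOPS quoted for the default queue is consistent with the uvb specification above, assuming 4 double-precision FLOPs per cycle per core (128-bit SSE on Westmere: one add and one multiply per cycle); that per-cycle factor is an assumption, not a figure from this page:

```python
# uvb: 4 nodes x 2 CPUs x 6 cores at 2.93 GHz (see specification above).
nodes, cpus, cores, ghz = 4, 2, 6, 2.93
flops_per_cycle = 4  # assumed: 128-bit SSE, one DP add + one DP multiply per cycle

peak_tflops = nodes * cpus * cores * ghz * flops_per_cycle / 1000
print(round(peak_tflops, 2))  # 0.56, i.e. ~0.6 TFLOPS
```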

Clusters in the abaca queue

esterel1

1 node, 2 cpus, 8 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel1 -I

Max walltime per node:

  • esterel1-1: 168h
Access condition: production queue
Model: Dell PowerEdge R730
Manufacturing date: 2016-11-02
Date of arrival: 2025-03-13
CPU: Intel Xeon E5-2623 v4 (Broadwell), x86_64, 2.60GHz, 2 CPUs/node, 4 cores/CPU
Memory: 32 GiB
Storage:
  • disk0, 399 GB HDD SAS Dell PERC H730 Mini (dev: /dev/disk0) (primary disk)
  • disk1, 399 GB HDD SAS Dell PERC H730 Mini (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - no KaVLAN
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth2/eno3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/eno4, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

esterel2

1 node, 2 cpus, 12 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel2 -I

Max walltime per node:

  • esterel2-1: 168h
Access condition: production queue
Model: Dell PowerEdge T630
Manufacturing date: 2016-05-04
Date of arrival: 2025-03-12
CPU: Intel Xeon E5-2620 v3 (Haswell), x86_64, 2.40GHz, 2 CPUs/node, 6 cores/CPU
Memory: 128 GiB
Storage: disk0, 2.0 TB HDD SAS Dell PERC H730 Adp (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1
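
The compute capability listed for each GPU cluster determines which CUDA architecture a binary must target; a small lookup covering the capabilities that appear on this page (the capability-to-architecture mapping is standard Nvidia nomenclature, but the helper function is ours):

```python
# Nvidia compute capability -> architecture name, for the GPUs on this site.
CC_TO_ARCH = {
    "5.2": "Maxwell",       # GTX TITAN X (esterel3, esterel4)
    "6.1": "Pascal",        # GTX 1080 / 1080 Ti (esterel2, esterel5-12, esterel27)
    "7.5": "Turing",        # Quadro RTX 6000 / 8000 (esterel22-32)
    "8.6": "Ampere",        # A40 (esterel33)
    "8.9": "Ada Lovelace",  # L40 (esterel41)
    "9.0": "Hopper",        # H100 (musa)
}

def arch_of(cc: str) -> str:
    """Return the architecture name for a compute capability string."""
    return CC_TO_ARCH[cc]

print(arch_of("6.1"))  # Pascal
```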

esterel3

1 node, 2 cpus, 20 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel3 -I

Max walltime per node:

  • esterel3-1: 168h
Access condition: production queue
Model: Dell PowerEdge T630
Manufacturing date: 2016-06-06
Date of arrival: 2025-03-25
CPU: Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU
Memory: 48 GiB
Storage:
  • disk0, 2.0 TB HDD SAS Dell PERC H730 Adp (dev: /dev/disk0) (primary disk)
  • disk1, 399 GB HDD SATA Dell PERC H730 Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX TITAN X (12 GiB)
Compute capability: 5.2

esterel4

1 node, 2 cpus, 20 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel4 -I

Max walltime per node:

  • esterel4-1: 168h
Access condition: production queue
Model: Dell PowerEdge T630
Manufacturing date: 2016-06-08
Date of arrival: 2025-03-20
CPU: Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 2.0 TB HDD SAS Dell PERC H730 Adp (dev: /dev/disk0) (primary disk)
  • disk1, 1.6 TB HDD SATA Dell PERC H730 Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX TITAN X (12 GiB)
Compute capability: 5.2

esterel5

2 nodes, 4 cpus, 40 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel5 -I

Max walltime per node:

  • esterel5-[1-2]: 168h
Access condition: production queue
Model: Dell PowerEdge T630
Manufacturing date: 2016-06-08
Date of arrival: 2025-02-25
CPU: Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 2.0 TB HDD SAS Dell PERC H730 Adp (dev: /dev/disk0) (primary disk)
  • disk1, 1.6 TB HDD SSD Dell PERC H730 Adp (dev: /dev/disk1)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 3 x Nvidia GeForce GTX 1080 (8 GiB)
Compute capability: 6.1

esterel6

1 node, 2 cpus, 24 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel6 -I

Max walltime per node:

  • esterel6-1: 0h
Access condition: production queue
Model: Dell PowerEdge T630
Manufacturing date: 2017-04-18
Date of arrival: 2025-02-26
CPU: Intel Xeon E5-2650 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 12 cores/CPU
Memory: 64 GiB
Storage: disk0, 2.0 TB HDD SAS Dell PERC H730 Adp (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

esterel7

2 nodes, 4 cpus, 32 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel7 -I

Max walltime per node:

  • esterel7-[1-2]: 168h
Access condition: production queue
Model: Dell PowerEdge T630
Manufacturing date: 2017-05-23
Date of arrival: 2025-03-06
CPU: Intel Xeon E5-2620 v4 (Broadwell), x86_64, 2.10GHz, 2 CPUs/node, 8 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 999 GB HDD RAID Dell PERC H730 Adp (dev: /dev/disk0) (primary disk)
  • disk1, 399 GB HDD RAID Dell PERC H730 Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

esterel10

3 nodes, 6 cpus, 60 cores, split as follows due to differences between nodes (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel10 -I

Max walltime per node:

  • esterel10-[1-3]: 168h
esterel10-[1-2] (2 nodes, 4 cpus, 40 cores)
Access condition: production queue
Model: Dell T630
Manufacturing date: 2017-11-15
Date of arrival: 2024-12-19
CPU: Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 1.6 TB SSD SAS Toshiba THNSF81D60CSE (dev: /dev/disk0) (primary disk)
  • disk1, 600 GB HDD SAS Toshiba AL14SEB060NY (dev: /dev/disk1)
  • disk2, 600 GB HDD SAS Toshiba AL14SEB060NY (dev: /dev/disk2)
Network:
  • eth0/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth1/enp1s0f1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

esterel10-3 (1 node, 2 cpus, 20 cores)
Access condition: production queue
Model: Dell T630
Manufacturing date: 2017-11-15
Date of arrival: 2024-12-19
CPU: Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 1.6 TB SSD SAS Toshiba THNSF81D60CSE (dev: /dev/disk0) (primary disk)
  • disk1, 600 GB HDD SAS Toshiba AL14SEB060NY (dev: /dev/disk1)
  • disk2, 600 GB HDD SAS Toshiba AL14SEB060NY (dev: /dev/disk2)
Network:
  • eth0/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth1/enp1s0f1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 3 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

esterel11

2 nodes, 4 cpus, 32 cores, split as follows due to differences between nodes (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel11 -I

Max walltime per node:

  • esterel11-[1-2]: 168h
esterel11-1 (1 node, 2 cpus, 16 cores)
Access condition: production queue
Model: Dell PowerEdge T630
Manufacturing date: 2017-11-14
Date of arrival: 2025-04-30
CPU: Intel Xeon E5-2620 v4 (Broadwell), x86_64, 2.10GHz, 2 CPUs/node, 8 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 599 GB HDD SAS Dell PERC H730P Adp (dev: /dev/disk0) (primary disk)
  • disk1, 479 GB HDD SATA Dell PERC H730P Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

esterel11-2 (1 node, 2 cpus, 16 cores)
Access condition: production queue
Model: Dell PowerEdge T630
Manufacturing date: 2017-11-14
Date of arrival: 2025-04-30
CPU: Intel Xeon E5-2620 v4 (Broadwell), x86_64, 2.10GHz, 2 CPUs/node, 8 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 599 GB HDD SAS Dell PERC H730P Adp (dev: /dev/disk0) (primary disk)
  • disk1, 479 GB HDD SATA Dell PERC H730P Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 3 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

esterel12

1 node, 2 cpus, 20 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel12 -I

Max walltime per node:

  • esterel12-1: 168h
Access condition: production queue
Model: Dell PowerEdge T630
Manufacturing date: 2018-03-02
Date of arrival: 2025-04-11
CPU: Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU
Memory: 64 GiB
Storage:
  • disk0, 599 GB HDD RAID Dell PERC H730P Adp (dev: /dev/disk0) (primary disk)
  • disk1, 1.6 TB HDD RAID Dell PERC H730P Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

esterel22

1 node, 2 cpus, 36 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel22 -I

Max walltime per node:

  • esterel22-1: 168h
Access condition: production queue
Model: Dell PowerEdge T640
Manufacturing date: 2019-10-07
Date of arrival: 2025-04-09
CPU: Intel Xeon Gold 6240 (Cascade Lake-SP), x86_64, 2.60GHz, 2 CPUs/node, 18 cores/CPU
Memory: 384 GiB
Storage:
  • disk0, 599 GB HDD SAS Dell PERC H730P Adp (dev: /dev/disk0) (primary disk)
  • disk1, 3.84 TB HDD SAS Dell PERC H730P Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - no KaVLAN
  • eth1/eno2np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia Quadro RTX 6000 (23 GiB)
Compute capability: 7.5

esterel24

1 node, 2 cpus, 48 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel24 -I

Max walltime per node:

  • esterel24-1: 168h
Access condition: production queue
Model: Dell PowerEdge T640
Manufacturing date: 2020-09-27
Date of arrival: 2025-03-27
CPU: Intel Xeon Gold 6240R (Cascade Lake-SP), x86_64, 2.40GHz, 2 CPUs/node, 24 cores/CPU
Memory: 384 GiB
Storage:
  • disk0, 599 GB HDD SAS Dell PERC H730P Adp (dev: /dev/disk0) (primary disk)
  • disk1, 4.8 TB HDD SATA Dell PERC H730P Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - no KaVLAN
  • eth1/eno2np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
GPU: 4 x Nvidia Quadro RTX 8000 (45 GiB)
Compute capability: 7.5

esterel26

1 node, 2 cpus, 32 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel26 -I

Max walltime per node:

  • esterel26-1: 168h
Access condition: production queue
Model: Dell PowerEdge T640
Manufacturing date: 2020-10-30
Date of arrival: 2025-03-25
CPU: Intel Xeon Silver 4216 (Cascade Lake-SP), x86_64, 2.10GHz, 2 CPUs/node, 16 cores/CPU
Memory: 384 GiB
Storage:
  • disk0, 599 GB HDD RAID Dell PERC H730P Adp (dev: /dev/disk0) (primary disk)
  • disk1, 3.84 TB HDD RAID Dell PERC H730P Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - no KaVLAN
  • eth1/eno2np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth2/enp137s0d1, Ethernet, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_en - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
GPU: 4 x Nvidia Quadro RTX 8000 (45 GiB)
Compute capability: 7.5

esterel27

1 node, 2 cpus, 20 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel27 -I

Max walltime per node:

  • esterel27-1: 168h
Access condition: production queue
Model: Asus ESC8000G4
Manufacturing date: 2019-01-01
Date of arrival: 2025-03-17
CPU: Intel Xeon Gold 5115 (Skylake-SP), x86_64, 2.40GHz, 2 CPUs/node, 10 cores/CPU
Memory: 256 GiB
Storage:
  • disk0, 511 GB HDD SAS ASUS AsustekPIKE3108 (dev: /dev/disk0) (primary disk)
  • disk1, 4.09 TB HDD SAS ASUS AsustekPIKE3108 (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled - no KaVLAN
  • eth1/enp129s0f1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 8 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

esterel32

1 node, 2 cpus, 56 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel32 -I

Max walltime per node:

  • esterel32-1: 168h
Access condition: production queue
Model: Dell PowerEdge T640
Manufacturing date: 2020-08-09
Date of arrival: 2025-03-20
CPU: Intel Xeon Gold 6238R (Cascade Lake-SP), x86_64, 2.20GHz, 2 CPUs/node, 28 cores/CPU
Memory: 768 GiB
Storage:
  • disk0, 479 GB HDD RAID Dell PERC H730P Adp (dev: /dev/disk0) (primary disk)
  • disk1, 7.68 TB HDD RAID Dell PERC H730P Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - no KaVLAN
  • eth1/eno2np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth2/enp137s0d1, Ethernet, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_en - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
GPU: 4 x Nvidia Quadro RTX 8000 (45 GiB)
Compute capability: 7.5

esterel33

1 node, 2 cpus, 32 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel33 -I

Max walltime per node:

  • esterel33-1: 168h
Access condition: production queue
Model: Dell PowerEdge R7525
Manufacturing date: 2021-05-13
Date of arrival: 2025-04-28
CPU: AMD EPYC 7282 (Zen 2), x86_64, 2 CPUs/node, 16 cores/CPU
Memory: 256 GiB
Storage:
  • disk0, 479 GB SSD SATA Dell PERC H745 Frnt (dev: /dev/disk0) (primary disk)
  • disk1, 2.88 TB SSD SATA Dell PERC H745 Frnt (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - no KaVLAN
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
GPU: 3 x Nvidia A40 (45 GiB)
Compute capability: 8.6

esterel41

1 node, 2 cpus, 32 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p esterel41 -I

Max walltime per node:

  • esterel41-1: 168h
Access condition: production queue
Model: HPE ProLiant DL380 Gen11
Manufacturing date: 2024-03-01
Date of arrival: 2025-01-25
CPU: Intel Xeon Gold 6426Y (Sapphire Rapids), x86_64, 2 CPUs/node, 16 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 479 GB SSD SATA HPE MR416i-o Gen11 (dev: /dev/disk0) (primary disk)
  • disk1, 2.88 TB SSD SATA HPE MR416i-o Gen11 (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - no KaVLAN
  • eth1/ens15f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth2/ens15f2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/ens15f3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ibs3, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
GPU: 2 x Nvidia L40 (45 GiB)
Compute capability: 8.9

mercantour1

16 nodes, 32 cpus, 320 cores, split as follows due to differences between nodes (json)

Reservation example:

fsophia: oarsub -q abaca -p mercantour1 -I

Max walltime per node:

  • mercantour1-[1-16]: 168h
mercantour1-[1-5,7-16] (15 nodes, 30 cpus, 300 cores)
Access condition: production queue
Model: Dell PowerEdge C6220 II
Manufacturing date: 2014-04-25
Date of arrival: 2014-04-25
CPU: Intel Xeon E5-2680 v2 (Ivy Bridge), x86_64, 2.80GHz, 2 CPUs/node, 10 cores/CPU
Memory: 192 GiB
Storage: disk0, 2.0 TB HDD SATA Seagate ST2000NM0033-9ZM (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

mercantour1-6 (1 node, 2 cpus, 20 cores)
Access condition: production queue
Model: Dell PowerEdge C6220 II
Manufacturing date: 2014-04-25
Date of arrival: 2014-04-25
CPU: Intel Xeon E5-2680 v2 (Ivy Bridge), x86_64, 2.80GHz, 2 CPUs/node, 10 cores/CPU
Memory: 192 GiB
Storage: disk0, 2.0 TB HDD SATA Toshiba TOSHIBA MG03ACA2 (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

mercantour2

8 nodes, 16 cpus, 128 cores, split as follows due to differences between nodes (json)

Reservation example:

fsophia: oarsub -q abaca -p mercantour2 -I

Max walltime per node:

  • mercantour2-[1-8]: 168h
mercantour2-[1-6,8] (7 nodes, 14 cpus, 112 cores)
Access condition: production queue
Model: Dell PowerEdge C6220 II
Manufacturing date: 2015-09-01
Date of arrival: 2025-01-16
CPU: Intel Xeon E5-2650 v2 (Ivy Bridge), x86_64, 2.60GHz, 2 CPUs/node, 8 cores/CPU
Memory: 256 GiB
Storage: disk0, 1.0 TB HDD SATA Seagate ST1000NM0033-9ZM (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
  • ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment

mercantour2-7 (1 node, 2 cpus, 16 cores)
Access condition: production queue
Model: Dell PowerEdge C6220 II
Manufacturing date: 2015-09-01
Date of arrival: 2025-01-16
CPU: Intel Xeon E5-2650 v2 (Ivy Bridge), x86_64, 2.60GHz, 2 CPUs/node, 8 cores/CPU
Memory: 256 GiB
Storage: disk0, 1.0 TB HDD SATA Toshiba TOSHIBA MG03ACA1 (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
  • ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment

mercantour5

4 nodes, 8 cpus, 144 cores (json)

Reservation example:

fsophia: oarsub -q abaca -p mercantour5 -I

Max walltime per node:

  • mercantour5-[1-4]: 168h
Access condition: production queue
Model: Dell PowerEdge C6420
Manufacturing date: 2019-07-30
Date of arrival: 2025-02-24
CPU: Intel Xeon Gold 6240 (Cascade Lake-SP), x86_64, 2.60GHz, 2 CPUs/node, 18 cores/CPU
Memory: 384 GiB
Storage:
  • disk0, 599 GB HDD SAS Dell PERC H330 Mini (dev: /dev/disk0) (primary disk)
  • disk1, 959 GB HDD SSD Dell PERC H330 Mini (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

mercantour6

1 node, 2 cpus, 64 cores (json)

Reservation example:

fsophia:
oarsub -q abaca -p mercantour6 -I

Max walltime per node:

  • mercantour6-1: 168h
Access condition: abaca queue
Model: Dell PowerEdge R7525
Manufacturing date: 2020-10-05
Date of arrival: 2025-02-27
CPU: AMD EPYC 7542 (Zen 2), x86_64, 2 CPUs/node, 32 cores/CPU
Memory: 1.0 TiB
Storage:
  • disk0, 239 GB SSD RAID Dell PERC H745 Frnt (dev: /dev/disk0) (primary disk)
  • disk1, 1.92 TB SSD RAID Dell PERC H745 Frnt (dev: /dev/disk1)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - no KaVLAN
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27520 Family [ConnectX-3 Pro], driver: mlx4_core

mercantour7

1 node, 2 cpus, 64 cores (json)

Reservation example:

fsophia:
oarsub -q abaca -p mercantour7 -I

Max walltime per node:

  • mercantour7-1: 168h
Access condition: abaca queue
Model: Dell PowerEdge R7525
Manufacturing date: 2020-11-13
Date of arrival: 2025-03-20
CPU: AMD EPYC 7502 (Zen 2), x86_64, 2 CPUs/node, 32 cores/CPU
Memory: 384 GiB
Storage:
  • disk0, 959 GB SSD RAID Dell PERC H745 Adp (dev: /dev/disk0) (primary disk)
  • disk1, 48.01 TB HDD RAID Dell PERC H745 Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - no KaVLAN
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27520 Family [ConnectX-3 Pro], driver: mlx4_core

musa

6 nodes, 12 cpus, 288 cores (json)

Reservation example:

fsophia:
oarsub -q abaca -p musa -I

Max walltime per node:

  • musa-[1-2]: 6h
  • musa-[3-4]: 24h
  • musa-5: 48h
  • musa-6: 168h
Access condition: abaca queue
Model: ProLiant DL385 Gen11
Manufacturing date: 2024-12-09
Date of arrival: 2025-01-16
CPU: AMD EPYC 9254 (Zen 4), x86_64, 2 CPUs/node, 24 cores/CPU
Memory: 512 GiB
Storage: disk0, 6.4 TB SSD NVME Samsung MO006400KYDND (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en
  • eth1/ens22f1np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
GPU: 2 x Nvidia H100 NVL (94 GiB)
Compute capability: 9.0
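Because walltime limits on musa differ per node (6h on musa-[1-2] up to 168h on musa-6), longer jobs must be sized to land on the nodes that allow them. A hedged sketch of typical `oarsub` invocations, assuming the standard OAR `-l` resource/walltime syntax; `./long_run.sh` is a hypothetical job script, and the host-property form for pinning a node is an assumption to verify against the site's OAR configuration:

```shell
# Interactive job on any musa node, within the shortest per-node limit (6h)
oarsub -q abaca -p musa -l host=1,walltime=6 -I

# A 168h batch job can only run on musa-6: pin it with a host property
# (assumption: adjust the property syntax to your OAR version if needed)
oarsub -q abaca -p "host='musa-6.sophia.grid5000.fr'" -l host=1,walltime=168 ./long_run.sh
```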

Clusters in the testing queue

esterel31

4 nodes, 8 cpus, 208 cores (json)

Reservation example:

Terminal.png fsophia:
oarsub -q testing -p esterel31 -I
Access condition: testing queue
Model: Dell PowerEdge T640
Manufacturing date: 2020-08-10
Date of arrival: 2025-04-16
CPU: Intel Xeon Gold 6230R (Cascade Lake-SP), x86_64, 2.10GHz, 2 CPUs/node, 26 cores/CPU
Memory: 384 GiB
Storage:
  • disk0, 479 GB HDD SAS Dell PERC H730P Adp (dev: /dev/disk0) (primary disk)
  • disk1, 3.84 TB HDD SAS Dell PERC H730P Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 10 Gbps, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - no KaVLAN
  • eth1/eno2np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth2/enp137s0d1, Ethernet, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_en - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
GPU: 4 x Nvidia Quadro RTX 8000 (45 GiB)
Compute capability: 7.5

mercantour4

1 node, 4 cpus, 80 cores (json)

Reservation example:

Terminal.png fsophia:
oarsub -q testing -p mercantour4 -I
Access condition: testing queue
Model: Dell PowerEdge R940
Manufacturing date: 2017-11-29
Date of arrival: 2025-03-14
CPU: Intel Xeon Gold 6148 (Skylake-SP), x86_64, 2.40GHz, 4 CPUs/node, 20 cores/CPU
Memory: 1.0 TiB
Storage:
  • disk0, 1.92 TB SSD SAS Dell PERC H740P Adp (dev: /dev/disk0) (primary disk)
  • disk1, 599 GB HDD SAS Dell PERC H740P Adp (dev: /dev/disk1)
  • disk2, 599 GB HDD SAS Dell PERC H740P Adp (dev: /dev/disk2)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 20 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core

Last generated from the Grid'5000 Reference API on 2025-05-07 (commit 93bdbdc224)
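The page above is rendered from the Grid'5000 Reference API, which can also be queried directly to get the same hardware descriptions in JSON. A minimal sketch, assuming `curl` and `jq` are available and noting that access from outside Grid'5000 requires API authentication:

```shell
# List the cluster names known to the Reference API for the Sophia site
curl -s https://api.grid5000.fr/stable/sites/sophia/clusters | jq -r '.items[].uid'

# Full per-node description of one cluster (CPU, memory, storage, NICs, GPUs)
curl -s https://api.grid5000.fr/stable/sites/sophia/clusters/musa/nodes | jq .
```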