Sophia:Hardware

See also: Network topology for Sophia

Summary

  • 15 clusters
  • 37 nodes
  • 992 CPU cores
  • 55 GPUs
  • 350208 GPU cores
  • 10.53 TiB RAM
  • 14 SSDs and 44 HDDs on nodes (total: 141.97 TB)
  • 71.3 TFLOPS (excluding GPUs)
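
These figures are generated from the Grid'5000 Reference API; each cluster section below links its json description. A minimal sketch of fetching one such description from the public endpoint, assuming curl and jq are available and that every cluster follows the same URL pattern:

  # illustrative query: list node uids of the uvb cluster (assumes the API's usual items[] envelope)
  curl -s "https://public-api.grid5000.fr/stable/sites/sophia/clusters/uvb/nodes.json?pretty=1" | jq '.items[].uid'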

Default queue resources

  • 1 cluster
  • 4 nodes
  • 48 CPU cores
  • 384 GiB RAM
  • 4 HDDs on nodes (total: 1.0 TB)
  • 0.6 TFLOPS (excluding GPUs)

Production queue resources

  • 10 clusters
  • 29 nodes
  • 808 CPU cores
  • 43 GPUs
  • 309248 GPU cores
  • 9.03 TiB RAM
  • 13 SSDs and 33 HDDs on nodes (total: 79.36 TB)
  • 65.7 TFLOPS (excluding GPUs)

Clusters summary

Default queue resources

Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs/node | CPU model | Cores/CPU | Architecture | Memory | Storage | Network | Accelerators
uvb | | 2011-01-04 | 2011-01-04 | 4 | 2 | Intel Xeon X5670 | 6 cores/CPU | x86_64 | 96 GiB | 250 GB HDD | 1 Gbps (SR‑IOV) + 40 Gbps InfiniBand |

**: crossed GPUs are not supported by Grid'5000 default environments

Production queue resources

Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs/node | CPU model | Cores/CPU | Architecture | Memory | Storage | Network | Accelerators
esterel1 | production queue | 2025-03-13 | 2016-11-02 | 1 | 2 | Intel Xeon E5-2623 v4 | 4 cores/CPU | x86_64 | 32 GiB | 399 GB HDD + 399 GB HDD | 1 Gbps + 40 Gbps InfiniBand |
esterel2 | production queue | 2025-03-12 | 2016-05-04 | 1 | 2 | Intel Xeon E5-2620 v3 | 6 cores/CPU | x86_64 | 128 GiB | 2.0 TB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia GTX 1080 Ti (11 GiB)
esterel5 | production queue | 2025-02-25 | 2016-06-08 | 2 | 2 | Intel Xeon E5-2630 v4 | 10 cores/CPU | x86_64 | 128 GiB | 2.0 TB HDD + 1.6 TB HDD | 1 Gbps + 40 Gbps InfiniBand | 3 x Nvidia GTX 1080 (8 GiB)
esterel7 | production queue | 2025-03-06 | 2017-05-23 | 2 | 2 | Intel Xeon E5-2620 v4 | 8 cores/CPU | x86_64 | 128 GiB | 999 GB HDD + 399 GB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia GTX 1080 Ti (11 GiB)
esterel10 | production queue | 2024-12-19 | 2017-11-15 | 3 | 2 | Intel Xeon E5-2630 v4 | 10 cores/CPU | x86_64 | 128 GiB | 1.6 TB SSD + 2 x 600 GB HDD | 1 Gbps + 56 Gbps InfiniBand | [1-2]: 4 x Nvidia GTX 1080 Ti (11 GiB); 3: 3 x Nvidia GTX 1080 Ti (11 GiB)
esterel41 | production queue | 2025-01-25 | 2024-03-01 | 1 | 2 | Intel Xeon Gold 6426Y | 16 cores/CPU | x86_64 | 512 GiB | 479 GB SSD + 2.88 TB SSD | 1 Gbps + 56 Gbps InfiniBand | 2 x Nvidia L40 (45 GiB)
mercantour2 | production queue | 2025-01-16 | 2015-09-01 | 8 | 2 | Intel Xeon E5-2650 v2 | 8 cores/CPU | x86_64 | 256 GiB | 1.0 TB HDD | 1 Gbps (SR‑IOV) + 40 Gbps InfiniBand |
mercantour5 | production queue | 2025-02-24 | 2019-07-30 | 4 | 2 | Intel Xeon Gold 6240 | 18 cores/CPU | x86_64 | 384 GiB | 599 GB HDD + 959 GB HDD | 1 Gbps + 40 Gbps InfiniBand |
mercantour6 | production queue | 2025-02-27 | 2020-10-05 | 1 | 2 | AMD EPYC 7542 | 32 cores/CPU | x86_64 | 1.0 TiB | 239 GB SSD + 1.92 TB SSD | 1 Gbps + 40 Gbps InfiniBand |
musa | production queue | 2025-01-16 | 2024-12-09 | 6 | 2 | AMD EPYC 9254 | 24 cores/CPU | x86_64 | 512 GiB | 6.4 TB SSD | 25 Gbps | 2 x Nvidia Tesla H100 (94 GiB)

**: crossed GPUs are not supported by Grid'5000 default environments

Testing queue resources

Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs/node | CPU model | Cores/CPU | Architecture | Memory | Storage | Network | Accelerators
esterel4 | testing queue | 2025-03-20 | 2016-06-08 | 1 | 2 | Intel Xeon E5-2630 v4 | 10 cores/CPU | x86_64 | 128 GiB | 2.0 TB HDD + 1.6 TB HDD | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia GTX TITAN X (12 GiB)
esterel26 | testing queue | 2025-03-25 | 2020-10-30 | 1 | 2 | Intel Xeon Silver 4216 | 16 cores/CPU | x86_64 | 384 GiB | 599 GB HDD + 3.84 TB HDD | 1 Gbps + 40 Gbps InfiniBand |
esterel27 | testing queue | 2025-03-17 | 2019-01-01 | 1 | 2 | Intel Xeon Gold 5115 | 10 cores/CPU | x86_64 | 256 GiB | 511 GB HDD + 4.09 TB HDD | 1 Gbps (SR‑IOV) + 40 Gbps InfiniBand | 8 x Nvidia GTX 1080 Ti (11 GiB)
mercantour7 | testing queue | 2025-03-20 | 2020-11-13 | 1 | 2 | AMD EPYC 7502 | 32 cores/CPU | x86_64 | 384 GiB | 959 GB SSD + 48.01 TB HDD | 1 Gbps + 40 Gbps InfiniBand |

**: crossed GPUs are not supported by Grid'5000 default environments

Clusters in the default queue

uvb

4 nodes, 8 cpus, 48 cores (json)

Reservation example:

fsophia: oarsub -p uvb -I
Model: Dell PowerEdge C6100
Manufacturing date: 2011-01-04
Date of arrival: 2011-01-04
CPU: Intel Xeon X5670 (Westmere), x86_64, 2.93GHz, 2 CPUs/node, 6 cores/CPU
Memory: 96 GiB
Storage: disk0, 250 GB HDD SATA Western Digital WDC WD2502ABYS-1 (dev: /dev/disk0) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
  • ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment
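
The reservation example above takes OAR's default walltime and a single node. A variant with an explicit resource and walltime specification, as a sketch (node count and duration are illustrative values, not cluster limits):

  fsophia: oarsub -p uvb -l nodes=2,walltime=1:30:00 -I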

Clusters in the production queue

esterel1

1 node, 2 cpus, 8 cores (json)

Reservation example:

fsophia: oarsub -q production -p esterel1 -I

Max walltime per node:

  • esterel1-1: 168h
Access condition: production queue
Model: Dell PowerEdge R730
Manufacturing date: 2016-11-02
Date of arrival: 2025-03-13
CPU: Intel Xeon E5-2623 v4 (Broadwell), x86_64, 2.60GHz, 2 CPUs/node, 4 cores/CPU
Memory: 32 GiB
Storage:
  • disk0, 399 GB HDD SAS Dell PERC H730 Mini (dev: /dev/disk0) (primary disk)
  • disk1, 399 GB HDD SAS Dell PERC H730 Mini (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - no KaVLAN
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth2/eno3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/eno4, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

esterel2

1 node, 2 cpus, 12 cores (json)

Reservation example:

fsophia: oarsub -q production -p esterel2 -I

Max walltime per node:

  • esterel2-1: 168h
Access condition: production queue
Model: Dell PowerEdge T630
Manufacturing date: 2016-05-04
Date of arrival: 2025-03-12
CPU: Intel Xeon E5-2620 v3 (Haswell), x86_64, 2.40GHz, 2 CPUs/node, 6 cores/CPU
Memory: 128 GiB
Storage: disk0, 2.0 TB HDD SAS Dell PERC H730 Adp (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1
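
For GPU clusters such as this one, Grid'5000's OAR setup also exposes a gpu resource type, so a single GPU can be reserved instead of the whole node; a minimal sketch, assuming the gpu resource hierarchy is enabled on esterel2:

  fsophia: oarsub -q production -p esterel2 -l gpu=1 -I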

esterel5

2 nodes, 4 cpus, 40 cores (json)

Reservation example:

fsophia: oarsub -q production -p esterel5 -I

Max walltime per node:

  • esterel5-[1-2]: 168h
Access condition: production queue
Model: Dell PowerEdge T630
Manufacturing date: 2016-06-08
Date of arrival: 2025-02-25
CPU: Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 2.0 TB HDD SAS Dell PERC H730 Adp (dev: /dev/disk0) (primary disk)
  • disk1, 1.6 TB HDD SSD Dell PERC H730 Adp (dev: /dev/disk1)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 3 x Nvidia GeForce GTX 1080 (8 GiB)
Compute capability: 6.1

esterel7

2 nodes, 4 cpus, 32 cores (json)

Reservation example:

fsophia: oarsub -q production -p esterel7 -I

Max walltime per node:

  • esterel7-[1-2]: 168h
Access condition: production queue
Model: Dell PowerEdge T630
Manufacturing date: 2017-05-23
Date of arrival: 2025-03-06
CPU: Intel Xeon E5-2620 v4 (Broadwell), x86_64, 2.10GHz, 2 CPUs/node, 8 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 999 GB HDD RAID Dell PERC H730 Adp (dev: /dev/disk0) (primary disk)
  • disk1, 399 GB HDD RAID Dell PERC H730 Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

esterel10

3 nodes, 6 cpus, 60 cores, split as follows due to differences between nodes (json)

Reservation example:

fsophia: oarsub -q production -p esterel10 -I

Max walltime per node:

  • esterel10-[1-3]: 168h
esterel10-[1-2] (2 nodes, 4 cpus, 40 cores)
Access condition: production queue
Model: Dell T630
Manufacturing date: 2017-11-15
Date of arrival: 2024-12-19
CPU: Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 1.6 TB SSD SAS Toshiba THNSF81D60CSE (dev: /dev/disk0) (primary disk)
  • disk1, 600 GB HDD SAS Toshiba AL14SEB060NY (dev: /dev/disk1)
  • disk2, 600 GB HDD SAS Toshiba AL14SEB060NY (dev: /dev/disk2)
Network:
  • eth0/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth1/enp1s0f1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

esterel10-3 (1 node, 2 cpus, 20 cores)
Access condition: production queue
Model: Dell T630
Manufacturing date: 2017-11-15
Date of arrival: 2024-12-19
CPU: Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 1.6 TB SSD SAS Toshiba THNSF81D60CSE (dev: /dev/disk0) (primary disk)
  • disk1, 600 GB HDD SAS Toshiba AL14SEB060NY (dev: /dev/disk1)
  • disk2, 600 GB HDD SAS Toshiba AL14SEB060NY (dev: /dev/disk2)
Network:
  • eth0/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth1/enp1s0f1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 3 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1
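
Since the two node groups differ (4 GPUs on esterel10-[1-2], 3 on esterel10-3), a reservation can be pinned to a specific host with an OAR property filter; a sketch, assuming the usual host property syntax:

  fsophia: oarsub -q production -p "host='esterel10-3.sophia.grid5000.fr'" -I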

esterel41

1 node, 2 cpus, 32 cores (json)

Reservation example:

fsophia: oarsub -q production -p esterel41 -I

Max walltime per node:

  • esterel41-1: 168h
Access condition: production queue
Model: DL380 Gen11
Manufacturing date: 2024-03-01
Date of arrival: 2025-01-25
CPU: Intel Xeon Gold 6426Y (Sapphire Rapids), x86_64, 2 CPUs/node, 16 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 479 GB SSD SATA HPE MR416i-o Gen11 (dev: /dev/disk0) (primary disk)
  • disk1, 2.88 TB SSD SATA HPE MR416i-o Gen11 (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - no KaVLAN
  • eth1/ens15f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth2/ens15f2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/ens15f3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ibs3, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
GPU: 2 x Nvidia L40 (45 GiB)
Compute capability: 8.9

mercantour2

8 nodes, 16 cpus, 128 cores, split as follows due to differences between nodes (json)

Reservation example:

fsophia: oarsub -q production -p mercantour2 -I

Max walltime per node:

  • mercantour2-[1-8]: 168h
mercantour2-[1-6,8] (7 nodes, 14 cpus, 112 cores)
Access condition: production queue
Model: Dell PowerEdge C6220 II
Manufacturing date: 2015-09-01
Date of arrival: 2025-01-16
CPU: Intel Xeon E5-2650 v2 (Ivy Bridge), x86_64, 2.60GHz, 2 CPUs/node, 8 cores/CPU
Memory: 256 GiB
Storage: disk0, 1.0 TB HDD SATA Seagate ST1000NM0033-9ZM (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
  • ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment

mercantour2-7 (1 node, 2 cpus, 16 cores)
Access condition: production queue
Model: Dell PowerEdge C6220 II
Manufacturing date: 2015-09-01
Date of arrival: 2025-01-16
CPU: Intel Xeon E5-2650 v2 (Ivy Bridge), x86_64, 2.60GHz, 2 CPUs/node, 8 cores/CPU
Memory: 256 GiB
Storage: disk0, 1.0 TB HDD SATA Toshiba MG03ACA1 (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
  • ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment

mercantour5

4 nodes, 8 cpus, 144 cores (json)

Reservation example:

fsophia: oarsub -q production -p mercantour5 -I

Max walltime per node:

  • mercantour5-[1-4]: 168h
Access condition: production queue
Model: Dell PowerEdge C6420
Manufacturing date: 2019-07-30
Date of arrival: 2025-02-24
CPU: Intel Xeon Gold 6240 (Cascade Lake-SP), x86_64, 2.60GHz, 2 CPUs/node, 18 cores/CPU
Memory: 384 GiB
Storage:
  • disk0, 599 GB HDD SAS Dell PERC H330 Mini (dev: /dev/disk0) (primary disk)
  • disk1, 959 GB HDD SSD Dell PERC H330 Mini (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

mercantour6

1 node, 2 cpus, 64 cores (json)

Reservation example:

fsophia: oarsub -q production -p mercantour6 -I

Max walltime per node:

  • mercantour6-1: 168h
Access condition: production queue
Model: Dell PowerEdge R7525
Manufacturing date: 2020-10-05
Date of arrival: 2025-02-27
CPU: AMD EPYC 7542 (Zen 2), x86_64, 2 CPUs/node, 32 cores/CPU
Memory: 1.0 TiB
Storage:
  • disk0, 239 GB SSD RAID Dell PERC H745 Frnt (dev: /dev/disk0) (primary disk)
  • disk1, 1.92 TB SSD RAID Dell PERC H745 Frnt (dev: /dev/disk1)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - no KaVLAN
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27520 Family [ConnectX-3 Pro], driver: mlx4_core

musa

6 nodes, 12 cpus, 288 cores (json)

Reservation example:

fsophia: oarsub -q production -p musa -I

Max walltime per node:

  • musa-[1-2]: 6h
  • musa-[3-4]: 24h
  • musa-[5-6]: 48h
Access condition: production queue
Model: ProLiant DL385 Gen11
Manufacturing date: 2024-12-09
Date of arrival: 2025-01-16
CPU: AMD EPYC 9254 (Zen 4), x86_64, 2 CPUs/node, 24 cores/CPU
Memory: 512 GiB
Storage: disk0, 6.4 TB SSD NVME Samsung MO006400KYDND (dev: /dev/disk0) (primary disk)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en
  • eth1/ens22f1np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
GPU: 2 x Nvidia H100 NVL (94 GiB)
Compute capability: 9.0
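
musa is the only cluster here with tiered walltime limits (6h, 24h or 48h depending on the node). Requesting an explicit walltime at submission keeps the job on nodes whose limit can accommodate it; a sketch (duration illustrative):

  fsophia: oarsub -q production -p musa -l walltime=6:00:00 -I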

Clusters in the testing queue

esterel4

1 node, 2 cpus, 20 cores (json)

Reservation example:

fsophia: oarsub -q testing -p esterel4 -I
Access condition: testing queue
Model: Dell PowerEdge T630
Manufacturing date: 2016-06-08
Date of arrival: 2025-03-20
CPU: Intel Xeon E5-2630 v4 (Broadwell), x86_64, 2.20GHz, 2 CPUs/node, 10 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 2.0 TB HDD SAS Dell PERC H730 Adp (dev: /dev/disk0) (primary disk)
  • disk1, 1.6 TB HDD SATA Dell PERC H730 Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 4 x Nvidia GeForce GTX TITAN X (12 GiB)
Compute capability: 5.2

esterel26

1 node, 2 cpus, 32 cores (json)

Reservation example:

fsophia: oarsub -q testing -p esterel26 -I
Access condition: testing queue
Model: Dell PowerEdge T640
Manufacturing date: 2020-10-30
Date of arrival: 2025-03-25
CPU: Intel Xeon Silver 4216 (Cascade Lake-SP), x86_64, 2.10GHz, 2 CPUs/node, 16 cores/CPU
Memory: 384 GiB
Storage:
  • disk0, 599 GB HDD RAID Dell PERC H730P Adp (dev: /dev/disk0) (primary disk)
  • disk1, 3.84 TB HDD RAID Dell PERC H730P Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - no KaVLAN
  • eth1/eno2np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth2/enp137s0d1, Ethernet, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_en - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core

esterel27

1 node, 2 cpus, 20 cores (json)

Reservation example:

fsophia: oarsub -q testing -p esterel27 -I
Access condition: testing queue
Model: Asus ESC8000G4
Manufacturing date: 2019-01-01
Date of arrival: 2025-03-17
CPU: Intel Xeon Gold 5115 (Skylake-SP), x86_64, 2.40GHz, 2 CPUs/node, 10 cores/CPU
Memory: 256 GiB
Storage:
  • disk0, 511 GB HDD SAS ASUS AsustekPIKE3108 (dev: /dev/disk0) (primary disk)
  • disk1, 4.09 TB HDD SAS ASUS AsustekPIKE3108 (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled - no KaVLAN
  • eth1/enp129s0f1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 8 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1
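
Once on the node, the advertised GPU model, memory and compute capability can be cross-checked with the NVIDIA tools; a sketch, assuming nvidia-smi from a recent driver (the compute_cap query field is missing in older drivers):

  nvidia-smi --query-gpu=name,memory.total,compute_cap --format=csv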

mercantour7

1 node, 2 cpus, 64 cores (json)

Reservation example:

fsophia: oarsub -q testing -p mercantour7 -I
Access condition: testing queue
Model: Dell PowerEdge R7525
Manufacturing date: 2020-11-13
Date of arrival: 2025-03-20
CPU: AMD EPYC 7502 (Zen 2), x86_64, 2 CPUs/node, 32 cores/CPU
Memory: 384 GiB
Storage:
  • disk0, 959 GB SSD RAID Dell PERC H745 Adp (dev: /dev/disk0) (primary disk)
  • disk1, 48.01 TB HDD RAID Dell PERC H745 Adp (dev: /dev/disk1)
Network:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - no KaVLAN
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27520 Family [ConnectX-3 Pro], driver: mlx4_core

Last generated from the Grid'5000 Reference API on 2025-03-26 (commit 992ccbdaf8)