Sophia:Hardware

From Grid5000
__NOTOC__
__NOEDITSECTION__
{{Portal|User}}
<div class="sitelink">Hardware: [[Hardware|Global]] | [[Grenoble:Hardware|Grenoble]] | [[Lille:Hardware|Lille]] | [[Luxembourg:Hardware|Luxembourg]] | [[Lyon:Hardware|Lyon]] | [[Nancy:Hardware|Nancy]] | [[Nantes:Hardware|Nantes]] | [[Rennes:Hardware|Rennes]] | [[Sophia:Hardware|Sophia]] | [[Strasbourg:Hardware|Strasbourg]] | [[Toulouse:Hardware|Toulouse]]</div>
'''See also:''' [[Sophia:Network|Network topology for Sophia]]
= Summary =
* 1 cluster
* 4 nodes
* 48 CPU cores
* 384&nbsp;GiB RAM
* 4 HDDs on nodes (total: 1.0&nbsp;TB)
* 0.6 TFLOPS (excluding GPUs)
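The totals above follow from the per-node uvb specification further down this page (4 nodes, 2 CPUs/node, 6 cores/CPU, 96&nbsp;GiB RAM and one 250&nbsp;GB HDD per node, Xeon X5670 at 2.93&nbsp;GHz). A quick sanity check, assuming the usual 4 double-precision FLOPs per cycle for a Westmere core (an assumption not stated on this page):

```shell
# Per-node uvb figures from this page: 4 nodes x 2 CPUs x 6 cores,
# 96 GiB RAM and one 250 GB HDD per node, Xeon X5670 at 2.93 GHz.
# Assumption (not stated on the page): 4 DP FLOPs/cycle per Westmere core.
CORES=$((4 * 2 * 6))                                           # 48 cores
RAM_GIB=$((4 * 96))                                            # 384 GiB
HDD_TB=$(awk 'BEGIN { printf "%.1f", 4 * 0.25 }')              # 1.0 TB
TFLOPS=$(awk 'BEGIN { printf "%.2f", 48 * 2.93 * 4 / 1000 }')  # ~0.56
echo "$CORES cores, $RAM_GIB GiB RAM, $HDD_TB TB HDD, $TFLOPS TFLOPS"
```

The 0.56 TFLOPS peak is rounded up to "0.6 TFLOPS" in the summary.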


= Clusters summary =
{|class="wikitable sortable"
|-
!rowspan=2|Cluster
!rowspan=2|Access Condition
!rowspan=2|Date of arrival
!rowspan=2|Manufacturing date
!data-sort-type="number" rowspan=2|Nodes
!colspan=4|CPU
!data-sort-type="number" rowspan=2|Memory
!data-sort-type="number" rowspan=2|Storage
!data-sort-type="number" rowspan=2|Network
|-
!data-sort-type="number"|#
!|Name
!data-sort-type="number"|Cores
!|Architecture
|-
|[[#uvb|uvb]]||||2011-01-04||2011-01-04||4||2||Intel Xeon X5670||6&nbsp;cores/CPU||x86_64||data-sort-value="103079215104"|96&nbsp;GiB||data-sort-value="232"|<b>250&nbsp;GB&nbsp;HDD</b>||data-sort-value="41000"|1&nbsp;Gbps&nbsp;(SR&#8209;IOV)&nbsp;+&nbsp;40&nbsp;Gbps&nbsp;InfiniBand
|-
|}


= Clusters in the [https://intranet.grid5000.fr/oar/Sophia/drawgantt-svg/ default queue] =


== [https://intranet.grid5000.fr/oar/Sophia/drawgantt-svg/?filter=uvb%20only uvb] ==


'''4 nodes, 8 cpus, 48 cores''' ([https://public-api.grid5000.fr/stable/sites/sophia/clusters/uvb/nodes.json?pretty=1 json])


'''Reservation example:'''


{{Term|location=fsophia|cmd=<code class="command">oarsub</code> <code class="env">-p uvb</code> <code>-I</code>}}
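For non-interactive jobs, OAR's standard <code>-l</code> resource syntax can extend the reservation above. A sketch only: <code>my_script.sh</code> and the host/walltime values are placeholder examples, and <code>oarsub</code> exists only on the site frontend, so the command lines are merely assembled here, not run:

```shell
# Assemble (without running) two oarsub command lines for the uvb cluster.
# oarsub is only available on the Grid'5000 frontend; my_script.sh is a
# hypothetical batch script, and host/walltime values are examples.
INTERACTIVE="oarsub -p uvb -I"
BATCH="oarsub -p uvb -l host=2,walltime=1:00:00 ./my_script.sh"
echo "$INTERACTIVE"
echo "$BATCH"
```

The per-node details shown below are also available programmatically from the Reference API JSON linked above.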


{|
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Model:'''
| Dell PowerEdge C6100<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Manufacturing date:'''
| 2011-01-04<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Date of arrival:'''
| 2011-01-04<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''CPU:'''
| Intel Xeon X5670 (Westmere), x86_64, 2.93GHz, 2&nbsp;CPUs/node, 6&nbsp;cores/CPU<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Memory:'''
| 96&nbsp;GiB<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Storage:'''
|  disk0, 250&nbsp;GB HDD SATA Western Digital WDC WD2502ABYS-1 (dev: <code class="file">/dev/disk0</code>)  (primary disk)<br/>
|-
| valign="top" style="background-color: #f9f9f9; padding: 0px 10px 0px 3px;" |'''Network:'''
|
* eth0/eno1, Ethernet, configured rate: 1&nbsp;Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb, SR-IOV enabled<br />
* <span style="color:grey">eth1/eno2, Ethernet, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment</span><br />
* ib0, InfiniBand, configured rate: 40&nbsp;Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core<br />
* <span style="color:grey">ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment</span><br/>
|-
|}''<small>Last generated from the Grid'5000 Reference API on 2024-10-09 ([https://gitlab.inria.fr/grid5000/reference-repository/commit/91dbeb72d3 commit 91dbeb72d3])</small>''

Latest revision as of 21:26, 9 October 2024
