Sophia:Hardware

Revision as of 13:39, 7 June 2022

See also: Network topology for Sophia

Summary

  • 1 cluster
  • 30 nodes
  • 360 CPU cores
  • 2.81 TiB RAM
  • 30 HDDs on nodes (total: 7.5 TB)
  • 4.2 TFLOPS (excluding GPUs)
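The summary figures above follow directly from the per-node specs. A minimal sketch that recomputes them, assuming 4 double-precision FLOPs per cycle per core for the Westmere X5670 (an assumption about the microarchitecture, not stated on this page):

```shell
#!/bin/sh
# Recompute the site summary from the uvb cluster specs listed below.
NODES=30           # uvb node count
CPUS_PER_NODE=2    # 2 x Intel Xeon X5670 per node
CORES_PER_CPU=6    # 6 cores/CPU
RAM_PER_NODE=96    # GiB per node
GHZ_X100=293       # 2.93 GHz, scaled by 100 for integer shell arithmetic

CORES=$((NODES * CPUS_PER_NODE * CORES_PER_CPU))   # 360 cores
RAM_GIB=$((NODES * RAM_PER_NODE))                  # 2880 GiB = 2.81 TiB
# Peak DP GFLOPS, assuming 4 FLOPs/cycle/core (SSE on Westmere).
GFLOPS=$((CORES * GHZ_X100 * 4 / 100))             # 4219 GFLOPS = 4.2 TFLOPS
echo "$CORES cores, $RAM_GIB GiB RAM, $GFLOPS GFLOPS peak"
```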

Clusters

Cluster | Access Condition | Date of arrival | Nodes | CPUs/node | CPU name | Cores | Architecture | Memory | Storage | Network
uvb     |                  | 2011-01-04      | 30    | 2         | Intel Xeon X5670 | 6 cores/CPU | x86_64 | 96 GiB | 250 GB HDD | 1 Gbps (SR‑IOV) + 40 Gbps InfiniBand

Clusters in the default queue

uvb

30 nodes, 60 CPUs, 360 cores (json)

Reservation example:

fsophia$ oarsub -p uvb -I
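The interactive one-liner above can be parameterized for batch use; a minimal sketch, where `-l` is the standard OAR resource-request option and the node count and walltime are example choices, not values from this page:

```shell
#!/bin/sh
# Build an oarsub command for the uvb cluster (sketch; run it on the
# fsophia frontend, where oarsub is available).
CLUSTER=uvb
NODES=2                # example: reserve 2 nodes
WALLTIME=2:00:00       # example: 2-hour walltime
CMD="oarsub -p ${CLUSTER} -l nodes=${NODES},walltime=${WALLTIME} -I"
echo "$CMD"
```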
Model: Dell PowerEdge C6100
Date of arrival: 2011-01-04
CPU: Intel Xeon X5670 (Westmere, 2.93 GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 96 GiB
Storage: disk0, 250 GB HDD SATA Western Digital WDC WD2502ABYS-1 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
  • ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment

Last generated from the Grid'5000 Reference API on 2022-06-07 (commit 5d4b87f3e6)