Sophia:Hardware

From Grid5000

Revision as of 17:35, 16 May 2024

See also: Network topology for Sophia

Summary

  • 1 cluster
  • 30 nodes
  • 360 CPU cores
  • 2.81 TiB RAM
  • 30 HDDs on nodes (total: 7.5 TB)
  • 4.2 TFLOPS (excluding GPUs)
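The summary figures follow directly from the per-node specs below. A minimal sanity-check sketch; the 4 double-precision flops/cycle figure for the Westmere-generation X5670 (128-bit SSE, no AVX) is an assumption, not stated on this page:

```python
# Recompute the Sophia summary figures from the uvb per-node specs.
nodes = 30
cpus_per_node = 2
cores_per_cpu = 6
ghz = 2.93            # Xeon X5670 base clock
ram_gib_per_node = 96
hdd_gb_per_node = 250
flops_per_cycle = 4   # assumed: 2 DP adds + 2 DP muls per cycle via SSE

cores = nodes * cpus_per_node * cores_per_cpu   # 360 CPU cores
ram_tib = nodes * ram_gib_per_node / 1024       # ~2.81 TiB RAM
hdd_tb = nodes * hdd_gb_per_node / 1000         # 7.5 TB of HDDs
tflops = cores * ghz * flops_per_cycle / 1000   # ~4.2 TFLOPS (no GPUs)

print(cores, round(ram_tib, 2), hdd_tb, round(tflops, 1))
```

Each result matches the corresponding bullet above, which is a useful cross-check when the node count or memory of a cluster changes.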

Clusters summary

Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPU | Memory | Storage | Network
uvb | | 2011-01-04 | 2011-01-04 | 30 | 2 × Intel Xeon X5670, 6 cores/CPU, x86_64 | 96 GiB | 250 GB HDD | 1 Gbps (SR-IOV) + 40 Gbps InfiniBand

Clusters in the default queue

uvb

30 nodes, 60 CPUs, 360 cores (json)

Reservation example:

fsophia$ oarsub -p uvb -I
Model: Dell PowerEdge C6100
Manufacturing date: 2011-01-04
Date of arrival: 2011-01-04
CPU: Intel Xeon X5670 (Westmere), x86_64, 2.93GHz, 2 CPUs/node, 6 cores/CPU
Memory: 96 GiB
Storage: disk0, 250 GB HDD SATA Western Digital WDC WD2502ABYS-1 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
  • ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment
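Only two of the four adapters above can be used in experiments. A small sketch of filtering a node description down to the usable interfaces; the dict mirrors the list above, and the key names (`device`, `type`, `rate_gbps`, `available`) are illustrative, not the exact Grid'5000 Reference API schema:

```python
# Node adapters as listed above; entries marked "unavailable for
# experiment" on the page get available=False here.
adapters = [
    {"device": "eth0", "type": "Ethernet",   "rate_gbps": 1,    "available": True},
    {"device": "eth1", "type": "Ethernet",   "rate_gbps": None, "available": False},
    {"device": "ib0",  "type": "InfiniBand", "rate_gbps": 40,   "available": True},
    {"device": "ib1",  "type": "InfiniBand", "rate_gbps": None, "available": False},
]

# Keep only the interfaces a job can actually configure.
usable = [a["device"] for a in adapters if a["available"]]
print(usable)  # ['eth0', 'ib0']
```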

Last generated from the Grid'5000 Reference API on 2024-05-16 (commit 037c3db019)