Sophia:Hardware
Revision as of 22:28, 30 March 2022

See also: Network topology for Sophia

Summary

  • 1 cluster
  • 30 nodes
  • 360 CPU cores
  • 0 GPUs
  • 2.81 TiB RAM
  • 0 SSDs and 30 HDDs on nodes (total: 7.5 TB)
  • 4.2 TFLOPS (excluding GPUs)
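The summary figures follow directly from the per-node specs listed under the uvb cluster below. A quick arithmetic sanity check (the peak-FLOPS figure assumes 4 double-precision FLOPs/cycle/core for Westmere-class Xeons, an assumption not stated on this page):

```python
# Sanity check of the summary figures against the uvb per-node specs.
nodes = 30
cpus_per_node = 2          # 2 x Intel Xeon X5670 per node
cores_per_cpu = 6
ghz = 2.93

cores = nodes * cpus_per_node * cores_per_cpu
print(cores)               # 360 CPU cores

# Assumption: 4 double-precision FLOPs/cycle/core (Westmere SSE peak).
tflops = cores * ghz * 4 / 1000
print(round(tflops, 1))    # 4.2 TFLOPS

ram_tib = nodes * 96 / 1024
print(round(ram_tib, 2))   # 2.81 TiB RAM

hdd_tb = nodes * 0.25      # one 250 GB HDD per node
print(hdd_tb)              # 7.5 TB total storage
```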

Clusters

Cluster | Access Condition | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network
uvb | | 2011-01-04 | 30 | 2 x Intel Xeon X5670 | 6 cores/CPU | 96 GiB | 250 GB HDD | 1 Gbps (SR‑IOV) + 40 Gbps InfiniBand

Clusters in the default queue

uvb

30 nodes, 60 CPUs, 360 cores (json)
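The "(json)" link points at the cluster's machine-readable description in the Grid'5000 Reference API. A minimal sketch of building that request, assuming the documented `/stable/sites/<site>/clusters/<cluster>` URL layout (access from outside Grid'5000 requires authentication, so only the URL is constructed here):

```python
# Sketch: locating a cluster's JSON description in the Grid'5000
# Reference API. Assumes the /stable/sites/<site>/clusters/<cluster>
# path layout; no request is actually sent.
BASE = "https://api.grid5000.fr/stable"

def cluster_url(site: str, cluster: str) -> str:
    """Return the Reference API URL for a cluster's JSON description."""
    return f"{BASE}/sites/{site}/clusters/{cluster}"

print(cluster_url("sophia", "uvb"))
# → https://api.grid5000.fr/stable/sites/sophia/clusters/uvb
```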

Reservation example:

fsophia$ oarsub -p "cluster='uvb'" -I
Model: Dell PowerEdge C6100
Date of arrival: 2011-01-04
CPU: Intel Xeon X5670 (Westmere, 2.93GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 96 GiB
Storage: disk0, 250 GB HDD SATA Western Digital WDC WD2502ABYS-1 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
  • ib1, InfiniBand, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment

Last generated from the Grid'5000 Reference API on 2022-03-30 (commit be3d572530)