Grenoble:Hardware

From Grid5000

Revision as of 19:50, 19 May 2018


Summary

3 clusters, 103 nodes, 1048 cores, 5.3 TFLOPS

edel (default queue, arrived 2008-10-03): 68 nodes, 2 x Intel Xeon E5520 (4 cores/CPU), 24 GB memory; storage: 119 GB SSD on edel-[1,5-6,8-9,12,15-16,19,21,23-25,28-29,32,35,37,39-43,46,48-50,52,55-57,59,62-63,65-72], 59 GB SSD on edel-[2-4,7,10-11,13-14,17-18,20,22,26,30-31,33-34,36,38,44,47,51,53,58,60,64]; network: 1 Gbps Ethernet + 40 Gbps InfiniBand

genepi (default queue, arrived 2008-10-01): 31 nodes, 2 x Intel Xeon E5420 (4 cores/CPU), 8 GB memory, 153 GB HDD; network: 1 Gbps Ethernet + 20 Gbps InfiniBand

yeti (testing queue, arrived 2018-01-16): 4 nodes, 4 x Intel Xeon Gold 6130 (16 cores/CPU), 768 GB memory, 446 GB SSD + 3 x 1.819 TB HDD; network: 10 Gbps Ethernet
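The headline totals can be re-derived from the per-cluster figures; a minimal sketch, with the cluster tuples transcribed from the summary above:

```python
# Re-derive the summary totals from the per-cluster figures.
# Each tuple: (cluster, nodes, cpus_per_node, cores_per_cpu).
clusters = [
    ("edel",   68, 2, 4),   # 2 x Intel Xeon E5520, 4 cores/CPU
    ("genepi", 31, 2, 4),   # 2 x Intel Xeon E5420, 4 cores/CPU
    ("yeti",    4, 4, 16),  # 4 x Intel Xeon Gold 6130, 16 cores/CPU
]

total_nodes = sum(n for _, n, _, _ in clusters)
total_cores = sum(n * c * k for _, n, c, k in clusters)
print(total_nodes, total_cores)  # 103 1048, matching the summary line
```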

Cluster details

edel

68 nodes, 136 cpus, 544 cores, split as follows due to differences between nodes (json)

edel-[1,5-6,16,37,42] (6 nodes, 12 cpus, 48 cores)
Model: Bull bullx B500 compute blades
Date of arrival: 2008-10-03
CPU: Intel Xeon E5520 Nehalem 2.27GHz (2 CPUs/node, 4 cores/CPU)
Memory: 24 GB
Storage: 119 GB SSD SATA C400-MTFDDAA128M (driver: ahci, path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1)
Network:
  • eth0/enp1s0f0, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb
  • eth1/enp1s0f1, Ethernet, configured rate: n/c, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE], driver: mlx4_core
  • ib1, InfiniBand, configured rate: n/c, model: N/A, driver: mlx4_core - unavailable for experiment

edel-[2] (1 node, 2 cpus, 8 cores)
Model: Bull bullx B500 compute blades
Date of arrival: 2008-10-03
CPU: Intel Xeon E5520 Nehalem 2.27GHz (2 CPUs/node, 4 cores/CPU)
Memory: 24 GB
Storage: 59 GB SSD SATA C400-MTFDDAA064M (driver: ahci, path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1)
Network:
  • eth0/enp1s0f0, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb
  • eth1/enp1s0f1, Ethernet, configured rate: n/c, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE], driver: mlx4_core
  • ib1, InfiniBand, configured rate: n/c, model: N/A, driver: mlx4_core - unavailable for experiment

edel-[3,7,10-11,13-14,22,26,30,36,38,44,47,51,53,58,60,64] (18 nodes, 36 cpus, 144 cores)
Model: Bull bullx B500 compute blades
Date of arrival: 2008-10-03
CPU: Intel Xeon E5520 Nehalem 2.27GHz (2 CPUs/node, 4 cores/CPU)
Memory: 24 GB
Storage: 59 GB SSD SATA C400-MTFDDAA064M (driver: ahci, path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1)
Network:
  • eth0/enp1s0f0, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb
  • eth1/enp1s0f1, Ethernet, configured rate: n/c, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE], driver: mlx4_core
  • ib1, InfiniBand, configured rate: n/c, model: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE], driver: mlx4_core - unavailable for experiment

edel-[8-9,12,15,19,21,23,28-29,32,39-40,43,48-49,52,55-57,59,62,65-69,71] (27 nodes, 54 cpus, 216 cores)
Model: Bull bullx B500 compute blades
Date of arrival: 2008-10-03
CPU: Intel Xeon E5520 Nehalem 2.27GHz (2 CPUs/node, 4 cores/CPU)
Memory: 24 GB
Storage: 119 GB SSD SATA C400-MTFDDAA128M (driver: ahci, path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1)
Network:
  • eth0/enp1s0f0, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb
  • eth1/enp1s0f1, Ethernet, configured rate: n/c, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE], driver: mlx4_core
  • ib1, InfiniBand, configured rate: n/c, model: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE], driver: mlx4_core - unavailable for experiment

edel-[17,34] (2 nodes, 4 cpus, 16 cores)
Model: Bull bullx B500 compute blades
Date of arrival: 2008-10-03
CPU: Intel Xeon E5520 Nehalem 2.27GHz (2 CPUs/node, 4 cores/CPU)
Memory: 24 GB
Storage: 59 GB SSD SATA C300-MTFDDAA064M (driver: ahci, path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1)
Network:
  • eth0/enp1s0f0, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb
  • eth1/enp1s0f1, Ethernet, configured rate: n/c, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE], driver: mlx4_core
  • ib1, InfiniBand, configured rate: n/c, model: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE], driver: mlx4_core - unavailable for experiment

edel-[4,18,31,33] (4 nodes, 8 cpus, 32 cores)
Model: Bull bullx B500 compute blades
Date of arrival: 2008-10-03
CPU: Intel Xeon E5520 Nehalem 2.27GHz (2 CPUs/node, 4 cores/CPU)
Memory: 24 GB
Storage: 59 GB SSD SATA C400-MTFDDAA064M (driver: ahci, path: node-was-not-available-to-retrieve-this-value)
Network:
  • eth0/enp1s0f0, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb
  • eth1/enp1s0f1, Ethernet, configured rate: n/c, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: N/A, driver: mlx4_core
  • ib1, InfiniBand, configured rate: n/c, model: N/A, driver: mlx4_core - unavailable for experiment

edel-[20] (1 node, 2 cpus, 8 cores)
Model: Bull bullx B500 compute blades
Date of arrival: 2008-10-03
CPU: Intel Xeon E5520 Nehalem 2.27GHz (2 CPUs/node, 4 cores/CPU)
Memory: 24 GB
Storage: 59 GB SSD SATA C400-MTFDDAA064M (driver: ahci, path: /dev/disk/by-id/wwn-0x500a07510202c04b)
Network:
  • eth0/enp1s0f0, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb
  • eth1/enp1s0f1, Ethernet, configured rate: n/c, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE], driver: mlx4_core
  • ib1, InfiniBand, configured rate: n/c, model: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE], driver: mlx4_core - unavailable for experiment

edel-[24,41,70] (3 nodes, 6 cpus, 24 cores)
Model: Bull bullx B500 compute blades
Date of arrival: 2008-10-03
CPU: Intel Xeon E5520 Nehalem 2.27GHz (2 CPUs/node, 4 cores/CPU)
Memory: 24 GB
Storage: 119 GB SSD SATA C400-MTFDDAA128M (driver: ahci, path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1)
Network:
  • eth0/enp1s0f0, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb
  • eth1/enp1s0f1, Ethernet, configured rate: n/c, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: N/A, driver: mlx4_core
  • ib1, InfiniBand, configured rate: n/c, model: N/A, driver: mlx4_core - unavailable for experiment

edel-[25,35,46,50,63,72] (6 nodes, 12 cpus, 48 cores)
Model: Bull bullx B500 compute blades
Date of arrival: 2008-10-03
CPU: Intel Xeon E5520 Nehalem 2.27GHz (2 CPUs/node, 4 cores/CPU)
Memory: 24 GB
Storage: 119 GB SSD SATA C400-MTFDDAA128M (driver: ahci, path: node-was-not-available-to-retrieve-this-value)
Network:
  • eth0/enp1s0f0, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb
  • eth1/enp1s0f1, Ethernet, configured rate: n/c, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 40 Gbps, model: N/A, driver: mlx4_core
  • ib1, InfiniBand, configured rate: n/c, model: N/A, driver: mlx4_core - unavailable for experiment
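The edel sub-groups above come from per-node differences reported by the Grid'5000 Reference API (the nodes.json endpoint linked in the cluster header). A sketch of how such a split can be computed, using a small inline sample in an assumed response shape (the field names here are illustrative, not the API's exact schema) rather than a live API call:

```python
import json
from collections import defaultdict

# Inline sample mimicking the assumed shape of a nodes.json response;
# "uid" and "storage" are illustrative field names, not the exact schema.
sample = json.loads("""
{"items": [
  {"uid": "edel-1", "storage": "119 GB SSD"},
  {"uid": "edel-2", "storage": "59 GB SSD"},
  {"uid": "edel-3", "storage": "59 GB SSD"},
  {"uid": "edel-5", "storage": "119 GB SSD"}
]}
""")

# Group node uids by their hardware description, mirroring the
# "split as follows due to differences between nodes" listings.
groups = defaultdict(list)
for node in sample["items"]:
    groups[node["storage"]].append(node["uid"])

for hw, uids in sorted(groups.items()):
    print(f"{hw}: {len(uids)} nodes ({', '.join(uids)})")
```

In practice the grouping key would combine every field that differs between nodes (storage model, disk path, InfiniBand card), which is why edel splits into several small groups rather than two.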

genepi

31 nodes, 62 cpus, 248 cores (json)

Model: Bull R422-E1
Date of arrival: 2008-10-01
CPU: Intel Xeon E5420 Harpertown 2.50GHz (2 CPUs/node, 4 cores/CPU)
Memory: 8 GB
Storage: 153 GB HDD SATA WDC WD1600YS-01S (driver: ata_piix, path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1)
Network:
  • eth0/enp5s0f0, Ethernet, configured rate: n/c, model: Intel 80003ES2LAN Gigabit Ethernet Controller (Copper), driver: e1000e - unavailable for experiment
  • eth1/enp5s0f1, Ethernet, configured rate: 1 Gbps, model: Intel 80003ES2LAN Gigabit Ethernet Controller (Copper), driver: e1000e
  • ib0, InfiniBand, configured rate: 20 Gbps, model: Mellanox Technologies MT26418 [ConnectX VPI PCIe 2.0 5GT/s - IB DDR / 10GigE], driver: mlx4_core
  • ib1, InfiniBand, configured rate: n/c, model: Mellanox Technologies MT26418 [ConnectX VPI PCIe 2.0 5GT/s - IB DDR / 10GigE], driver: mlx4_core - unavailable for experiment

yeti (testing queue)

4 nodes, 16 cpus, 256 cores (json)

Model: Dell PowerEdge R940
Date of arrival: 2018-01-16
CPU: Intel Xeon Gold 6130 Skylake 2.10GHz (4 CPUs/node, 16 cores/CPU)
Memory: 768 GB
Storage:
  • 446 GB SSD SAS PERC H740P Adp (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:2:0:0)
  • 1.819 TB HDD SAS PERC H740P Adp (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:2:1:0)
  • 1.819 TB HDD SAS PERC H740P Adp (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:2:2:0)
  • 1.819 TB HDD SAS PERC H740P Adp (driver: megaraid_sas, path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:2:3:0)
Network:
  • eth0/eno113, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno114, Ethernet, configured rate: n/c, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno115, Ethernet, configured rate: n/c, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth3/eno116, Ethernet, configured rate: n/c, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
Last generated from the Grid'5000 Reference API on 2018-05-19 (commit 6cc091305)