Lille:Hardware: Difference between revisions
Revision as of 09:41, 3 April 2024
See also: Network topology for Lille
Summary
- 4 clusters
- 29 nodes
- 1024 CPU cores
- 48 GPUs
- 284672 GPU cores
- 9.0 TiB RAM
- 71 SSDs and 48 HDDs on nodes (total: 313.02 TB)
- 57.8 TFLOPS (excluding GPUs)
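As a cross-check, the summary totals above can be recomputed from the per-cluster rows below. This is a minimal sketch; the CUDA core counts per GPU model (3584 for the P100, 5120 for the V100, 6912 for the A100) are taken from Nvidia's published specifications and are not stated on this page.

```python
# Recompute the Lille site summary from the per-cluster specs on this page.
CLUSTERS = {
    # name: (nodes, cpus_per_node, cores_per_cpu, ram_gib_per_node, gpus_per_node)
    "chiclet":  (8, 2, 16, 128, 0),
    "chifflot": (8, 2, 12, 192, 2),
    "chirop":   (5, 2, 32, 512, 0),
    "chuc":     (8, 1, 32, 512, 4),
}

nodes = sum(c[0] for c in CLUSTERS.values())
cpu_cores = sum(c[0] * c[1] * c[2] for c in CLUSTERS.values())
ram_gib = sum(c[0] * c[3] for c in CLUSTERS.values())
gpus = sum(c[0] * c[4] for c in CLUSTERS.values())

# chifflot is heterogeneous: nodes 1-6 carry 2 x P100 (3584 CUDA cores each),
# nodes 7-8 carry 2 x V100 (5120); chuc nodes carry 4 x A100 (6912).
gpu_cores = 6 * 2 * 3584 + 2 * 2 * 5120 + 8 * 4 * 6912

print(nodes, cpu_cores, gpus, gpu_cores, ram_gib // 1024)
```

Running this reproduces the summary figures: 29 nodes, 1024 CPU cores, 48 GPUs, 284672 GPU cores, and 9 TiB of RAM.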
Clusters summary
Default queue resources
| Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs/node | CPU model | Cores | Architecture | Memory | Storage | Network | Accelerators |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| chiclet | | 2018-08-06 | 2018-07-27 | 8 | 2 | AMD EPYC 7301 | 16 cores/CPU | x86_64 | 128 GiB | 480 GB SSD + 2 x 4.0 TB HDD* | 2 x 25 Gbps | |
| chifflot | | 2018-08-01 | 2018-07-17 | 8 | 2 | Intel Xeon Gold 6126 | 12 cores/CPU | x86_64 | 192 GiB | 480 GB SSD + 480 GB SSD* + 4 x 4.0 TB HDD* | 2 x 25 Gbps | [1-6]: 2 x Nvidia Tesla P100 (16 GiB); [7-8]: 2 x Nvidia Tesla V100 (32 GiB) |
*: disk is reservable
**: crossed-out GPUs are not supported by Grid'5000 default environments
Testing queue resources
| Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs/node | CPU model | Cores | Architecture | Memory | Storage | Network | Accelerators |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| chirop | testing queue | 2024-01-25 | 2023-05-02 | 5 | 2 | Intel Xeon Platinum 8358 | 32 cores/CPU | x86_64 | 512 GiB | 1.92 TB SSD + 2 x 3.84 TB SSD | 2 x 25 Gbps | |
| chuc | testing queue | 2024-01-22 | 2023-05-02 | 8 | 1 | AMD EPYC 7513 | 32 cores/CPU | x86_64 | 512 GiB | 1.92 TB SSD + 3 x 1.92 TB SSD | 2 x 25 Gbps (SR-IOV) | 4 x Nvidia A100 (40 GiB) |
*: disk is reservable
**: crossed-out GPUs are not supported by Grid'5000 default environments
Clusters in the default queue
chiclet
8 nodes, 16 CPUs, 256 cores (json)

Reservation example:

flille$ oarsub -p chiclet -I
Model: Dell PowerEdge R7425
Manufacturing date: 2018-07-27
Date of arrival: 2018-08-06
CPU: AMD EPYC 7301 (Zen), x86_64, 2 CPUs/node, 16 cores/CPU
Memory: 128 GiB
Storage: 480 GB SSD + 2 x 4.0 TB HDD*
Network: 2 x 25 Gbps
chifflot
8 nodes, 16 CPUs, 192 cores, split as follows due to differences between nodes (json)

Reservation example:

flille$ oarsub -p chifflot -I

- chifflot-[1,4-5] (3 nodes, 6 CPUs, 72 cores)
Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage: 480 GB SSD + 480 GB SSD* + 4 x 4.0 TB HDD*
Network: 2 x 25 Gbps
GPU: 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB), compute capability 6.0
- chifflot-2 (1 node, 2 CPUs, 24 cores)

Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage: 480 GB SSD + 480 GB SSD* + 4 x 4.0 TB HDD*
Network: 2 x 25 Gbps
GPU: 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB), compute capability 6.0
- chifflot-3 (1 node, 2 CPUs, 24 cores)

Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage: 480 GB SSD + 480 GB SSD* + 4 x 4.0 TB HDD*
Network: 2 x 25 Gbps
GPU: 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB), compute capability 6.0
- chifflot-6 (1 node, 2 CPUs, 24 cores)

Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage: 480 GB SSD + 480 GB SSD* + 4 x 4.0 TB HDD*
Network: 2 x 25 Gbps
GPU: 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB), compute capability 6.0
- chifflot-[7-8] (2 nodes, 4 CPUs, 48 cores)

Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage: 480 GB SSD + 480 GB SSD* + 4 x 4.0 TB HDD*
Network: 2 x 25 Gbps
GPU: 2 x Nvidia Tesla V100-PCIE-32GB (32 GiB), compute capability 7.0
Clusters in the testing queue
chirop
5 nodes, 10 CPUs, 320 cores (json)

Reservation example:

flille$ oarsub -q testing -p chirop -I
Access condition: testing queue
Model: DL360 Gen10+
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-25
CPU: Intel Xeon Platinum 8358 (Ice Lake), x86_64, 2.60GHz, 2 CPUs/node, 32 cores/CPU
Memory: 512 GiB
Storage: 1.92 TB SSD + 2 x 3.84 TB SSD
Network: 2 x 25 Gbps
chuc
8 nodes, 8 CPUs, 256 cores, split as follows due to differences between nodes (json)

Reservation example:

flille$ oarsub -q testing -p chuc -I
- chuc-1 (1 node, 1 CPU, 32 cores)

Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage: 1.92 TB SSD + 3 x 1.92 TB SSD
Network: 2 x 25 Gbps (SR-IOV)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB), compute capability 8.0
- chuc-2 (1 node, 1 CPU, 32 cores)

Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage: 1.92 TB SSD + 3 x 1.92 TB SSD
Network: 2 x 25 Gbps (SR-IOV)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB), compute capability 8.0
- chuc-3 (1 node, 1 CPU, 32 cores)

Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage: 1.92 TB SSD + 3 x 1.92 TB SSD
Network: 2 x 25 Gbps (SR-IOV)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB), compute capability 8.0
- chuc-4 (1 node, 1 CPU, 32 cores)

Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage: 1.92 TB SSD + 3 x 1.92 TB SSD
Network: 2 x 25 Gbps (SR-IOV)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB), compute capability 8.0
- chuc-5 (1 node, 1 CPU, 32 cores)

Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage: 1.92 TB SSD + 3 x 1.92 TB SSD
Network: 2 x 25 Gbps (SR-IOV)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB), compute capability 8.0
- chuc-6 (1 node, 1 CPU, 32 cores)

Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage: 1.92 TB SSD + 3 x 1.92 TB SSD
Network: 2 x 25 Gbps (SR-IOV)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB), compute capability 8.0
- chuc-7 (1 node, 1 CPU, 32 cores)

Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage: 1.92 TB SSD + 3 x 1.92 TB SSD
Network: 2 x 25 Gbps (SR-IOV)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB), compute capability 8.0
- chuc-8 (1 node, 1 CPU, 32 cores)

Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage: 1.92 TB SSD + 3 x 1.92 TB SSD
Network: 2 x 25 Gbps (SR-IOV)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB), compute capability 8.0
Last generated from the Grid'5000 Reference API on 2024-04-03 (commit 5cab569506, https://gitlab.inria.fr/grid5000/reference-repository/commit/5cab569506)