Lille:Hardware
Revision as of 08:30, 3 April 2024
See also: Network topology for Lille
Summary
- 4 clusters
- 29 nodes
- 1024 CPU cores
- 48 GPUs
- 284672 GPU cores
- 9.0 TiB RAM
- 71 SSDs and 48 HDDs on nodes (total: 313.02 TB)
- 57.8 TFLOPS (excluding GPUs)
Clusters summary
Default queue resources
Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs | CPU model | Cores | Architecture | Memory | Storage | Network | Accelerators
---|---|---|---|---|---|---|---|---|---|---|---|---
chiclet | | 2018-08-06 | 2018-07-27 | 8 | 2 | AMD EPYC 7301 | 16 cores/CPU | x86_64 | 128 GiB | 480 GB SSD + 2 x 4.0 TB HDD* | 2 x 25 Gbps | 
chifflot | | 2018-08-01 | 2018-07-17 | 8 | 2 | Intel Xeon Gold 6126 | 12 cores/CPU | x86_64 | 192 GiB | 480 GB SSD + 480 GB SSD* + 4 x 4.0 TB HDD* | 2 x 25 Gbps | [1-6]: 2 x Nvidia Tesla P100 (16 GiB); [7-8]: 2 x Nvidia Tesla V100 (32 GiB)
chirop | | 2024-01-25 | 2023-05-02 | 5 | 2 | Intel Xeon Platinum 8358 | 32 cores/CPU | x86_64 | 512 GiB | 1.92 TB SSD + 2 x 3.84 TB SSD | 2 x 25 Gbps | 
*: disk is reservable
**: crossed GPUs are not supported by Grid'5000 default environments
Testing queue resources
Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs | CPU model | Cores | Architecture | Memory | Storage | Network | Accelerators
---|---|---|---|---|---|---|---|---|---|---|---|---
chuc | testing queue | 2024-01-22 | 2023-05-02 | 8 | 1 | AMD EPYC 7513 | 32 cores/CPU | x86_64 | 512 GiB | 1.92 TB SSD + 3 x 1.92 TB SSD | 2 x 25 Gbps (SR-IOV) | 4 x Nvidia A100 (40 GiB)
*: disk is reservable
**: crossed GPUs are not supported by Grid'5000 default environments
Clusters in the default queue
chiclet
8 nodes, 16 cpus, 256 cores (json)
Reservation example:
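A minimal interactive reservation for this cluster, sketched with the standard Grid'5000 OAR submission command (run from the Lille frontend; the walltime is an arbitrary example value):

```shell
# Reserve one chiclet node interactively for 2 hours
oarsub -p "cluster='chiclet'" -l host=1,walltime=2:00:00 -I
```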
Model: | Dell PowerEdge R7425 |
Manufacturing date: | 2018-07-27 |
Date of arrival: | 2018-08-06 |
CPU: | AMD EPYC 7301 (Zen), x86_64, 2 CPUs/node, 16 cores/CPU |
Memory: | 128 GiB |
Storage: | |
Network: | |
chifflot
8 nodes, 16 cpus, 192 cores, split as follows due to differences between nodes (json)
Reservation example:
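Since chifflot nodes differ in their GPUs, a reservation can filter on the GPU model. A sketch using OAR property filtering; the `gpu_model` property name and value are assumptions to be checked against the site's OAR properties:

```shell
# Reserve one chifflot node carrying V100 GPUs (chifflot-[7-8]) for 2 hours
oarsub -p "cluster='chifflot' and gpu_model='Tesla V100-PCIE-32GB'" -l host=1,walltime=2:00:00 -I
```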
- chifflot-[1,4-5] (3 nodes, 6 cpus, 72 cores)
Model: | Dell PowerEdge R740 |
Manufacturing date: | 2018-07-17 |
Date of arrival: | 2018-08-01 |
CPU: | Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU |
Memory: | 192 GiB |
Storage: | |
Network: | |
GPU: | 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB) Compute capability: 6.0 |
- chifflot-2 (1 node, 2 cpus, 24 cores)
Model: | Dell PowerEdge R740 |
Manufacturing date: | 2018-07-17 |
Date of arrival: | 2018-08-01 |
CPU: | Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU |
Memory: | 192 GiB |
Storage: | |
Network: | |
GPU: | 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB) Compute capability: 6.0 |
- chifflot-3 (1 node, 2 cpus, 24 cores)
Model: | Dell PowerEdge R740 |
Manufacturing date: | 2018-07-17 |
Date of arrival: | 2018-08-01 |
CPU: | Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU |
Memory: | 192 GiB |
Storage: | |
Network: | |
GPU: | 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB) Compute capability: 6.0 |
- chifflot-6 (1 node, 2 cpus, 24 cores)
Model: | Dell PowerEdge R740 |
Manufacturing date: | 2018-07-17 |
Date of arrival: | 2018-08-01 |
CPU: | Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU |
Memory: | 192 GiB |
Storage: | |
Network: | |
GPU: | 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB) Compute capability: 6.0 |
- chifflot-[7-8] (2 nodes, 4 cpus, 48 cores)
Model: | Dell PowerEdge R740 |
Manufacturing date: | 2018-07-17 |
Date of arrival: | 2018-08-01 |
CPU: | Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU |
Memory: | 192 GiB |
Storage: | |
Network: | |
GPU: | 2 x Nvidia Tesla V100-PCIE-32GB (32 GiB) Compute capability: 7.0 |
chirop
5 nodes, 10 cpus, 320 cores (json)
Reservation example:
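A non-interactive (batch) reservation sketch for this cluster; `./my_experiment.sh` is a hypothetical user script, not part of this page:

```shell
# Batch job: run a script on one chirop node for 1 hour
oarsub -p "cluster='chirop'" -l host=1,walltime=1:00:00 "./my_experiment.sh"
```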
Model: | DL360 Gen10+ |
Manufacturing date: | 2023-05-02 |
Date of arrival: | 2024-01-25 |
CPU: | Intel Xeon Platinum 8358 (Ice Lake), x86_64, 2.60GHz, 2 CPUs/node, 32 cores/CPU |
Memory: | 512 GiB |
Storage: | |
Network: | |
Clusters in the testing queue
chuc
8 nodes, 8 cpus, 256 cores, split as follows due to differences between nodes (json)
Reservation example:
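Because chuc is in the testing queue, a reservation must request that queue explicitly. A sketch with the standard OAR queue option:

```shell
# chuc is only in the testing queue, so -q testing is required
oarsub -q testing -p "cluster='chuc'" -l host=1,walltime=2:00:00 -I
```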
- chuc-1 (1 node, 1 cpu, 32 cores)
Access condition: | testing queue |
Model: | Apollo 6500 Gen10 Plus |
Manufacturing date: | 2023-05-02 |
Date of arrival: | 2024-01-22 |
CPU: | AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU |
Memory: | 512 GiB |
Storage: | |
Network: | |
GPU: | 4 x Nvidia A100-SXM4-40GB (40 GiB) Compute capability: 8.0 |
- chuc-2 (1 node, 1 cpu, 32 cores)
Access condition: | testing queue |
Model: | Apollo 6500 Gen10 Plus |
Manufacturing date: | 2023-05-02 |
Date of arrival: | 2024-01-22 |
CPU: | AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU |
Memory: | 512 GiB |
Storage: | |
Network: | |
GPU: | 4 x Nvidia A100-SXM4-40GB (40 GiB) Compute capability: 8.0 |
- chuc-3 (1 node, 1 cpu, 32 cores)
Access condition: | testing queue |
Model: | Apollo 6500 Gen10 Plus |
Manufacturing date: | 2023-05-02 |
Date of arrival: | 2024-01-22 |
CPU: | AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU |
Memory: | 512 GiB |
Storage: | |
Network: | |
GPU: | 4 x Nvidia A100-SXM4-40GB (40 GiB) Compute capability: 8.0 |
- chuc-4 (1 node, 1 cpu, 32 cores)
Access condition: | testing queue |
Model: | Apollo 6500 Gen10 Plus |
Manufacturing date: | 2023-05-02 |
Date of arrival: | 2024-01-22 |
CPU: | AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU |
Memory: | 512 GiB |
Storage: | |
Network: | |
GPU: | 4 x Nvidia A100-SXM4-40GB (40 GiB) Compute capability: 8.0 |
- chuc-5 (1 node, 1 cpu, 32 cores)
Access condition: | testing queue |
Model: | Apollo 6500 Gen10 Plus |
Manufacturing date: | 2023-05-02 |
Date of arrival: | 2024-01-22 |
CPU: | AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU |
Memory: | 512 GiB |
Storage: | |
Network: | |
GPU: | 4 x Nvidia A100-SXM4-40GB (40 GiB) Compute capability: 8.0 |
- chuc-6 (1 node, 1 cpu, 32 cores)
Access condition: | testing queue |
Model: | Apollo 6500 Gen10 Plus |
Manufacturing date: | 2023-05-02 |
Date of arrival: | 2024-01-22 |
CPU: | AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU |
Memory: | 512 GiB |
Storage: | |
Network: | |
GPU: | 4 x Nvidia A100-SXM4-40GB (40 GiB) Compute capability: 8.0 |
- chuc-7 (1 node, 1 cpu, 32 cores)
Access condition: | testing queue |
Model: | Apollo 6500 Gen10 Plus |
Manufacturing date: | 2023-05-02 |
Date of arrival: | 2024-01-22 |
CPU: | AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU |
Memory: | 512 GiB |
Storage: | |
Network: | |
GPU: | 4 x Nvidia A100-SXM4-40GB (40 GiB) Compute capability: 8.0 |
- chuc-8 (1 node, 1 cpu, 32 cores)
Access condition: | testing queue |
Model: | Apollo 6500 Gen10 Plus |
Manufacturing date: | 2023-05-02 |
Date of arrival: | 2024-01-22 |
CPU: | AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU |
Memory: | 512 GiB |
Storage: | |
Network: | |
GPU: | 4 x Nvidia A100-SXM4-40GB (40 GiB) Compute capability: 8.0 |
Last generated from the Grid'5000 Reference API on 2024-04-03 (commit 3a579ada98)