.. _DocProject:

*****************
Reference Systems
*****************

.. _ElCapitanSystemDescription:

El Capitan
==========

El Capitan is the CORAL-2 flagship supercomputer at Lawrence Livermore
National Laboratory (LLNL), commissioned in 2024. It is an HPE Cray system
designed for large-scale simulation and modeling, featuring 4th Gen AMD EPYC
CPUs and AMD MI300A APUs, which combine CPU and GPU compute in a single
package. El Capitan is deployed in the Secure Computing Facility (SCF) and is
a primary platform for the Advanced Simulation and Computing (ASC) program.

+--------------------------+----------------------------------------------+
| OS                       | TOSS 4                                       |
+--------------------------+----------------------------------------------+
| Scheduler                | Flux                                         |
+--------------------------+----------------------------------------------+
| Interconnect             | HPE Slingshot 11                             |
+--------------------------+----------------------------------------------+
| Peak Performance         | 2,792.9 PFLOPS (CPU+GPU)                     |
+--------------------------+----------------------------------------------+
| Login Nodes              | 16                                           |
+--------------------------+----------------------------------------------+
| Debug Nodes              | 56                                           |
+--------------------------+----------------------------------------------+
| Batch Nodes              | 11,424                                       |
+--------------------------+----------------------------------------------+
| Total Nodes              | 11,520                                       |
+--------------------------+----------------------------------------------+
| Total CPU Cores          | 1,069,056                                    |
+--------------------------+----------------------------------------------+
| Total GPUs (APUs)        | 44,544                                       |
+--------------------------+----------------------------------------------+
| Total System Memory      | 5,701,632 GiB                                |
+--------------------------+----------------------------------------------+

Node Architecture
-----------------

+--------------------------+----------------------------------------------+
| CPU                      | 4th Gen AMD EPYC                             |
+--------------------------+----------------------------------------------+
| CPU Cores/Node           | 96                                           |
+--------------------------+----------------------------------------------+
| GPU (APU)                | 4x AMD MI300A (CDNA 3) per node              |
+--------------------------+----------------------------------------------+
| GPU Memory/Node          | 512 GiB (shared across 4 APUs)               |
+--------------------------+----------------------------------------------+
| Node Memory              | 512 GiB                                      |
+--------------------------+----------------------------------------------+

`El Capitan Hardware Overview `_

.. _CTS-2SystemDescription:

CTS-2
=====

Dane is a CTS-2 class supercomputer at Lawrence Livermore National Laboratory
(LLNL), commissioned in 2023. It is based on Intel Sapphire Rapids processors
and is designed for high-performance computing workloads supporting the ASC
and Multiprogrammatic and Institutional Computing (M&IC) programs.

+--------------------------+----------------------------------------------+
| OS                       | TOSS 4                                       |
+--------------------------+----------------------------------------------+
| Scheduler                | Slurm                                        |
+--------------------------+----------------------------------------------+
| Interconnect             | Cornelis Networks                            |
+--------------------------+----------------------------------------------+
| Peak Performance (CPU)   | 10.7 PFLOPS                                  |
+--------------------------+----------------------------------------------+
| Peak Performance (Total) | 10.723 PFLOPS                                |
+--------------------------+----------------------------------------------+
| Login Nodes              | 8                                            |
+--------------------------+----------------------------------------------+
| Batch Nodes              | 1,496                                        |
+--------------------------+----------------------------------------------+
| Total Nodes              | 1,544                                        |
+--------------------------+----------------------------------------------+
| Total CPU Cores          | 167,552                                      |
+--------------------------+----------------------------------------------+
| Total System Memory      | 382,976 GiB                                  |
+--------------------------+----------------------------------------------+

Node Architecture
-----------------
+--------------------------+----------------------------------------------+
| CPU Architecture         | Intel Sapphire Rapids                        |
+--------------------------+----------------------------------------------+
| CPUs/Node                | 2                                            |
+--------------------------+----------------------------------------------+
| CPU Cores/Node           | 112 (56 per socket)                          |
+--------------------------+----------------------------------------------+
| Threads/Node             | 224                                          |
+--------------------------+----------------------------------------------+
| Memory/Node              | 256 GiB DDR5-4800                            |
+--------------------------+----------------------------------------------+
| Memory Channels/Node     | 16 (8 per CPU)                               |
+--------------------------+----------------------------------------------+
| Cache/Node               | 210 MB (105 MB per CPU)                      |
+--------------------------+----------------------------------------------+
| Max Turbo Frequency      | 3.8 GHz                                      |
+--------------------------+----------------------------------------------+
| TDP/CPU                  | 350 W                                        |
+--------------------------+----------------------------------------------+

`Dane Hardware Overview `_

.. _H100SystemDescription:

H100 System
===========

Matrix is a CTS-2 class GPU-accelerated supercomputer at Lawrence Livermore
National Laboratory (LLNL), commissioned in 2025. It is based on Intel
Sapphire Rapids CPUs and NVIDIA H100 GPUs, designed for advanced
high-performance computing and AI/ML workloads.
+--------------------------+----------------------------------------------+
| Vendor                   | Dell                                         |
+--------------------------+----------------------------------------------+
| Location                 | LLNL Collaboration Zone (CZ)                 |
+--------------------------+----------------------------------------------+
| Year Commissioned        | 2025                                         |
+--------------------------+----------------------------------------------+
| Class                    | CTS-2                                        |
+--------------------------+----------------------------------------------+
| OS                       | TOSS 4                                       |
+--------------------------+----------------------------------------------+
| Scheduler                | Slurm (GPU-scheduled)                        |
+--------------------------+----------------------------------------------+
| Interconnect             | InfiniBand                                   |
+--------------------------+----------------------------------------------+
| Peak Performance (CPU)   | 0.198 PFLOPS                                 |
+--------------------------+----------------------------------------------+
| Peak Performance (GPU)   | 3.800 PFLOPS                                 |
+--------------------------+----------------------------------------------+
| Peak Performance (Total) | 3.998 PFLOPS                                 |
+--------------------------+----------------------------------------------+
| Login Nodes              | 2                                            |
+--------------------------+----------------------------------------------+
| Batch Nodes              | 26                                           |
+--------------------------+----------------------------------------------+
| Debug Nodes              | 2                                            |
+--------------------------+----------------------------------------------+
| Total Nodes              | 30                                           |
+--------------------------+----------------------------------------------+
| Total CPU Cores          | 1,792                                        |
+--------------------------+----------------------------------------------+
| Total GPUs               | 112 (NVIDIA H100)                            |
+--------------------------+----------------------------------------------+
| Total System Memory      | 8,064 GiB                                    |
+--------------------------+----------------------------------------------+

Node Architecture
-----------------

+--------------------------+----------------------------------------------+
| CPU Architecture         | Intel Xeon Platinum 8480+ (Sapphire Rapids)  |
+--------------------------+----------------------------------------------+
| CPUs/Node                | 2                                            |
+--------------------------+----------------------------------------------+
| CPU Cores/Node           | 112 (56 per socket)                          |
+--------------------------+----------------------------------------------+
| Memory/Node              | 504 GiB DDR5                                 |
+--------------------------+----------------------------------------------+
| GPUs/Node                | 4x NVIDIA H100                               |
+--------------------------+----------------------------------------------+
| GPU Memory/Node          | 320 GiB (80 GiB per GPU)                     |
+--------------------------+----------------------------------------------+

`Matrix Hardware Overview `_
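The three systems above differ in their schedulers: Dane and Matrix use Slurm,
while El Capitan uses Flux. As a rough sketch of how a single-node launch maps
onto each scheduler (the partition name ``pbatch`` and the application names
are placeholders, not confirmed site settings):

```shell
# Slurm on Dane: one node, one task per CPU core (112), 60-minute limit.
# The partition name "pbatch" is an assumption, not a confirmed setting.
srun -N 1 -n 112 -p pbatch -t 60 ./my_app

# Slurm on Matrix is GPU-scheduled, so request GPUs explicitly:
# here, one task per H100 with all 4 GPUs of a node.
srun -N 1 -n 4 --gpus-per-node=4 -p pbatch -t 60 ./my_gpu_app

# Flux on El Capitan: a roughly equivalent single-node launch,
# one task per CPU core (96).
flux run -N 1 -n 96 ./my_app
```

These commands are scheduler front-end sketches only; consult each system's
job-submission documentation for the actual partitions, banks, and limits.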