Aluminum 0.2 Released
January 30, 2019
First released in September 2018, Aluminum provides a generic interface to high-performance communication libraries with a focus on allreduce algorithms. Blocking, non-blocking, and GPU-aware algorithms are supported. Aluminum also contains custom implementations of select algorithms to optimize for certain situations.
Improvements included in this release:
- Host-transfer implementations of standard collectives
- Experimental RMA Put/Get operations
- Improved algorithm specification, point-to-point operations, testing, and benchmarks
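Aluminum's central operation, the allreduce, can be illustrated with a short sketch. This is a serial Python simulation of the collective's semantics, not Aluminum's actual C++ API: each rank contributes a buffer, and every rank receives the elementwise reduction of all contributions.

```python
# Conceptual sketch of allreduce semantics (not Aluminum's API):
# each rank contributes a buffer; every rank ends up with the
# elementwise sum of all contributions.

def allreduce_sum(rank_buffers):
    """Elementwise sum across ranks; every rank receives the same result."""
    n = len(rank_buffers[0])
    reduced = [sum(buf[i] for buf in rank_buffers) for i in range(n)]
    # In a real allreduce, each rank receives its own copy of `reduced`.
    return [list(reduced) for _ in rank_buffers]

# Three "ranks", each contributing a 4-element buffer.
buffers = [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]]
results = allreduce_sum(buffers)
# Every rank sees [111, 222, 333, 444].
```

Aluminum's value lies in performing this exchange efficiently (including non-blocking and GPU-aware variants), which the sketch above deliberately ignores.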
CCT 1.0.4 Released
January 25, 2019
The Coda Calibration Tool (CCT) calculates reliable moment magnitudes for small- to moderate-sized seismic events. The v1.0.4 release includes several performance and stability improvements along with a few new features:
- Shared interaction between data plots and the map display
- Data point selection highlights relevant elements on the map (events/stations) and vice-versa
- Support for arbitrary user-defined lists of WMS 1.3.0-compliant map tile servers/layers for display on the map
- Automatic display of context-sensitive information based on which tab the user is actively looking at
SUNDIALS 4.0.2 Released
January 23, 2019
This release is a patch with many changes, including (but not limited to):
- unified linear solver interfaces in all SUNDIALS packages
- encapsulated nonlinear solvers in all SUNDIALS integrators
- reorganization of ARKode to allow for the development of new integration methods
- new ARKode stepper for two-rate explicit/explicit multirate infinitesimal step methods
- Fortran 2003 interfaces to CVODE and several SUNDIALS modules
- an OpenMP 4.5+ NVECTOR
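To ground what SUNDIALS integrators such as CVODE and ARKode do, here is a minimal sketch of time integration for an ODE initial value problem. This fixed-step RK4 loop is illustrative only; SUNDIALS provides adaptive, production-grade methods, and this is not its API.

```python
import math

# Illustrative fixed-step RK4 integrator for y' = f(t, y) -- the kind
# of problem SUNDIALS integrators solve with adaptive, higher-quality
# methods. Not SUNDIALS code.

def rk4(f, t0, y0, t_end, n_steps):
    """Integrate y' = f(t, y) from t0 to t_end with n_steps RK4 steps."""
    h = (t_end - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# y' = -y with y(0) = 1 has the exact solution y(1) = exp(-1).
approx = rk4(lambda t, y: -y, 0.0, 1.0, 1.0, 100)
```

The new nonlinear solver encapsulation and ARKode reorganization in v4.0 concern how implicit and multirate variants of steps like these are composed, which a fixed-step explicit sketch cannot show.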
New Repo: VisIt
January 19, 2019
VisIt (Visualization and Data Analysis for Mesh-based Scientific Data) is an interactive, parallel visualization and graphical analysis tool for viewing scientific data on Unix and PC platforms. While VisIt has been around for a while and enjoys a robust developer community, its repository is newly housed on GitHub. Stay up to date with the latest release notes.
With VisIt, users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images for presentations.
VisIt contains a rich set of visualization features so that you can view your data in a variety of ways. It can be used to visualize scalar and vector fields defined on 2D and 3D structured and unstructured meshes. VisIt was designed to handle very large data set sizes in the peta-scale range, yet it can also handle small data sets in the kilobyte range.
RAJA 0.7.0 Released
January 10, 2019
RAJA is a software abstraction that systematically encapsulates platform-specific code to enable applications to be portable across diverse hardware architectures without major source code disruption. The v0.7.0 release contains several major changes, new features, a variety of bug fixes, and expanded user documentation and accompanying example codes. Major changes include:
- RAJA::forallN and RAJA::forall methods were marked deprecated in the v0.6.0 release and have been removed.
- CUDA execution policies for use in RAJA::kernel policies have been significantly reworked and redefined to be much more flexible and provide improved run time performance.
- Improved support for loop tiling algorithms, CPU cache blocking, CUDA GPU thread local data, and shared memory
- Expanded documentation and example codes
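The loop tiling and cache blocking that RAJA's kernel policies express can be sketched in terms of iteration order. RAJA does this in C++ through execution policies; this Python version, with invented names, only illustrates how a 2D index space is visited tile by tile.

```python
# Hedged sketch of the loop-tiling idea behind RAJA kernel policies:
# visit a 2D index space in small tiles (good for cache reuse) rather
# than row by row. Invented names; RAJA expresses this in C++.

def tiled_indices(n_rows, n_cols, tile):
    """Yield (i, j) pairs tile by tile over an n_rows x n_cols space."""
    for ti in range(0, n_rows, tile):
        for tj in range(0, n_cols, tile):
            for i in range(ti, min(ti + tile, n_rows)):
                for j in range(tj, min(tj + tile, n_cols)):
                    yield i, j

# Every index is visited exactly once, just in a tile-friendly order.
order = list(tiled_indices(4, 4, 2))
# The first tile covers (0,0), (0,1), (1,0), (1,1).
```

The point of an abstraction layer like RAJA is that the tiling strategy (and its CPU/GPU mapping) can be changed via the policy without touching the loop body.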
New Repo: MemSurfer
January 10, 2019
MemSurfer computes bilayer membrane surfaces found in a wide variety of large-scale molecular simulations. The tool works independently of the type of simulation, operating directly on the 3D point coordinates; as a result, it can handle a variety of membranes as well as atomistic simulations.
Core functionality is written in Python and C++. Check out the GitHub repo for information about v0.1.
New Repo: Cardioid (Cardiac Simulation Toolkit)
December 28, 2018
Cardioid is a cardiac multiscale simulation suite spanning from subcellular mechanisms up to simulations of organ-level clinical phenomena. The suite contains tools for simulating cardiac electrophysiology, cardiac mechanics, and torso ECGs, along with cardiac meshing and fiber-generation tools.
This project’s history goes back a few years – it was a finalist for the 2012 Gordon Bell Prize – but only now is the code available as open source. Initially developed by a team of LLNL and IBM scientists, Cardioid divides the heart into a large number of manageable subdomains. This replicates the electrophysiology of the human heart, accurately simulating the activation of each heart muscle cell and cell-to-cell electric coupling.
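The cell-to-cell electric coupling mentioned above can be illustrated with a toy model. The sketch below, which is not Cardioid's code, treats a 1D fiber of cells and spreads membrane voltage between neighbors with an explicit finite-difference diffusion step; all parameter values are invented.

```python
# Toy illustration (not Cardioid): cell-to-cell electric coupling
# modeled as diffusion of membrane voltage along a 1D fiber of cells.

def couple_step(v, d, dt, dx):
    """One explicit diffusion step: dv/dt = d * d2v/dx2 (interior cells)."""
    new = v[:]
    for i in range(1, len(v) - 1):
        new[i] = v[i] + d * dt / dx**2 * (v[i - 1] - 2 * v[i] + v[i + 1])
    return new

# A stimulated cell in the middle of a resting fiber; its voltage
# spreads symmetrically to neighboring cells over a few steps.
v = [0.0] * 11
v[5] = 1.0
for _ in range(5):
    v = couple_step(v, d=0.1, dt=1.0, dx=1.0)
```

A production code like Cardioid couples such propagation to detailed ionic cell models and partitions the 3D domain across many processors, which is where the subdomain decomposition described above comes in.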
SUNDIALS 4.0 Released
December 20, 2018
This release includes unified linear solver interfaces in all SUNDIALS packages, encapsulated nonlinear solvers in all SUNDIALS integrators, a reorganization of ARKode to allow for the development of new integration methods, a new ARKode stepper for two-rate explicit/explicit multirate infinitesimal step methods, Fortran 2003 interfaces to CVODE and several SUNDIALS modules, an OpenMP 4.5+ NVECTOR, managed memory capabilities for the CUDA NVECTOR, and other improvements.
A patch release, v4.0.1, was also issued to fix a bug in ARKode that caused single-precision builds to fail to compile.
Read more about v4.0.0 and the complete SUNDIALS release history. Downloads are available from the SUNDIALS website and GitHub.
ScrubJay: A Bird's-Eye View of Computing Performance
December 10, 2018
ScrubJay, an open-source performance data analysis tool, helps ensure that the Laboratory’s HPC center lives up to its name. Check out this new writeup in LLNL’s magazine, Science & Technology Review. Fork the repo on GitHub.
ScrubJay is an intuitive, scalable framework for automatic analysis of disparate HPC data. ScrubJay decouples the task of specifying data relationships from the task of analyzing data. Domain experts can store reusable transformations that describe the projection of one domain onto another. The program also automates performance analysis. Analysts provide a query over logical domains of interest, and ScrubJay automatically derives the needed steps to transform raw measurements. This process makes large-scale analysis tractable and reproducible, thus providing crucial insights into HPC facilities.
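The derivation step described above can be sketched as a search over stored transformations. In this hypothetical example (all domain and transformation names are invented, and this is not ScrubJay's API), each transformation projects one domain onto another, and a query is answered by chaining them automatically.

```python
from collections import deque

# Hypothetical sketch of ScrubJay's core idea: stored transformations
# map one domain onto another, and the steps needed to answer a query
# are derived automatically. All names here are invented.

transforms = {
    ("node_id", "job_id"): "scheduler log join",
    ("job_id", "user"): "job table lookup",
    ("timestamp", "job_id"): "time-range overlap",
}

def derive_chain(start, goal):
    """BFS over the transformation graph; returns the steps to apply."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        domain, path = queue.popleft()
        if domain == goal:
            return path
        for (src, dst), name in transforms.items():
            if src == domain and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [name]))
    return None  # no chain of transformations exists

# Which user was on a given node? Derived via the job_id domain.
chain = derive_chain("node_id", "user")
```

Because analysts only state the domains they care about, the same stored transformations can be reused across many queries, which is what makes the analysis reproducible.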
ESGF Installer 3.0 Beta Released
December 06, 2018
The long-awaited Earth System Grid Federation (ESGF) installer software v3.0 beta release is here!
LLNL’s William Hill unveiled the beta at ESGF’s annual conference in Washington, DC. His slides can be downloaded here – look for the second link on Day 3.
v3.0 is a complete rewrite of the ESGF installation stack and includes a 2.x migration script to upgrade an existing node. This release addresses several long-standing problems such as a lack of error handling, lack of extensibility, and a complicated installation process. Note that the Conda environment must be activated to run this installer. Documentation is updated accordingly.
vCDAT 1.0 Released
December 01, 2018
LLNL’s Community Data Analysis Tools (CDAT) provide a synergistic approach to climate modeling, allowing researchers to advance scientific visualization of large-scale climate data sets.
vCDAT is part of the CDAT suite. After beta testing, the new v1.0 is now available on GitHub. vCDAT is a desktop application that provides the graphical frontend for the CDAT package, using CDAT’s VCS and CDMS modules to render high-quality visualizations within a browser.
With vCDAT you can export and import custom colormaps in multiple image formats. Tutorials and documentation are provided here. Installation requires Anaconda.
Stay tuned for v2.0, coming in spring 2019, which will leverage the Jupyter Notebook UI.
New Repo: UnifyCR
November 26, 2018
Hierarchical storage systems are the wave of the future for HPC centers like LLNL’s Livermore Computing Complex. The Unify project aims to improve I/O performance by utilizing distributed, node-local storage systems. This design scales bandwidth and capacity according to the computer resources used by a given job. Furthermore, Unify avoids inter-job interference from parallel file systems or shared burst buffers.
Unify is a suite of specialized, flexible file systems – the first is available on GitHub with more on the way – that can be included in a user’s job allocations. A user can request which Unify file system(s) should be loaded and specify their mount points. Tests on LLNL’s Catalyst cluster show more than 2x improvement in write performance.
Like much of LLNL’s HPC performance improvement software, Unify is open source. The first Unify file system, UnifyCR (for checkpoint/restart workloads), is already available on GitHub. The team is working on another file system in the Unify “family” designed for machine learning workloads, in which large data sets need to be distributed quickly. Additional Unify file systems are in development.
DOE Machines Dominate Record-Breaking SC18
November 20, 2018
Supercomputing ‘18 (SC18), held Nov. 11–16 in Dallas, broke records for attendees and exhibitors and saw LLNL once again make its presence felt on the world’s biggest HPC stage. For the first time in five years, the U.S. captured the top two spots on the TOP500 List of the world’s fastest supercomputers: Summit at ORNL and Sierra at LLNL.
Highlights from the conference include:
- Student program keynote from Bruce Hendrickson, associate director for Computation
- Women in HPC workshop led by Elsa Gonsiorowski
- Student technical program vice-chaired by Olga Pearce
- Spack tutorial
- Flux workshop
- P3HPC (performance, portability, and productivity) workshop
- Talks by LLNL experts at industry booths (e.g., Penguin Computing, NVIDIA)
New Repo: NLPVis
November 19, 2018
Machine learning gurus, this one’s for you! NLPVis enables visualization of neural networks in natural-language ML models. Setup is straightforward and includes a pre-trained model.
New Computing Cluster Coming to LLNL
November 13, 2018
The Lab is looking forward to Corona, a new unclassified HPC cluster that will provide unique capabilities for Lab researchers and industry partners to explore data science, machine learning, and big data analytics. Corona will help NNSA assess future architectures, fill institutional needs to develop leadership in data science and machine learning capabilities at scale, and provide access to HPC partners.
Read more about LLNL’s commodity clusters.
Sierra Supercomputer Dedicated and Ranked
November 12, 2018
LLNL recently unveiled the new 125-petaflop-capable Sierra supercomputer. Sierra will serve the NNSA’s three nuclear security laboratories: LLNL, Sandia National Laboratories, and Los Alamos National Laboratory, providing high-fidelity simulations in support of NNSA’s core mission of ensuring the safety, security, and effectiveness of the nation’s nuclear stockpile. Its arrival represents years of procurement, design, code development and installation, requiring the efforts of hundreds of computer scientists, developers and operations personnel working in close partnership with IBM, NVIDIA, and Mellanox.
Just a few weeks later, Sierra rose from third to second place on the TOP500 list of the world’s fastest computing systems after reaching 94.6 petaflops on the Linpack benchmark test. The latest rankings were announced at SC18.
See Sierra’s system details and watch a video of the dedication.
Earth System Grid Federation's Annual Conference Coming Up
November 03, 2018
The LLNL-led international Earth System Grid Federation (ESGF) will meet December 3-7 in Washington, DC, to plan the future of Earth system data analysis and more. Registration info is available on the ESGF website along with the conference agenda. Fork this 2017 R&D 100 winner on GitHub.
Good Times at GitHub Universe
November 01, 2018
LLNL open-source champions Laura Weber, Ian Lee, and David Beckingsale attended the 2018 GitHub Universe conference in San Francisco. Billed as “a conference for the builders, planners, and leaders defining the future of software,” the event gave the team a chance to hear about upcoming GitHub enhancements and to network with GitHub Federal employees and other GitHub users.
One recurring theme was inner source, the use of open source software development best practices and the establishment of an open-source-like culture within organizations. With this practice the organization may still develop proprietary software, but internally opens up its development.
Audio/Video: Spotlight on Spack
October 30, 2018
HPC developers build software from source code and optimize it for the targeted computer’s architecture. LLNL-developed Spack handles the process of downloading a tool and all the necessary dependencies – which can be tens or hundreds of other packages – and assembles those components and ensures they are properly linked and optimized for the machine. Todd Gamblin, Spack’s lead developer, talks with the Exascale Computing Project’s Exascale podcast. Both audio and video versions are available. Total runtime is 13:54.
- Spack: a US Department of Energy lab-developed app store for supercomputers (1:24)
- Spack’s role in ECP (2:26)
- The origins and evolution of Spack (5:23)
- Spack’s benefits and advantages (7:42)
- The use of Spack on Mac laptops and Linux machines (10:53)
- The plans for Spack (11:18)
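The dependency assembly described above can be sketched as an ordering problem: dependencies must be built before the packages that need them. The package names below are invented examples, and Spack's real concretizer also resolves versions, variants, and compilers, which this sketch ignores.

```python
# Hedged sketch of the build-ordering problem Spack solves. Package
# names are invented; Spack's concretizer does far more (versions,
# variants, compilers, optimized builds per machine).

deps = {
    "myapp": ["hdf5", "mpi"],
    "hdf5": ["zlib", "mpi"],
    "mpi": [],
    "zlib": [],
}

def build_order(pkg, deps, done=None, order=None):
    """Post-order DFS: dependencies always precede their dependents."""
    if done is None:
        done, order = set(), []
    for d in deps[pkg]:
        if d not in done:
            build_order(d, deps, done, order)
    done.add(pkg)
    order.append(pkg)
    return order

order = build_order("myapp", deps)
# Shared dependencies (here, mpi) are built once, not per dependent.
```

With tens or hundreds of packages in a real dependency graph, automating this ordering (and the per-machine build configuration of each step) is exactly the chore Spack removes.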
Flux and Spack Events Coming to Supercomputing '18
October 27, 2018
LLNL staff are heading to Dallas, Texas, for the 30th annual Supercomputing Conference (SC18) on November 11–16. LLNL is leading 6 tutorials and 16 workshops with topics ranging from data analytics and data compression to performance analysis and productivity. LLNL-developed open-source tools Flux and Spack are subjects of a workshop and a tutorial, respectively. We hope to see you there!
Read more about our past experiences and tips for first-timers. A complete list of LLNL-led sessions can be found on the Computation website. All times are listed in Central Standard Time.