Machine learning (ML) has become an important part of many scientific computing projects, but effectively integrating ML techniques and massive datasets into large-scale HPC workflows is no easy task. To address this, an LLNL team developed Merlin. Merlin coordinates complex workflows through a persistent external queue server that lives outside of users’ high performance computing (HPC) systems but can communicate with nodes on users’ clusters. This dedicated task server ensures efficient execution of the tasks in a workflow, and because it is distributed, Merlin can run a single workflow across multiple machines simultaneously. By centralizing task handling, the external server improves resource allocation and overall workflow performance. The tool scales easily, making it a strong option for large-scale, ML-friendly workflows, which often demand far more concurrent simulations than a standard HPC workflow executes. Read more about Merlin at LLNL Computing or visit its GitHub repo.
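The producer/worker pattern behind an external task server can be sketched in a few lines. This is a conceptual illustration only, not Merlin's actual API: a central queue holds independent tasks, and multiple workers (standing in for compute nodes) drain it concurrently. The `simulate` function and task parameters here are hypothetical placeholders.

```python
import queue
import threading

# Central task queue: in Merlin this role is played by an external
# broker outside the HPC system; here an in-process thread-safe queue
# stands in for it.
task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def simulate(params):
    """Hypothetical stand-in for one simulation task."""
    return params * params

def worker():
    # Each worker pulls tasks until the queue is empty, mimicking
    # nodes consuming work from the shared server.
    while True:
        try:
            params = task_queue.get_nowait()
        except queue.Empty:
            return
        result = simulate(params)
        with results_lock:
            results.append(result)
        task_queue.task_done()

# Producer: enqueue many independent tasks.
for p in range(8):
    task_queue.put(p)

# Several workers (conceptually, on different machines) run at once.
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # prints [0, 1, 4, 9, 16, 25, 36, 49]
```

Because each task is independent, adding workers scales throughput without changing the workflow definition, which is the property that lets queue-based systems run far more concurrent simulations than a single scheduler allocation.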