Aurora

A brand-new class of system

Each machine generation presents a fresh challenge to U.S. computer manufacturers, from the racks to the processors to the networking to the I/O system. Likewise, fulfilling the science potential of each new computing architecture requires significant changes to today's software. The exascale initiative is, and will continue to be, guided by experts in the mathematics and computational science community, stewarded by the DOE's Office of Science, and operated at the cutting edge.

While researchers have been using supercomputers to solve big problems for years, the capabilities of exascale machines will open new opportunities to advance science and engineering.

Researchers will be able to run a greater diversity of workloads, including machine learning and data-intensive tasks, in addition to traditional simulation and modeling campaigns. Providing the data science software stack at exascale, including the high-level programming languages, frameworks, and I/O middleware that these workloads conventionally rely on, is a major part of the effort to deploy Aurora.

Science on day one

Argonne researchers are contributing to a broad range of activities to prepare Aurora for science on day one. To ensure key software is ready to run on the exascale system, scientists and developers participating in DOE’s Exascale Computing Project (ECP) and the ALCF’s Aurora Early Science Program (ESP) continue work to port and optimize dozens of scientific computing applications. With access to the Aurora software development kit and early hardware, the teams are working to improve the performance and functionality of various codes, frameworks, and libraries using the programming models that will be supported on Aurora. In addition, training events, such as workshops, hackathons, and webinars, have been an important mechanism for providing guidance on application development and disseminating the latest details on hardware and software.
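As an illustration of the kind of porting work involved, the sketch below offloads a simple vector addition with SYCL/DPC++, one of the programming models supported on Aurora. The kernel, array names, and problem size are purely illustrative and are not drawn from any ECP or ESP application.

// Minimal SYCL/DPC++ sketch (illustrative only): offload a vector addition
// to the default device, which is a GPU when one is available.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t n = 1 << 20;
  std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

  sycl::queue q{sycl::default_selector_v};

  {
    // Buffers hand the host data to the SYCL runtime for use on the device.
    sycl::buffer<float> bufA(a.data(), sycl::range<1>(n));
    sycl::buffer<float> bufB(b.data(), sycl::range<1>(n));
    sycl::buffer<float> bufC(c.data(), sycl::range<1>(n));

    q.submit([&](sycl::handler& h) {
      sycl::accessor A(bufA, h, sycl::read_only);
      sycl::accessor B(bufB, h, sycl::read_only);
      sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
      // One work-item per element; the runtime maps the range onto the GPU.
      h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        C[i] = A[i] + B[i];
      });
    });
  } // Buffers go out of scope here, copying results back to the host.

  std::cout << "c[0] = " << c[0] << std::endl; // expect 3
  return 0;
}

With the oneAPI compilers, a file like this is typically built with the DPC++/C++ compiler (for example, icpx -fsycl); production applications layer far more elaborate kernels, data management, and tuning on top of this basic pattern.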

Learn best practices for porting code to Aurora

Learn how ALCF staff experts are preparing for Aurora
 


Aurora System Specifications

Compute Node
2 Intel Xeon CPU Max Series processors: 64 GB HBM and 512 GB DDR5 on each; 6 Intel Data Center GPU Max Series GPUs: 128 GB HBM and RAMBO cache on each; Unified Memory Architecture; 8 Slingshot 11 fabric endpoints
Software Stack
HPE Cray EX supercomputer software stack + Intel enhancements + data and learning
GPU Architecture
6 Intel Data Center GPU Max Series; Tile-based chiplets, HBM stack, Foveros 3D integration, 7nm
CPU-GPU Interconnect
CPU-GPU: PCIe; GPU-GPU: Xe Link
System Interconnect
Slingshot 11; Dragonfly topology with adaptive routing; peak injection bandwidth of 2.12 PB/s; peak bisection bandwidth of 0.69 PB/s
Network Switch
25.6 Tb/s per switch, from 64 ports at 200 Gb/s each (25 GB/s per direction)
Theoretical Peak Performance
> 2 exaflops (double precision)
High-Performance Storage
230 PB, 31 TB/s, 1024 Nodes (DAOS)
Programming Models
Intel oneAPI, MPI, OpenMP, C/C++, Fortran, SYCL/DPC++ (see the sketch following this table)
Platform
HPE Cray EX supercomputer
Aggregate System Memory
10.9 PB
System Size
10,624 nodes
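As a sketch of how the programming models listed above are exercised, the example below offloads a simple loop to a GPU using OpenMP target directives. It assumes only a standard OpenMP compiler with offload support and is not specific to any Aurora application; the arrays and sizes are placeholders.

// Minimal OpenMP target-offload sketch (illustrative only): map two input
// arrays to the device, run the loop there, and map the result back.
#include <cstdio>
#include <vector>

int main() {
  const int n = 1 << 20;
  std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);
  double* pa = a.data();
  double* pb = b.data();
  double* pc = c.data();

  // "target teams distribute parallel for" launches the loop on the device;
  // the map clauses describe the host-to-device and device-to-host copies.
  #pragma omp target teams distribute parallel for \
      map(to: pa[0:n], pb[0:n]) map(from: pc[0:n])
  for (int i = 0; i < n; ++i) {
    pc[i] = pa[i] + pb[i];
  }

  std::printf("c[0] = %f\n", pc[0]); // expect 3.0
  return 0;
}

The same loop could equally be written in SYCL/DPC++ or kept on the CPU with conventional OpenMP threading; the choice among the supported models is usually driven by how an application is already structured.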


Revolutionary architecture

Aurora will feature several technological innovations, including a revolutionary I/O system, the Distributed Asynchronous Object Store (DAOS), to support new types of workloads. Aurora will be built on Intel Xeon CPU Max Series processors accelerated by Intel Data Center GPU Max Series GPUs, which are based on Intel's Xe compute architecture. The Slingshot 11 fabric and the HPE Cray EX supercomputer platform will form the backbone of the system. Programming techniques already in use on current systems will apply directly to Aurora. The system will be highly optimized across multiple dimensions that are key to success in simulation, data, and learning applications.
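One hedged illustration of that continuity: DAOS is intended to be reached through familiar interfaces and I/O middleware (for example MPI-IO, HDF5, or a POSIX layer), so a code that already performs parallel I/O in the style sketched below should carry over without source changes. The collective write is generic MPI-IO, not a DAOS-specific API, and the file path and sizes are placeholders.

// Generic MPI-IO sketch (illustrative only): each rank writes its block of
// data into one shared file with a collective call. Nothing here is
// DAOS-specific; the same code runs against any MPI-IO backend.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  const int count = 1024;  // doubles written per rank
  std::vector<double> block(count, static_cast<double>(rank));

  MPI_File fh;
  // "output.dat" is a placeholder path, not an actual Aurora location.
  MPI_File_open(MPI_COMM_WORLD, "output.dat",
                MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

  // Each rank writes at an offset determined by its rank; the collective
  // form lets the MPI-IO layer aggregate and optimize the requests.
  MPI_Offset offset = static_cast<MPI_Offset>(rank) * count * sizeof(double);
  MPI_File_write_at_all(fh, offset, block.data(), count, MPI_DOUBLE,
                        MPI_STATUS_IGNORE);

  MPI_File_close(&fh);
  MPI_Finalize();
  return 0;
}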

Developer information

We are identifying ways that you can begin preparing your code for the ALCF's exascale system. Please check out the links provided on this page to get started. We'll continue to provide updates as more information can be made available to the public.

For links to early adopter information, please see alcf.anl.gov/support-center/aurora

Aurora Early Science Program

The Aurora Early Science Program will prepare key applications for Aurora’s scale and architecture, and will solidify libraries and infrastructure to pave the way for other production applications to run on the system.

The program has selected 15 projects, proposed by investigator-led teams from universities and national labs and covering a wide range of scientific areas and numerical methods.

In collaboration with experts from Intel and Cray, ALCF staff will train the teams on the Aurora hardware design and how to program it. This includes not only code migration and optimization, but also mapping the complex workflows of data-focused, deep learning, and crosscutting applications. The facility will publish technical reports that detail the techniques used to prepare the applications for the new system.

In addition to fostering application readiness for the future supercomputer, the Early Science Program allows researchers to pursue innovative computational science campaigns not possible on today’s leadership-class supercomputers.

Aurora Early Science Projects