Fighting for Performance: USU Computer Scientist Tackles Exascale Challenges

August 30th, 2022 • Mary-Ann Muffoletto
Steve Petruzza, assistant professor in Utah State University's Department of Computer Science, is among recipients of a $600,000 NSF collaborative research grant to address data input and output challenges with exascale computing. Credit: M. Muffoletto

This summer, Oak Ridge National Laboratory announced its Frontier supercomputer was the first machine to break the exascale barrier. What does that mean? First of all, a whole lot of zeros: Frontier demonstrated its ability to perform more than a quintillion calculations per second, or 1.1 exaflops of performance. (Quintillion: That's one followed by 18 zeros. Greased lightning fast.)

More exascale computing systems are on the way, with Argonne National Laboratory's Aurora slated to debut by the end of 2022, and Lawrence Livermore National Laboratory's El Capitan, along with systems in the European Union, expected in 2023.

Utah State University computer scientist Steve Petruzza says such mind-boggling computing power will fuel waves of innovation in a wide range of disciplines, including energy, cosmology, earth science, medicine, national security and more.

"Exascale computing will distill data with increasing resolution, fidelity and speed, and offer the ability to perform ever-impressive large-scale simulations," he said. "But it also brings formidable challenges."

Among those challenges is a widening gap between red-hot computing power (the number of operations performed per unit of time) and data movement (processing, storing and analyzing data), which has yet to catch up.

Petruzza, with a colleague at the University of Alabama at Birmingham, has been awarded a three-year, $600,000 National Science Foundation collaborative research grant to develop tools and techniques to relieve the traffic jam. About half of the grant amount is awarded to USU.

"Our work will yield a scalable and extensible I/O runtime and tools for next-generation adaptive data layouts to alleviate the bottleneck," said Petruzza, assistant professor in USU's Department of Computer Science. "The proposed data layouts will be hierarchical, compressed and tunable, making them suitable to deal with the data deluge and the evolving landscape of high-performance computing."

These data layouts, he says, will help manage the growing flood of data while enabling efficient, seamless access.

"Data layouts describe how data is organized, and how it is written into memory or stored into files," Petruzza said. "The goal is to increase efficiency and flexibility, allowing users to pluck needed information from an ocean of data much faster."

A novel feature of the proposal from Petruzza and his colleague is a WebGPU-powered visualization system that will harness the progressive nature of the layout to enable interactive exploration of very large datasets in web browsers.

"Web GPU is a future web standard and JavaScript application programming interface or 'API' that will allow users to perform computation through a web browser," he said. "It's a next-generation tool for creating advanced 3D visualization and analysis directly to your browser."

The world is facing big challenges, Petruzza says, and the ability to interpret data quickly and accurately is critical to finding viable solutions.

"That is why leveraging large-scale, supercomputing power is so important," he said. "Our role, as computer scientists, is to make this process as easy, efficient, affordable and accurate as possible."

Provided by Utah State University