
Advancing scalability, efficiency, and reliability of over-parameterized machine learning models

December 15, 2025, by Jessalyn Tamez

Salar Fattahi has been awarded a $406,825 grant from the Office of Naval Research (ONR) for his project titled "Understanding Implicit Regularization of Gradient-based Algorithms in Over-Parameterized Models." As the sole principal investigator, Fattahi aims to improve the reliability, efficiency, and theoretical foundations of machine learning algorithms used in high-stakes defense applications such as sonar signal processing.

When more parameters aren't better: The over-parameterization problem

Modern machine learning models often contain far more parameters than are strictly necessary, a phenomenon known as over-parameterization. While this flexibility has driven recent advances in artificial intelligence, it also increases the risk that standard algorithms will overfit, producing solutions that fail in real-world or safety-critical environments. These risks are particularly acute in domains important to the Navy, where even subtle modeling errors can undermine mission-critical systems.
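
As a concrete illustration of that risk (a toy sketch of the general phenomenon, not code or data from Fattahi's project), the snippet below fits a noisy quadratic signal with both a matched low-degree polynomial and a heavily over-parameterized one; the larger model can match the noisy training points almost exactly yet typically predicts far worse on held-out points.

    # Toy illustration of over-parameterization and overfitting (not project code).
    import numpy as np

    rng = np.random.default_rng(0)

    # Ground truth is a simple quadratic, observed with noise at 12 points.
    x_train = np.linspace(-1.0, 1.0, 12)
    y_train = x_train**2 + 0.1 * rng.standard_normal(x_train.size)

    # Held-out points from the noiseless quadratic, used to gauge generalization.
    x_test = np.linspace(-1.0, 1.0, 200)
    y_test = x_test**2

    # Degree 2 matches the true signal; degree 11 has enough coefficients to
    # interpolate all 12 noisy training points, i.e. it is over-parameterized.
    for degree in (2, 11):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train MSE {train_mse:.2e}, test MSE {test_mse:.2e}")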

Fattahi's project aims to identify the conditions under which gradient-based algorithms naturally avoid overfitting, a phenomenon known as implicit regularization. Although recent research suggests that these algorithms tend to favor simpler, low-dimensional solutions, existing theoretical explanations are limited to idealized settings that do not accurately reflect modern large-scale machine learning practice.
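
The best-understood instance of this behavior is linear regression with more parameters than data points: plain gradient descent started from zero converges to the minimum-norm solution among the infinitely many that fit the data exactly. The sketch below, a textbook illustration independent of the project's methodology, checks that numerically against the explicit pseudoinverse solution.

    # Toy illustration of implicit regularization in under-determined least squares.
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_params = 20, 100           # far more parameters than observations
    A = rng.standard_normal((n_samples, n_params))
    b = rng.standard_normal(n_samples)

    # Gradient descent on f(w) = 0.5 * ||A w - b||^2, started from zero so that
    # the iterates stay in the row space of A.
    w = np.zeros(n_params)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for this quadratic loss
    for _ in range(5000):
        w -= step * A.T @ (A @ w - b)

    # The minimum-norm interpolating solution, computed explicitly.
    w_min_norm = np.linalg.pinv(A) @ b

    print("training residual:", np.linalg.norm(A @ w - b))                    # ~ 0
    print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))   # ~ 0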

Strengthening high-stakes defense technology by building safer algorithms

If successful, this work will benefit a wide range of Navy-relevant applications. These include improving the efficiency and real-time decision-making capabilities of autonomous underwater vehicles (AUVs) and advancing sonar signal processing, both of which increasingly depend on large-scale, over-parameterized models.

This ONR-funded effort aims to:

  • Identify practical, general conditions that enable implicit regularization in over-parameterized models.
  • Develop faster, more efficient algorithms that take advantage of this naturally occurring behavior.
  • Test these algorithms across a range of real-world, large-scale machine learning problems, ensuring their relevance and reliability.

"Our goal is to understand why modern machine learning algorithms often find simple, reliable solutions even when models are extremely over-parameterized," said Fattahi. "We hope to then turn that insight into faster, more trustworthy methods. By doing so, we hope to strengthen the decision-making backbone of technologies that operate in high-stakes environments, where even small errors can have serious consequences."

The need for robust, trustworthy AI is underscored in recent national reports, including the Defense Science Board's study on autonomy and the 2023 Department of Defense Data, Analytics, and Artificial Intelligence Adoption Strategy. Fattahi is an assistant professor within the Industrial and Operations Engineering (IOE) Department at the University of Michigan (U-M).

Provided by University of Michigan Department of Industrial and Operations Engineering
