This Science News Wire page contains a press release issued by an organization and is provided to you "as is" with little or no review from Science X staff.

Schelble's ARO Agreement to Help Create More Secure Human-AI Teams

April 6th, 2026
Schelble's ARO Agreement to Help Create More Secure Human-AI Teams
University of Tennessee, Knoxville assistant professor Beau Schelble was awarded a cooperative agreement from the United States Army Research Office (ARO) to study how to prevent attacks that undermine team performance in human-AI teams. Credit: University of Tennessee

Right now, large language model artificial intelligences (AIs) like ChatGPT are tools. Like a calculator or a hammer, you use them to complete one task, then put them away until later.

However, AIs are gaining increased independence when contributing to collaborative work that human teammates rely on and work from. These AI teammates can quickly ingest and process large amounts of data, leaving humans to focus on skills AI can't match—like interpreting ambiguous outputs and applying prior experience to new contexts.

"AI presents a great opportunity for teaming because its inherent computational strengths and weaknesses often complement our own," said University of Tennessee, Knoxville assistant professor Beau Schelble, who leads the AI and Robotics for Collaborative Systems (ARCS) Lab in the Department of Industrial and Systems Engineering. "An effective human-AI team should achieve outcomes that either exceed what either could accomplish alone or enable what neither could accomplish independently."

In September of 2025, Schelble was awarded a cooperative agreement from the United States Army Research Office (ARO) to study how to prevent attacks that undermine team performance in human-AI teams by targeting task accuracy, coordination, trust, and situation awareness—and how to respond after such attacks. Schelble is serving as the principal investigator for the cooperative agreement alongside co-investigators from the United States Military Academy Army Cyber Institute, MAJ Allyson Hauptman, and the University of Michigan, Ann Arbor, Professor Lionel Robert.

Schelble has been studying the human-AI teaming space for nearly a decade. He sees the potential for human-AI teams (HATs) to revolutionize decision-making in manufacturing, nuclear energy, disaster recovery, healthcare, and more…as long as the AIs can be trusted to work in the team's favor.

"We still lack awareness of exactly how AI systems work to reach a specific outcome," said Sarah Mendoza, a Ph.D. student in Schelble's lab. "You can ask a human questions to understand their decision-making process, but we can't do the same for many forms of AI."

In addition to existing cybersecurity risks, compromised AI teammates could interfere with team dynamics by spreading misleading information, misassigning responsibility, or creating confusion that undermines coordination.

"Human-AI teams are going to be a common component of the working environment across several industries very soon," said Yayun Tian, who worked as a software engineer in the healthcare industry for three years before pursuing her Ph.D. in Schelble's lab. "When AI teammates are attacked, it is critical to understand how to support HATs' ability to identify and mitigate the attack before significant harm occurs."

All Teammates Have Weaknesses

Right now, if an AI tool outputs false or misleading information (commonly known as "hallucinations"), users typically do not view it as nefarious. They would interpret it as a competency issue: a mismatch between the model's capabilities and what is being asked of it. They may reduce their use of the tool, but they will not dig deeper to find signs of intelligent, malicious intent.

That's a notable part of the problem, since knowing when an adversarial attack began is critical to repairing the resulting damage.

"If a compromised AI teammate gives one person inaccurate information, that teammate will be less effective because they are working from an inaccurate model of their environment and task," Schelble said. "A compromised AI teammate could even get human teammates to argue with one another by feeding them conflicting information."

In such cases, getting the team back on track will take more than simply repairing the AI teammate programmatically or retraining the model entirely.

Just like humans, current AI tools have trust repair strategies—tactics meant to soothe and reassure teammates after an error. When users tell an AI tool that its previous output was false or misleading, the tool outputs an (often convincing) apology.

Given the longer time frame and the greater emotional impact of the 'betrayal,' humans will likely be much less willing to accept such apologies from previously compromised AI teammates.

"That AI teammate is there for a reason; the team's performance is better when it's there," Schelble said. "So, how do you get the team working together effectively again? Our team's research focuses on understanding how human-AI teams can recognize and respond when an AI may be compromised, and how to reintegrate the AI once its security vulnerability has been resolved."

Understanding Compromised AI Teammates

The first part of the three-year ARO cooperative agreement involves interviewing public safety and cybersecurity experts who frequently work with AI systems of some form, including those from organizations such as the University of Tennessee and the City of Knoxville. These experts will help the team identify factors that influence the level of risk posed by a compromised AI teammate and how different attacks may manifest in a team setting.

They will also provide insights into how human teammates may react to compromised AIs—sometimes unintentionally.

"Many of the experts I've spoken to think of AI as a really helpful system that has the potential to reduce their workload," said Mendoza. "But some interviewees are very, very hesitant to integrate AI into their work. So even within a team, you may have different people assuming a competency issue versus the idea that the AI is betraying them."

For the next phase, the cooperative agreement team has programmed a commercially available video game with a team task. Human participants will come to the lab and work together with one of two LLM teammates to accomplish the task. One means well, while the other will try to get in the way.

The researchers will evaluate session data to characterize the factors that contribute to human players' ability to sense when their code-based teammate is working against them. Schelble and the team hope to identify, define, and leverage novel aspects of situation awareness, information-sharing, and shared knowledge present for human-AI teams—at both the team and individual level—that can be augmented to help people identify a compromised AI teammate, then quickly recover from and reverse any actions it has taken.

By the end of the three-year cooperative agreement, the researchers hope to have established the field's first fundamental understanding and guidelines for effectively identifying, preventing, and recovering from compromised AI teammates.

Long-term human-AI teaming success will ultimately rely on human teammates' ability to recognize when something is off, but Schelble and the team hope this research will point toward a viable system and methodology grounded in fundamental science that teams can use to identify problems.

"Like the master caution light on an aircraft flight deck lets you know that there's a problem, we want to give people the tools they need to augment their team's ability to recognize and prevent or recover from an attack," Schelble said. "That way, they can see through a compromised AI and bring the team back to the ground truth as quickly and efficiently as possible."

Provided by University of Tennessee at Knoxville

Citation: Schelble's ARO Agreement to Help Create More Secure Human-AI Teams (2026, April 6) retrieved 6 April 2026 from https://sciencex.com/wire-news/536929894/schelbles-aro-agreement-to-help-create-more-secure-human-ai-team.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.