They won't just follow orders: Robot swarms could gain a startling new kind of autonomy
Edited by Lisa Lock (scientific editor) and Robert Egan (associate editor)
Robot swarms are systems composed of many simple robots that coordinate without central control. Soon, they could be radically transformed by artificial intelligence. A new article published in Science Robotics by researchers from the Université Libre de Bruxelles (Belgium) and the CISPA Helmholtz Center for Information Security (Germany) suggests that foundation models—large AI systems trained on vast amounts of data, familiar to many through applications such as ChatGPT—could fundamentally change how robot swarms are designed, deployed, and operated.
Traditionally, robot control software is manually programmed by experts. This process is time-consuming and often inflexible: Programmers must anticipate many possible situations in advance, yet real-world deployments can present unexpected events, from robot sensor failures in warehouse operations to the unpredictable conditions that arise during earthquake response.
How AI could reshape swarm control
The viewpoint argues that embedding foundation models into control software could enable robot swarms to achieve levels of autonomy, flexibility, and adaptability that have so far been out of reach. For this purpose, each robot would be equipped with onboard foundation models that process sensor inputs, such as camera images or temperature readings, and generate corresponding collective actions.
This could allow swarms to adapt their behavior in real time, deviate from their original tasks when necessary, and interact more naturally with humans through speech or gestures. Consider a robot swarm monitoring a forest that suddenly locates an injured person. Thanks to the foundation model-based control, the swarm could autonomously switch to the more urgent task of providing assistance—not because it was explicitly programmed to do so but because the situation demanded it.
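The forest-monitoring scenario can be sketched in a few lines of Python. This is a minimal illustration, not the authors' system: the function `query_foundation_model`, the task names, and the "most urgent proposal wins" rule are all hypothetical stand-ins for what an onboard multimodal model and a swarm coordination layer might do.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One robot's view of the world (e.g., a camera label plus a temperature)."""
    camera_label: str
    temperature_c: float

def query_foundation_model(reading: SensorReading, current_task: str) -> str:
    """Stand-in for an onboard foundation model that maps raw sensor input
    to a proposed swarm-level action. A real system would query a multimodal
    model; here a simple rule emulates its decision for illustration."""
    if reading.camera_label == "injured person":
        return "assist_person"  # deviate from the original task
    return current_task         # otherwise keep monitoring

def control_step(readings: list[SensorReading], task: str = "monitor_forest") -> str:
    """One decision step for the swarm: each robot proposes an action and
    an urgent proposal from any robot overrides the routine task."""
    proposals = [query_foundation_model(r, task) for r in readings]
    return "assist_person" if "assist_person" in proposals else task

swarm_view = [
    SensorReading("tree", 18.0),
    SensorReading("injured person", 18.5),
    SensorReading("tree", 17.9),
]
print(control_step(swarm_view))  # → assist_person
```

The point of the sketch is the control flow, not the rule itself: no programmer enumerated "injured person" as a branch of the mission plan; in the envisioned architecture, the foundation model's general world knowledge would supply that judgment at runtime.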
Technical hurdles, risks, and ethics
Before this vision can become a reality, swarm robotics research still needs to overcome hardware limitations and better understand how foundation models can translate the behavior of individual robots into coordinated actions at the swarm level. Security also presents a serious concern. For example, hallucinated outputs, where a foundation model generates plausible but incorrect information, could pose significant reliability issues. The researchers therefore advocate a balanced research approach that considers both the possibilities and the associated risks, incorporating them into a comprehensive ethics-by-design framework.
"Foundation models may lay the foundation for robot swarms that autonomously execute responsible actions that consider how humans would react in a similar situation. At the same time, the probabilistic nature of foundation models raises fundamental questions about the trade-off between autonomy and controllability in autonomous systems," says Dr. Volker Strobel, lead author of the article and researcher at IRIDIA (the artificial intelligence lab at the Université Libre de Bruxelles).