A New Swarm AI Project Takes on Safety at Scale

AI, Honors + Awards, In the News, Research + Innovation / March 20, 2026

Author: Melissa Pappas

In homes around the world, artificial intelligence agents are writing emails, booking travel and advising on business ideas. But what happens when those agents leave the digital world and enter the physical one?

Penn Engineers are taking on that question through a new three-year, international collaboration on Swarm AI, led by Rahul Mangharam, Professor in Electrical and Systems Engineering (ESE) and principal investigator of the Safe Autonomous Systems Lab (xLAB). The project brings together the University of Pennsylvania, Carnegie Mellon University and the National University of Singapore to study how large teams of physical AI agents can cooperate, compete and act safely.

“Most of today’s AI agents live purely in software,” says Mangharam. “We’re moving toward physical AI, systems that don’t just generate answers, but act in the real world. And once AI operates in physical space, it has to deal with real constraints and real consequences.”

When AI Agents Have Bodies

Shifting the focus to embodied intelligence — robots on the ground, legged platforms, aerial systems and other mobile machines that must sense, decide and act in the real world — introduces a new level of complexity. Unlike digital agents, physical agents must obey the laws of physics. They operate with limited energy, imperfect sensing and real-time constraints. They must avoid collisions, respect safety boundaries and coordinate with teammates.

The research centers on cooperation and coordination in adversarial games — scenarios in which teams of agents must strategize against opponents while maintaining internal cohesion. How does a group of 10 agents operate against an opponent team of 10? What about 300 versus 1,000? How do you scale decision-making without micromanaging each individual machine?
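One way to see why such swarms can scale is that each agent can often decide using only local information. As a minimal, hypothetical sketch (not the project's actual algorithm), consider agents that each independently pick the nearest opponent to engage, with no central planner assigning roles:

```python
import math

def nearest_target(agent, opponents):
    """Greedy local rule: an agent picks the closest opponent it can
    sense. No messages, no central coordinator."""
    return min(opponents, key=lambda o: math.dist(agent, o))

def assign(team, opponents):
    # Every agent decides on its own, so the rule costs O(N * M)
    # in total and needs no per-machine micromanagement.
    return {i: nearest_target(pos, opponents) for i, pos in enumerate(team)}

team = [(0.0, 0.0), (5.0, 0.0)]          # hypothetical positions
opponents = [(1.0, 1.0), (6.0, 1.0)]
print(assign(team, opponents))           # → {0: (1.0, 1.0), 1: (6.0, 1.0)}
```

Greedy local rules like this are only a baseline; as the article notes, the research questions start where such rules break down, for instance when two teammates chase the same target.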
These are not just engineering challenges; they are questions at the intersection of game theory, machine learning and robotics.

“A key technical focus is understanding intent,” says Mangharam. “Agents must infer what other agents, human or machine, are trying to achieve and adjust accordingly. They must coordinate without centralized control and respond to dynamic, uncertain environments. This project brings together research in machine learning for multi-agent systems that use game theory to cooperate and compete.”

At large scales, this becomes both an algorithmic and a systems problem: how to design distributed algorithms that scale to tens, hundreds or even thousands of agents while making consistent, safe decisions in real time.

Ph.D. student Hongrui Zheng (left) and Rahul Mangharam (right) hold up the drones used in this research. Zheng’s work is central to this collaborative project as part of his dissertation.

A simulation of the swarm robots flying in a figure eight through obstacles showcases their ability to interact both as individuals and as a team to accomplish a task. The team’s drones are no bigger than a small tea saucer, able to fit in the palm of a hand and built to be lightweight, transportable and durable.

Neurosymbolic AI: Encoding Human Knowledge

One distinctive aspect of the project is its emphasis on neurosymbolic AI, which combines neural networks with structured, human-encoded knowledge.

“You can’t just throw AI at a problem and expect it to magically figure everything out,” says Mangharam. “There’s always human context — hard-earned domain knowledge, engineering realities, safety rules — that doesn’t live neatly in data and can’t simply be learned from scratch. If we want these systems to work in the real world, we have to teach them the fundamentals we already understand.
By building those physical limits, safety boundaries and operational principles directly into the system, we develop Physics-Informed Neural Networks, or PINNs, which give AI the necessary domain knowledge of how the world works: the expectations and the lines you can’t cross.”

Safety as a Design Principle, Not an Afterthought

Throughout the project, safety is not a secondary metric but a primary objective. Many commercial AI systems have prioritized performance and scale, with safety considerations layered on later. Mangharam and his collaborators are taking a different approach: embedding provable safety guarantees into the algorithms themselves.

“The real question is: how do we make sure these systems are safe with each other and safe for people?” asks Mangharam. “For us, safety can’t be an afterthought or something we trade for better performance. It has to be built in from the start, with clear guarantees that the system won’t create dangerous situations, whether that’s machines colliding with each other or making decisions that could harm someone. As these networks grow and get more complex, those guarantees matter even more.”

A Global Collaboration

The project combines complementary strengths across institutions. Carnegie Mellon University will focus heavily on large-scale physical demonstrations of coordinated robot teams. The National University of Singapore will contribute advances on the algorithmic side. At Penn Engineering, the effort is anchored in the PRECISE Center, with a focus on the core foundations for safe, distributed decision-making.
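The physics-informed idea described above can be sketched in miniature. In a PINN, the training loss combines a data-fit term with a penalty for violating known physics. The toy below (an illustration only, not the project's code) uses free fall as the known dynamics and a quadratic in time as a stand-in for the network, so the second derivative is available in closed form:

```python
import numpy as np

G = 9.81  # known physics: free-fall acceleration (illustrative constant)

def model(theta, t):
    """Toy stand-in for a neural net: height h(t) = a + b*t + c*t^2."""
    a, b, c = theta
    return a + b * t + c * t**2

def pinn_loss(theta, t_data, h_data, w=10.0):
    """Physics-informed loss = data misfit + physics residual.
    The residual penalizes violating h''(t) = -G, so the fitted model
    must respect the known dynamics, not just the (possibly noisy) data."""
    data = float(np.mean((model(theta, t_data) - h_data) ** 2))
    physics = (2.0 * theta[2] + G) ** 2   # h'' of the quadratic is 2c
    return data + w * physics

t = np.linspace(0.0, 1.0, 20)
h = 50.0 + 2.0 * t - 0.5 * G * t**2           # ideal drop trajectory
good = pinn_loss((50.0, 2.0, -G / 2), t, h)   # respects physics: loss near 0
bad = pinn_loss((50.0, 2.0, 0.0), t, h)       # ignores gravity: large loss
print(good, bad)
```

The weight `w` encodes how hard the physics constraint is enforced relative to fitting the data; real PINNs compute the residual with automatic differentiation at collocation points rather than in closed form.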
Mangharam works alongside faculty collaborators including Antonio Loquercio, Assistant Professor in ESE, whose research explores control and learning for embodied AI systems; Mingmin Zhao, Assistant Professor in Computer and Information Science (CIS), who develops sensing and perception techniques to infer intent and attention; and Linh Phan, Professor in CIS, who studies scalable distributed algorithms for rapidly changing, multi-agent environments.

“At the end of the day, this isn’t about building smarter robots just for the sake of it,” says Mangharam. “These swarms might have the potential to respond in real time to natural disasters, transportation challenges, infrastructure failures and beyond. That’s the real promise here: technology that can operate at scale in the real world and make our systems safer, more resilient and more efficient.”

Learn more about Mangharam’s work on his research website.

This project is sponsored by Singapore’s Home Team Science and Technology Agency (HTX) and led in partnership with ST Engineering.