From swarms to digital twins: AI’s future in defense is now
Story | June 11, 2025

Drones that hunt targets without human pilots. Artificial intelligence (AI) systems that predict when military equipment will break down weeks before it actually fails. Computer networks that keep working even when enemies jam communications or cut internet connections. These visions of the future are unfolding right now, as the defense industry moves beyond prototypes to deploy AI systems that military personnel can use in real operations.
The shift toward real-world use of AI-enabled battlefield technology represents a fundamental change in how the military thinks about AI. Instead of trying to replace human decision-makers, these new systems are designed to give warfighters better information faster, handle routine tasks automatically, and keep critical systems running when traditional networks fail.
However, making AI work reliably in fast-changing military environments raises many questions: How do you train computer systems when real combat scenarios are too dangerous to practice? How do you ensure autonomous weapons make the right choices in life-or-death situations? Perhaps most importantly, how do you build AI that soldiers and commanders will actually trust?
The solutions put forward by several defense contractors reveal not just what military AI could do someday, but what it’s already doing today on bases and in field operations around the world.
Bringing autonomous intelligence to smaller units
Red Cat (San Juan, Puerto Rico) hopes to fundamentally change how small military units gather intelligence with its Black Widow small uncrewed aerial platform. Working with partner company Palladyne AI through the Red Cat Futures Initiative, the companies recently demonstrated three uncrewed aerial systems (UASs) that conducted autonomous target tracking without human intervention.
“We conducted a three-drone test using Teal 2 and Black Widow doing autonomous collaboration of target tracking on the edge, so that really enables the operators to do other missions while the drones are out there collecting that intel,” says Tommy Brown, vice president of business development and sales at Palladyne AI. (Figure 1.)
The approach enables warfighters to define an area of interest and select available drones, then send them via ATAK [Android Team Awareness Kit] to investigate autonomously. The drones begin tracking whatever they find – people, vehicles, or other targets – and maintain surveillance without further human input.
What makes this autonomy possible is the Black Widow’s onboard computing power, specifically its Qualcomm RB5 processor that enables distributed collaboration without requiring connectivity back to a base station.
Black Widow is “very much like your cellphone” where “the warfighter has access to a variety of different applications that can help with a variety of missions,” says Stan Nowak, vice president of marketing at Red Cat. The platform serves as a hub for multiple AI applications developed by Red Cat Futures Initiative partners, including voice command control and target recognition capabilities. (Figure 1.)
[Figure 1 | Red Cat’s Black Widow is a modular small uncrewed aerial system (sUAS) designed for short-range reconnaissance in electronic warfare environments, featuring integrated AI, high-resolution EO/IR [electro-optical/infrared] sensors, and a field-repairable design. Image via Red Cat.]
The immediate benefit for small units can be substantial. “We are getting capability that used to only be available with much larger drones or other platforms,” Brown says. “That three- or four-person element now has the capability to do all of that just out of a backpack.”
Looking ahead, both companies see potential for multidomain operations where AI systems coordinate across different environments. Such capability means taking on new challenges, including integration with space-based radar systems and maritime platforms.
“There are different AI tools where we are … doing target recognition and tracking on the open sea,” Brown notes. “That’s a different model than you have to use on land.”
From platform control to mission autonomy
While Red Cat and Palladyne focus on small-unit operations, Shield AI (San Diego, California) is tackling a broader challenge: enabling autonomous systems to execute complete missions rather than just basic functions. The company’s Hivemind platform represents a shift from what the industry calls “platform autonomy” to “mission autonomy.”
“AI-powered autonomy is rapidly expanding beyond the air domain into maritime, space, and missile defense, driven by the need to extend the life of legacy systems and close critical operational gaps,” says Christian Gutierrez, vice president of Hivemind Engineering at Shield AI. “At the heart of this evolution is the shift from platform autonomy, where a system manages basic functions like navigation or propulsion, to mission autonomy, which allows systems to execute complex objectives such as reconnaissance, targeting, or electronic warfare based on real-time data.”
This distinction matters in contested environments where human operators may lose communication with deployed systems. Traditional autonomous platforms can navigate and avoid obstacles, but mission autonomy enables them to make tactical decisions about how to complete their assigned objectives without further human input.
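The platform-versus-mission distinction can be made concrete with a small sketch: a platform-autonomy controller only keeps the vehicle flying safely, while a mission-autonomy controller also chooses tactical actions against an assigned objective, even with the operator link down. This is a toy illustration under stated assumptions, not Shield AI's Hivemind logic; all class and action names are invented.

```python
from enum import Enum, auto

class Action(Enum):
    NAVIGATE = auto()
    AVOID_OBSTACLE = auto()
    CONTINUE_RECON = auto()
    ENGAGE_JAMMER = auto()

class PlatformAutonomy:
    """Platform autonomy: manages basic functions (navigate, avoid)."""
    def step(self, obstacle_ahead: bool) -> Action:
        return Action.AVOID_OBSTACLE if obstacle_ahead else Action.NAVIGATE

class MissionAutonomy(PlatformAutonomy):
    """Mission autonomy: also decides *how* to pursue the objective,
    using only onboard information -- no operator link required."""
    def __init__(self, objective: str):
        self.objective = objective

    def step(self, obstacle_ahead: bool,
             jammer_detected: bool = False) -> Action:
        # Safety-of-flight behaviors always take priority.
        if obstacle_ahead:
            return Action.AVOID_OBSTACLE
        # Tactical choice made onboard, against the assigned objective.
        if jammer_detected and self.objective == "electronic-warfare":
            return Action.ENGAGE_JAMMER
        return Action.CONTINUE_RECON

uas = MissionAutonomy(objective="electronic-warfare")
print(uas.step(obstacle_ahead=False, jammer_detected=True))
```

Note that the mission-level controller inherits the safety behaviors of the platform-level one; the added value is the tactical branch, which keeps producing useful decisions after communications are lost.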
Shield AI’s approach centers on building trust between human operators and autonomous systems through transparency and predictable behavior. “Trust requires more than performance. It demands predictability, safety, and transparent feedback,” Gutierrez explains.
The company has developed real-time mission debrief tools and validation frameworks to help operators understand how autonomous systems make decisions.
The company’s execution of crewed-uncrewed teaming demonstrations – including work with platforms like Firejet and the X-62A VISTA – shows that human-AI collaboration is “real, tested, and building confidence,” Gutierrez says.
Looking across the next 10 years, Gutierrez and Shield AI see the biggest opportunity in coordinated autonomous operations. “Large-scale teaming of autonomous systems and collective intelligence will define the next decade of military capability,” Gutierrez says. “As systems gain edge-level intelligence and can coordinate without centralized control, they’ll operate effectively even when comms and GPS [global positioning system] are denied.”
Shield AI has paired Hivemind with its MQ-35 V-BAT (vertical-takeoff-and-landing uncrewed aerial system), with Hivemind serving as the AI pilot, a combination that makes teams of V-BATs possible, according to Brandon Tseng, co-founder and president of Shield AI, in a 2024 interview with Military Embedded Systems.
This capability could change military operations by enabling autonomous systems to adapt collectively to battlefield conditions. (Figure 2.)
[Figure 2 | Shield AI’s Hivemind is an AI-enabled autonomy system that leverages coordinated control of uncrewed platforms in contested environments. Image via Shield AI.]
“AI autonomy shifts the focus from tactical ISR [intelligence, surveillance, and reconnaissance] to strategic coordination – enabling systems to adapt to battlefield dynamics and maximize mission outcomes,” Gutierrez adds, calling this “a generational shift in warfare and decision-making.” (Figure 3.)
[Figure 3 | A pilot with DARPA's Air Combat Evolution (ACE) program flies an aircraft integrated with Shield AI's Hivemind. Image courtesy Hivemind.]
Building trust through explainable AI
There is also a generational shift in how AI is perceived and leveraged throughout the military. Raytheon is addressing a fundamental challenge that underlies all military AI applications: ensuring human operators understand and trust AI-powered systems. The defense giant’s approach centers on explainable AI that provides transparency into how systems reach their conclusions.
“Trust is a key aspect to developing human comfort with AI-enhanced systems,” says Dr. Shane Zabel, director of artificial intelligence at Raytheon Intelligence and Space (Dallas, Texas). “Does the human operator understand how the AI-enhanced system will behave across the operational conditions it will be used in, and does the human trust the AI-enhanced system to operate as expected?”
Raytheon’s solution involves AI systems that don’t just provide answers, but explain their reasoning process.
“If the AI-enhanced system not only provides an answer but also provides feedback on how it came to the answer, how accurate the answer is believed to be, and any biases it may have, then humans have more information on which they can establish trust,” Zabel explains.
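The kind of output Zabel describes – an answer accompanied by a confidence estimate, the evidence behind it, and known caveats – can be sketched as a simple data structure. The toy scoring rule below stands in for a trained model; it is a hypothetical illustration, not Raytheon's implementation, and all names and thresholds are invented.

```python
from dataclasses import dataclass

@dataclass
class ExplainedResult:
    """What an explainable classifier reports alongside its answer."""
    answer: str
    confidence: float          # calibrated probability, 0..1
    top_evidence: list[str]    # inputs that drove the decision
    known_biases: list[str]    # caveats about training-data coverage

def classify_contact(signature: dict[str, float]) -> ExplainedResult:
    # Toy rule standing in for a trained model: score a radar return
    # from its speed (m/s) and radar cross-section.
    score = 0.7 * signature.get("speed", 0.0) / 300 + 0.3 * signature.get("rcs", 0.0)
    score = max(0.0, min(1.0, score))
    label = "fast-mover" if score > 0.5 else "slow-mover"
    return ExplainedResult(
        answer=label,
        confidence=round(score if label == "fast-mover" else 1 - score, 2),
        top_evidence=[f"speed={signature.get('speed')}",
                      f"rcs={signature.get('rcs')}"],
        known_biases=["trained mostly on clear-weather data"],
    )

result = classify_contact({"speed": 270.0, "rcs": 0.4})
print(result.answer, result.confidence, result.top_evidence)
```

The point is the shape of the return value: the operator sees not just `fast-mover` but how confident the system is, which inputs mattered, and where the model is known to be weak – exactly the information Zabel argues trust is built on.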
This transparency becomes even more important as AI capabilities expand to all military domains. Zabel sees edge AI – applications outside cloud or data center deployments – growing across “multiple domains including sea, land, air, space, and cyber” as processing power and data-storage capabilities improve in smaller, lower-power packages.
“Improved device performance in smaller, lower-power applications is a key enabler for [allowing] newer AI technologies to be applied to edge systems,” he notes, echoing the size, weight, and power (SWaP) considerations that drive much of military technology development.
Beyond combat applications, Raytheon sees notable potential for AI to enhance logistics and supply-chain operations. Zabel points to AI assistants that can help humans increase productivity with certain tasks such as information search and retrieval, document understanding, and language generation, along with visual inspection and optimization tasks.
“These technologies hold the promise to significantly enhance supply-chain and logistics operations capabilities,” he says. “With the AI assisting humans in these areas, we can make our supply chains more resilient, shorten the timelines for maintenance, repair and overhaul operations, optimize the allocation of material and supplies for a given operational tempo, and improve our global operational readiness.”
The company has observed military personnel becoming more comfortable with AI-powered systems as explainability improves, part of what Zabel describes as “a broader societal trend.” This growing acceptance is essential for high-stakes military applications where trust between human and machine can determine mission success or failure.
AI as the foundation for open systems
Integrating AI technology will also rely on open architecture designs that enable interoperability. Wind River (Alameda, California), which is involved in many modular open systems approach (MOSA) initiatives, is positioning AI as the key to making different military systems work better together. The company’s eLxr Pro platform is designed to create AI-ready infrastructure that can adapt quickly to new requirements while supporting the U.S. Department of Defense (DoD) MOSA push.
The challenge Wind River is trying to solve is fundamental: Military systems from different contractors often can’t talk to each other effectively, making upgrades expensive and time-consuming. The company believes AI can change that by serving as a translator and coordinator between different systems.
“eLxr is a purpose-built distribution that can address the realities of the aerospace, defense, and government industry that is finally solving the capability-agnostic roadblocks that have plagued the community, such as security vulnerabilities [and] unrealistic [equipment] end-of-life expectations,” says Dr. Justin Pearson, senior director of architecture and business growth for aerospace and defense at Wind River.
The platform builds on open-source software but adds commercial support to help defense customers deploy secure, reliable systems across different environments. It is designed to support not just today’s AI applications but future developments that Pearson describes as “Agentic AI and Physical AI.”
AI will do more than just help military systems follow open standards – it can accelerate the entire process. “AI systems built with open standards can be modularized and deployed across heterogeneous hardware platforms, aligning well with MOSA’s emphasis on interoperability,” Pearson explains.
Pearson envisions AI serving as “an intelligent glue layer that enables plug-and-play capability across vendors and systems.” Instead of spending months integrating systems from different manufacturers, AI could help them work together automatically, reconfiguring components as needed and predicting when parts need maintenance or replacement. Rather than focusing on individual applications like target recognition or autonomous navigation, AI’s biggest impact is coming from speeding up how the military develops and deploys new capabilities, Pearson says.
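In software terms, the “intelligent glue layer” idea resembles the classic adapter pattern: each vendor's feed is normalized into a common format before fusion, so a new vendor plugs in by supplying one adapter rather than point-to-point integrations. The sketch below is a hypothetical illustration of that pattern, not Wind River's eLxr architecture; vendor names and field layouts are invented.

```python
from abc import ABC, abstractmethod

class Track(dict):
    """Common track format the integration layer normalizes to."""

class VendorAdapter(ABC):
    @abstractmethod
    def to_common(self, raw: dict) -> Track: ...

class VendorARadar(VendorAdapter):
    def to_common(self, raw: dict) -> Track:
        # Hypothetical vendor A reports position in its own scheme.
        return Track(lat=raw["y_deg"], lon=raw["x_deg"], source="vendor-a")

class VendorBEoIr(VendorAdapter):
    def to_common(self, raw: dict) -> Track:
        # Hypothetical vendor B already uses lat/lon field names.
        return Track(lat=raw["latitude"], lon=raw["longitude"], source="vendor-b")

def fuse(feeds: list[tuple[VendorAdapter, dict]]) -> list[Track]:
    """The 'glue' step: every feed is normalized before fusion, so a
    new vendor plugs in by contributing a single adapter class."""
    return [adapter.to_common(raw) for adapter, raw in feeds]

tracks = fuse([
    (VendorARadar(), {"x_deg": -117.3, "y_deg": 34.1}),
    (VendorBEoIr(), {"latitude": 34.2, "longitude": -117.4}),
])
print([t["source"] for t in tracks])
```

What Pearson describes goes a step further – AI generating or reconfiguring these adapters automatically – but the interoperability payoff is the same: integration effort grows with the number of vendors, not with the number of vendor pairs.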
“The entire acquisition life cycle needs to have complementary AI strategies associated and [must] remain agile enough to react to what information humans end up validating from the AI output itself,” he notes.
In other words, AI’s greatest military value may not be in replacing human decision-makers, but in helping the entire defense system adapt and evolve faster than ever before.