The Non-Emitting Imperative: Why Human-in-the-Loop Requirements Are Creating a Critical Kill Chain Capability Gap
The modern battlespace has become a crucible where milliseconds matter and electromagnetic signatures mean detection, targeting, and destruction. As peer adversaries field increasingly sophisticated electronic warfare (EW) and signals intelligence capabilities, the requirement for human-in-the-loop (HITL) approval in AI-driven kill chain decisions has transformed from a prudent safeguard into a critical vulnerability. In stealth-dependent operations, whether drone swarms penetrating contested airspace or autonomous underwater vehicles (AUVs) conducting undersea warfare, HITL mandates are creating a dangerous capability gap that compromises both survivability and mission effectiveness.
The Kill Chain at the Speed of Relevance
The military kill chain, commonly framed as the Observe, Orient, Decide, Act (OODA) loop, has been fundamentally accelerated by AI and autonomous systems. Recent U.S. Air Force experiments demonstrate this stark reality: during the Shadow Operations Center-Nellis Experiment 3 in June 2025, AI systems generated targeting recommendations in under ten seconds, producing 30 times more options than human-only teams while maintaining comparable accuracy. As former Secretary of the Air Force Frank Kendall noted, "we're going to be in a world where decisions will not be made at human speed; they're going to be made at machine speed."
This acceleration is not merely incremental; it is existential. Hypersonic threats travel at Mach 5+, leaving mere seconds for engagement decisions. Mass drone swarms can saturate defenses with dozens of simultaneous attack vectors. In these scenarios, the latency introduced by HITL requirements, often minutes or even hours for satellite communications to reach remote operators, creates an unbridgeable temporal gap. RAND's 2019 analysis, reaffirmed in recent discussions, concluded that autonomy replicates manned kill chain steps more efficiently in contested areas where human intervention becomes operationally impractical.
Yet current doctrine often mandates human approval for lethal decisions, forcing autonomous systems to maintain persistent RF or satellite communications links that serve as beacons for adversary detection.
The Emissions Death Trap
The fundamental paradox is this: HITL requirements necessitate communications, and communications generate emissions. In an era where adversaries have weaponized the electromagnetic spectrum, these emissions have become liabilities that compromise the very stealth upon which modern autonomous systems depend.
Drone Swarms Under Fire
The vulnerability is particularly acute for drone swarms, which rely on distributed coordination to overwhelm defenses. At Silent Swarm 2025, an annual U.S. Navy demonstration of advanced EW for small unmanned systems, Northrop Grumman's Tactical Edge Electromagnetic Solutions (TEEMS) successfully geolocated and jammed frequency-agile drone emitters, simultaneously disabling three different radios across a wide frequency range. The demonstration proved that even small, mobile drones become detectable when they emit.
This validates concerns raised in recent Chinese research on electronic countermeasures against drone swarms, which identifies RF emissions as the primary attack vector. When drone swarms must maintain constant communication with human operators for approval authority, they create a "coordinated emission pattern" that electronic support measures can easily detect, locate, and target. As noted in DroneDesk's 2025 analysis, autonomous swarms operating without constant human loops minimize emissions that could correlate to operator locations, thus improving operational security.
Thales and Autonomous Devices' new drone-based EW solution, unveiled at DSEI 2025, further underscores the threat: compact, agile platforms can now deploy sophisticated electronic attack capabilities that exploit precisely these communication dependencies.
AUVs and the Acoustic Signature
The same principle applies underwater, where acoustic emissions betray AUV positions. Australia's Ghost Shark XL-AUV program, the first production-scale autonomous submarine fleet awarded in September 2025, explicitly designed out the need for real-time human control. Capable of intelligence, surveillance, reconnaissance, and strike missions "thousands of kilometres away from the Australian continent," Ghost Shark operates on pre-mission parameters and onboard AI processing rather than continuous acoustic communication.
The rationale is clear: in undersea warfare, any transmission, whether acoustic or optical, creates a detectable signature that can be exploited by adversary submarine detection networks. Research on underwater autonomous operations published in 2025 in an MDPI journal confirms that proximal policy optimization enables self-sustaining navigation without real-time comms, enhancing stealth in remote or hostile waters. The U.S. Navy's own experiments with AUV mine-hunting have demonstrated that autonomous detection and neutralization without tethered communications is not only feasible but operationally superior in denied environments.
Edge AI: The Enabler of True Autonomy
The technological enabler for non-emitting autonomy is edge AI, processing data directly on the platform rather than transmitting raw information to remote command posts for analysis. As the Center for Strategic and International Studies (CSIS) highlighted in October 2025, "running inference on forward systems allows them to act locally when links are cut," reducing both vulnerability and visibility.
Modern edge processors like NVIDIA's Jetson or Google's Coral enable sophisticated target recognition, mission adaptation, and swarm coordination without external connectivity. BonV Aero's 2025 overview confirms that autonomous drones using onboard analytics can detect anomalies mid-flight without external communications, ideal for high-risk operations in denied environments. This approach reduces dependence on contested networks while limiting electromagnetic exposure that could reveal command node locations.
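The operating pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and method names are invented for this sketch, not drawn from any real system): the platform runs inference onboard, acts on local authority when a detection clears its confidence floor, and queues findings for burst transmission only when a link already exists, so it never emits to request permission.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    confidence: float


class EdgePlatform:
    """Hypothetical sketch of edge-AI operation: raw sensor data never
    leaves the vehicle, and findings are queued rather than streamed."""

    def __init__(self, confidence_floor: float = 0.85):
        self.confidence_floor = confidence_floor
        self.report_queue: list[Detection] = []

    def run_inference(self, frame) -> Detection:
        # Stand-in for an onboard model (e.g. one deployed to an edge
        # accelerator); a real system would run a neural network here.
        label, conf = frame
        return Detection(label, conf)

    def step(self, frame, link_up: bool) -> str:
        det = self.run_inference(frame)
        if det.confidence < self.confidence_floor:
            return "continue-search"
        # Act on local authority within pre-mission parameters;
        # no transmission is required to proceed.
        self.report_queue.append(det)
        if link_up:
            # Burst-transmit queued summaries only when a link already
            # exists; the platform never emits to ask for approval.
            self.report_queue.clear()
            return "acted-and-reported"
        return "acted-silently"
```

The key design point is that loss of connectivity degrades only reporting, never the decision cycle itself: `step(..., link_up=False)` still acts, which is exactly the property CSIS describes as acting locally when links are cut.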
Ukraine's experience has become a cautionary tale: military officials describe the conflict as a "graveyard for command and control" where large command posts with antennas and generators stand out as high-value targets. When forward units must transmit all data to distant command posts for human decision-making, both the transmitting units and the command nodes become vulnerable.
The Counterargument: Three Fundamental Problems
The push for full autonomy must acknowledge legitimate concerns raised in recent analysis. The Air University's August 2025 article "Ready, Fire, Aim" identifies three fundamental problems that challenge the feasibility of tactical autonomy:
- Perception Failure: Even advanced autonomous systems routinely misinterpret complex environments, and combat scenarios are inherently uncertain. AI systems have yet to demonstrate reliable sense-making for situational awareness in truly novel situations.
- Data Limitations: Autonomous systems struggle to generalize beyond training data. War's essential uncertainty means militaries cannot produce comprehensive training data for all possible combat scenarios before hostilities begin.
- Adversarial Vulnerability: AI systems exhibit "precarious brittleness" when faced with adversarial attacks, from GPS spoofing to sensor deception.
These concerns are valid but addressable through a phased, disciplined approach: what we might term "autonomy with oversight" rather than "autonomy with mandatory HITL." The solution is not to abandon autonomy but to implement robust testing, validation, and fail-safe mechanisms that enable autonomous operation in specific, well-defined scenarios while maintaining strategic human command.
The Strategic Imperative: Contest or Capitulate
The fundamental question is not whether to deploy autonomous systems, but whether to deploy emitting or non-emitting ones. Peer adversaries like China have invested heavily in automated kill chain capabilities and EW systems designed to exploit communication dependencies. As CSBA's research on "Mosaic Warfare" emphasizes, decision-centric operations require AI and autonomous systems to implement strategies faster than human-only organizations can respond.
Mandating HITL in all scenarios cedes the advantage to adversaries who can:
- Detect and locate our autonomous assets through their emissions
- Jam or hijack command links, severing human control when needed most
- Outpace our decision cycles with automated systems unburdened by similar constraints
The U.S. Air Force's M-FAT program, launched in August 2025 to counter swarming drones, explicitly seeks to "blur the line between EW and cyber warfare by developing cyber countermeasures that disrupt small uncrewed aircraft, particularly command and control links." Ironically, this same vulnerability exists in our own forces when HITL requirements force persistent connectivity.
A Disciplined Path Forward
Removing HITL requirements does not mean removing human accountability or strategic oversight. Rather, it means:
- Mission-Level Human Command: Humans define objectives, rules of engagement, and constraints before launch. AI executes within those parameters without requiring real-time approval for each decision.
- Tiered Autonomy: Implement graduated autonomy levels based on mission type and threat environment. High-stakes scenarios like nuclear operations retain robust HITL safeguards; time-critical, stealth-dependent missions operate fully autonomously.
- Emission Management Protocols: Mandate "silent running" modes where autonomous systems operate without emissions for extended periods, only connecting briefly and unpredictably for high-priority updates.
- Adversarial Resilience: Invest in AI robustness research, diverse sensor fusion, and fail-safe behaviors that prevent catastrophic failure even if systems are compromised.
- After-Action Accountability: Implement comprehensive data logging and forensic capabilities to ensure autonomous decisions can be reviewed and analyzed post-mission, maintaining accountability without operational latency.
Australia's Ghost Shark program exemplifies this approach. The $1.1 billion production contract awarded to Anduril Australia in September 2025 delivers a system that operates autonomously for long-range missions while maintaining human command authority at the strategic level, precisely the model needed for contested operations.
Conclusion: Adapt or Perish
The capability gap created by rigid HITL requirements is not theoretical. It is measurable in seconds of latency, in detection ranges, and in mission failure rates. As adversaries field increasingly sophisticated EW capabilities and automated kill chains of their own, the choice becomes stark: embrace non-emitting autonomy or accept operational inferiority.
The ethical concerns about autonomous warfare are legitimate and require vigorous debate. But ethics must be balanced against mission effectiveness and survivability. A system that cannot reach its target or is destroyed before engagement due to emission vulnerabilities serves no ethical purpose. It merely represents wasted capability and lost opportunities.
The technology for robust, non-emitting autonomous operations exists today, validated by production programs like Ghost Shark and edge AI demonstrations. The task now is to evolve doctrine and policy to match technological reality, replacing mandatory HITL requirements with intelligent frameworks that enable autonomy where it matters most: in the contested, emission-sensitive battlespaces where future wars will be decided.