The Myth of the Human-in-the-Loop and the Reality of Cognitive Offloading
Artificial Intelligence (AI) is advancing at a remarkable clip, with breakthroughs, developments, and applications emerging so frequently that they can feel almost routine. New models render benchmarks obsolete faster than we can devise more accurate ways to measure their capabilities or even define the goalposts. Less than a decade ago, AlphaGo’s victory over Lee Sedol was regarded as a “watershed moment” for AI. Today, achievements of arguably similar magnitude, such as Gemini 2.5 Deep Think achieving gold-medal-level performance at the International Collegiate Programming Contest World Finals in September 2025, are reported with increasing regularity.
However, AI’s significance does not lie in any single breakthrough, nor in speculative visions of artificial general intelligence (AGI) or “singularity” scenarios. Nor does it lie in the promised first-mover advantages that states believe they might earn by “dominating” AI before others do. Like electricity, the railroad, or the telegraph in earlier eras, AI’s real value lies in being an enabling, general-purpose technology, one that generates efficiencies, makes messy data usable, and builds better tools for human decision makers.
The very promise of AI, however, creates a parallel challenge. The same systems that reduce cognitive load and augment human capacity also risk encouraging overreliance and the inappropriate offloading of judgment to machines, a phenomenon known as automation bias. On the battlefield, the stakes of this promise and peril are obvious. Militaries have long sought technologies to chip away at or better navigate Clausewitz’s “fog of war.” AI offers new methods and approaches that appear to do just that: digesting vast streams of sensor data into a “glass battlefield,” sharpening precision through tighter circular error probables (CEPs), and enabling autonomous platforms in the air, on the ground, and on (and under) the oceans to perform intelligence, surveillance, and reconnaissance (ISR) and strike missions. Ongoing military operations in Ukraine and the Middle East have already highlighted both the advantages of adopting these tools, particularly when used in tandem with exquisite legacy capabilities, and the dangers of failing to do so.
Technological Responses to the Fog of War
History offers analogies. Advances in microprocessors and sensor technology, for example, allowed increasingly accurate guidance to be embedded in individual munitions rather than in the entire platform, creating “smart bombs” and enabling fewer sorties with greater precision. AI represents the next step in the evolution of systems that shift more of the burden from the operator and embed it in the machines themselves, in the hope of enabling greater speed, precision, and efficiency. There is a throughline from the Norden bombsight of World War II, a mechanical aid that helped bombardiers calculate release points; to the laser-guided bombs of the Vietnam War, which allowed operators to “paint” targets with a laser; to the Joint Direct Attack Munition (JDAM) kits of the late 1990s, which added GPS and inertial guidance to conventional bombs so that the operator only had to enter coordinates; and finally to the automatic target-recognition and computer-vision capabilities being developed today, which allow operators to define the target set and “kill box.”
AI is the latest stage in the precision-strike complex, which accelerated during the Gulf Wars but whose ancestry dates back to experiments conducted in the 1930s and 1940s to improve targeting. Over decades of advances in guidance, sensors, and autonomy, those early ideas have been refined into weapons that can loiter, autonomously search for targets, and carry out single-use strikes, with AI now augmenting sensing, classification, and decision support.
Uses of AI extend this trajectory by pushing sensing, targeting, and elements of judgment deeper into the system, reshaping how tasks are organized and executed. Whether aggregating surveillance data or supporting decisions, AI does not replace human decision-making outright; rather, it shifts workflows in ways that, in a perfect world, enable both civilian and military users to spend more of their time on higher-level strategic choices, such as ensuring compliance with International Humanitarian Law (IHL), while offloading more of the technical and tactical minutiae to the system.
Human–Machine Interaction and Automation Bias
The common belief that the primary risks of military AI reside in the technology itself, or in creating ever more autonomous capabilities, is a myth. At first glance, this redistribution of tasks might appear to cede too much to machines, but in reality the real risks flow from how humans use, and sometimes over-rely on, those machines. In practice, what we see most frequently, even with the most autonomous capabilities, are human–machine teams, and the dangers that matter most emerge from the frictions of their interaction. Chief among these is the risk of unmitigated automation bias: the tendency to over-rely on an automated system or to over-delegate cognitive tasks to a machine, even in the face of contradictory evidence. In national security contexts, this dynamic can be particularly hazardous, increasing the likelihood of accidents, misjudgments, and even inadvertent escalation or unintentional conflict.
Automation bias can endanger the successful deployment of artificial intelligence and autonomous systems in military contexts by eroding the user’s ability to operate a given capability responsibly and effectively and to place justified trust and confidence in it. It is increasingly recognized as a core problem in international discussions on the responsible military use of AI and autonomy, referenced, for example, both in the Political Declaration on Responsible Military Use of AI and Autonomy and in the rolling text of the United Nations Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS). The central issue is not only whether AI systems themselves are free from bias, but whether the humans who interact with them can exercise appropriate judgment to ensure these systems are employed in line with their design and economy of mission and in compliance with IHL and the law of armed conflict.
The Human Dimension of Military AI
The notion of a “human-in-the-loop” as protection against the “risks” of AI engenders a false sense of security. Simply placing a human in front of a machine does not automatically mitigate the risks of deploying AI in military contexts, and mandating human control of a system has never been standard practice when delegating to tools and machines, whether in commercial or military domains. In many cases, it can even undermine the goal of reducing cognitive load on decision makers operating under stress, thereby increasing the risk of accidents that, in military settings, are often a matter of life and death.
Early aircraft required continuous physical effort from pilots to maintain stability, but the introduction of hydraulic systems and, later, autopilots transformed the pilot’s role. Rather than manually managing every aspect of flight, pilots increasingly supervise systems that handle stability, navigation, and even takeoff and landing. On U.S. Navy aircraft carriers, for example, F/A-18 pilots are instructed to keep their hands off the control stick during catapult launches, since the fly-by-wire computer is programmed to set the precise attitude and optimum angle of attack needed to clear the deck safely, and manual correction makes errors and crashes more likely. In that case, the system is more trustworthy than even elite pilots. The human is still responsible but does not exert direct operational control; in fact, doing so is undesirable, with the official Navy F/A-18 Flight Manual explicitly dictating that “the pilot should attempt to remain out of the loop but should closely monitor the catapult sequence.”
U.S. policy has deliberately adopted the standard of ensuring “appropriate levels of human judgment” in governing military applications of AI. This phrasing encompasses the full spectrum of systems where AI might be applied, including LAWS, and emphasizes that humans are not absolved of responsibility by virtue of a capability being autonomous. After all, just as militaries create rules today to assign responsibility when ships run aground or planes crash, they will create rules to assign responsibility to commanders and operators when there are accidents with autonomous systems.
Militaries may be uniquely positioned to lead in the effective governance and responsible use of AI, as they can exercise significant control over their users through testing, evaluation, verification, and validation (TEVV), as well as through organizational policies, doctrine, processes, and rigorous training. They also often possess decades of experience deploying highly automated, complex, and sometimes autonomous systems in pressurized environments. As a result, militaries could surpass the commercial sector in developing and implementing best practices for managing AI risks. In other words, even if we cannot change the human condition or build a flawless technical system, we can shape the conditions under which humans and machines operate.
The Benefits of Normalizing AI
AI’s value for militaries is less likely to derive from AGI or an AI-driven “wonder weapon” than from its use as a normal general-purpose technology with the potential to generate efficiencies, reduce cognitive load, and extend the trajectory of military and civilian innovation. However, because AI is a normal technology, it is also subject to the same bureaucratic slowdowns, risks of bias, and challenges of human–machine interaction that have long shaped technological adoption, particularly the adoption of complex systems that progressively shield humans from large swaths of data and information and, partly by design, facilitate cognitive offloading to machines. The trick is to strike a balance.
The task, then, is to embrace AI as the next step in the ongoing evolution of human–machine teams while ensuring that adoption is neither too slow nor too careless. Policymakers and militaries alike must move beyond the false comfort of “human-in-the-loop” and design systems, organizations, and doctrines that account for bias, foster accountability, and create the conditions for responsible use.