Exploring the Implications of Military Artificial Intelligence for Deterrence
This article was published alongside the PWH Report “The Future of Artificial Intelligence Governance and International Politics.”
There is a great deal of hype, especially among technologists and practitioners within the artificial intelligence (AI) sector, about the potentially transformative implications of military AI capabilities for deterrence and strategic stability. One exemplar of this view is the idea, expressed by Dan Hendrycks, Eric Schmidt, and Alexandr Wang, that the prospect of a great power rival developing superintelligent AI could create windows of opportunity to conduct a disarming preventive strike. Only through a carefully calibrated strategy of “mutual assured AI malfunction,” the authors posit, can great powers avoid dangerous instability and escalation dynamics. Others, however, have offered a more cautious perspective. Edward Geist, for instance, argues that AI may actually enhance the efficacy of military deception. Rather than enable perfect battlefield awareness (and therefore make counterforce strategies more reliably successful, potentially undermining deterrence), AI may instead act as a “fog of war machine,” thus enabling nuclear deterrence to hold.
The question of the implications of military AI for deterrence does not have one definitive answer, because addressing it requires elaborating several complex and weighty concepts. The purpose of this essay, therefore, is to engage in a brush-clearing and level-setting exercise that defines and scopes the concepts of “military AI” and “deterrence,” and subsequently offers a set of plausible propositions regarding how the former is likely to affect the latter. In sum, there is no one “deterrence” and no one “military AI.” How the latter affects the former will therefore depend on how key terms are defined and operationalized, how AI technologies are likely to develop, the strategic context, and other important variables.
Defining and Scoping Deterrence
Defining and scoping deterrence is, in itself, a significant undertaking; this quite cursory review cannot do the deterrence field sufficient justice. The deterrence literature in international politics has evolved over the course of several distinct waves. Writing in 1979, Robert Jervis identified three distinct waves of deterrence scholarship, from the early nuclear scholars of the immediate post-World War II era, to the use of basic game theoretic concepts by scholars in the 1950s and 1960s to craft the building blocks of nuclear strategy, and finally to the third wave of deterrence theorizing in the 1970s that focused on empirically testing deterrence theory, extending the literature beyond the nuclear realm to explore conventional deterrence, and problematizing assumptions about the rationality of decision-making. Scholars like Jeffrey Knopf and Amir Lupovici subsequently identified a fourth wave of deterrence theory in the post-9/11 era centered on the feasibility of deterring terrorism and so-called “rogue” states. We are now in an emerging fifth wave of deterrence theory focused on new technologies and domains, such as cyberspace and AI.
Deterrence, in Thomas Schelling’s classic formulation, is a coercive strategy that entails the combination of credible threats and assurances to prevent an adversary from taking an undesirable action. It is distinguished from compellence in terms of its relationship to the status quo (deterrence aims to uphold it, while compellent strategies seek to change undesirable behavior that has already commenced) and in terms of the role of military force (if force has already been applied, then deterrence has failed, whereas compellent strategies can accommodate the limited application of force). That said, scholarship on the cultural underpinnings of deterrence, such as work by Dima Adamsky, has noted that this binary treatment of deterrence with respect to the use (or non-use) of force reflects a U.S./Western perspective on deterrence theory, while states like Russia see deterrence as more of a continuum that applies across domains and stages of conflict.
Effective deterrence rests on a number of factors that can be elusive and sometimes in tension with one another. As Schelling enumerates, it demands clear communication of the deterrent threat; the threat must affect the adversary’s calculation of the costs, benefits, and risks of compliance versus defection, which requires knowing what the adversary values; the threat must be credible, which can be enhanced through efforts to send costly signals of capability and resolve; and finally the threat must contain an element of reassurance, such that the adversary believes its compliance will not result in punishment regardless. Such threats and inducements can be implemented through various mechanisms, including the classic approaches of deterrence by punishment and denial, as well as more contemporary strategies such as deterrence by entanglement and normative taboos. Ultimately, the effectiveness of a deterrent threat is in the eye of the beholder—it rests on the perception and understanding of the adversary. In turn, structural factors such as the information asymmetry that defines the anarchic international system, as well as state- and individual-level factors like differences in strategic cultures or individual psychology and (mis)perception, can undermine the success of deterrence strategies. This brief summary underscores the many factors that are necessary for deterrence strategies to succeed—which is perhaps why successful coercion can be so elusive. Extending this logic to the realm of military AI capabilities, we can explore the extent to which AI attenuates or enhances the correlates of deterrence success.
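To make this calculus concrete, the toy sketch below models the adversary’s choice as a simple expected-value comparison between compliance and defection. It is purely illustrative: all payoff values, probabilities, and variable names are hypothetical placeholders rather than part of any established deterrence model. The point it captures is that, in this stylized sense, deterrence holds only when credibility (the probability the threat is carried out) and reassurance (a low probability of being punished despite compliance) work together.

```python
# Illustrative sketch of the deterrence calculus described above.
# All payoff values and probabilities are hypothetical placeholders.

def expected_value_of_defection(benefit_of_defection: float,
                                cost_if_punished: float,
                                prob_threat_executed: float) -> float:
    """Adversary's expected payoff from defying the deterrent threat."""
    return benefit_of_defection - prob_threat_executed * cost_if_punished

def expected_value_of_compliance(value_of_status_quo: float,
                                 cost_if_punished_anyway: float,
                                 prob_punished_despite_compliance: float) -> float:
    """Adversary's expected payoff from complying; reassurance matters here."""
    return value_of_status_quo - prob_punished_despite_compliance * cost_if_punished_anyway

# Deterrence "holds" in this toy model when compliance beats defection.
defect = expected_value_of_defection(benefit_of_defection=10.0,
                                     cost_if_punished=30.0,
                                     prob_threat_executed=0.5)        # credibility
comply = expected_value_of_compliance(value_of_status_quo=2.0,
                                      cost_if_punished_anyway=30.0,
                                      prob_punished_despite_compliance=0.05)  # reassurance
print(f"defect={defect:.1f}, comply={comply:.1f}, deterrence holds: {comply > defect}")
```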
Defining and Scoping Military AI
It is also important to define and scope military AI. AI is not a military capability in itself. Rather, as Michael Horowitz notes, AI is a “general-purpose” technology. It is less like nuclear weapons than it is like electricity—a technology that is integrated with and enables other capabilities, processes, and functions. Some AI enthusiasts depict a futuristic vision of AI as a standalone, “superintelligent” capability that is likely to far exceed the intelligence of human beings. But even the believers in this form of AI (should it ever materialize) describe such a superintelligent AI in terms of its implications for other capabilities and issues (one example of this is the “AI 2027” concept, which predicts the trajectory of AI innovation and its impact on things like hacking, coding, bioweapons, robotics, forecasting, and politics). For the purposes of this essay, I scope the discussion to the extant state of technology, though many of the propositions I present could also apply to next-generation AI innovations.
Broadly defined, AI involves the use of computing capabilities to learn and make decisions in ways that simulate human intelligence. While military and intelligence organizations have been leveraging machine learning tools for decades, here I focus on more recent advancements in generative AI, which leverages deep learning, neural networks, and machine learning to enable computers to create human-like content. Large Language Models (LLMs) such as GPT-4, Claude, or Gemini represent one, but not the only, application of generative AI. These advanced AI tools rely on three underlying components: complex algorithms with billions of parameters that are developed, tested, and refined by engineers; large quantities of quality data on which these algorithms can be trained; and the computing power to process the models.
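For readers less familiar with how these components interact, the following toy sketch illustrates the relationship among parameters, data, and compute at a vastly reduced scale. It is a hypothetical, minimal stand-in (a small logistic-regression training loop), not a representation of how frontier generative models are actually built.

```python
# Toy illustration of the three components described above: a model defined by
# parameters, data to train on, and compute to run the training loop.
# This is a deliberately tiny stand-in; production generative models involve
# billions of parameters, vast curated datasets, and specialized hardware.
import numpy as np

rng = np.random.default_rng(0)

# "Data": synthetic inputs and labels standing in for a training corpus.
X = rng.normal(size=(1_000, 8))
true_w = rng.normal(size=8)
y = (X @ true_w + rng.normal(scale=0.1, size=1_000) > 0).astype(float)

# "Parameters": a single weight vector instead of billions of weights.
w = np.zeros(8)

# "Compute": gradient-descent steps that refine the parameters against the data.
lr = 0.1
for _ in range(500):
    preds = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
    grad = X.T @ (preds - y) / len(y)        # logistic-loss gradient
    w -= lr * grad

accuracy = np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y)
print(f"toy model accuracy: {accuracy:.2%}")
```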
As a general-purpose technology, AI has a range of potential military applications. Each of these is likely to have distinct deterrence dilemmas and challenges, making it difficult to offer a blanket determination about the implications of “military AI” for deterrence. Current and likely near-term future military applications of AI can be broadly organized into the following categories, though these are not meant to be comprehensive: AI tools that enhance intelligence collection and analysis by collecting and processing large volumes of data at speed and scale; AI-enabled decision-making support, where AI tools can be used to provide insights, make recommendations, model scenarios, red team various courses of action, or even delegate decision-making to AI agents; the integration of AI capabilities into kinetic weapons, including drones as well as potentially nuclear forces or nuclear command, control, and communications (NC3); and AI-enabled cyber operations, for both offensive and defensive applications.
While these all represent distinct applications of AI in a military context, there are also some common attributes or characteristics of military AI in general that have implications for deterrence.
One is that there is a great deal of uncertainty about the military applications of AI. This uncertainty manifests in several ways. One form pertains to the future trajectory of AI development: advances in generative AI have recently occurred at a rapid pace, and there is vociferous debate about the rate at which continued progress in AI models will occur. Another form relates to how AI is likely to be applied in future warfighting contexts, and what the optimal applications (and risks) of military AI are. Avi Goldfarb and Jon Lindsay, for instance, argue that AI will be most useful for military organizations in streamlining some aspects of military organization, planning, and logistics, but will not substitute for human judgment in the fog of war. Furthermore, uncertainty may be compounded by differences in strategic culture that affect conceptualizations of both deterrence and the implications of emerging technologies. Uncertainty matters for deterrence because it affects how actors make assessments about the consequences of cooperating or defecting, and how they understand and interpret (or misinterpret) adversary signals.
A second important characteristic of military AI that has implications for deterrence is the role of private actors and, relatedly, the dual-use nature of military AI. Innovations in AI are emerging largely from the private sector, rather than from military organizations. Private actors are at the forefront of the development of advanced AI capabilities, for both civilian and military purposes. Moreover, as a general-purpose technology, AI tools are inherently dual-use in nature. As Jane Vaynman and Tristan Volpe have shown, the dual-use nature of military technologies and the extent of their integration in civilian economies have implications for the viability of efforts to regulate such technologies through formal arms control regimes. There are also related implications for deterrence. For one, the operationalization of deterrence strategies will inevitably depend on the cooperation of the private sector, either as an instrument of deterrence or as the object of deterrence. Yet private sector actors may have different interests and preferences than those of the governments implementing such strategies.
Finally, the opacity of AI has implications for deterrence. Like uncertainty, opacity manifests in several ways. One is the opacity of the algorithms themselves: the outputs of generative AI models are something of a “black box,” such that it is difficult even for the engineers who developed the models to assess the chain of reasoning or logic that produced a particular decision, prediction, or outcome. Opacity is also relevant in terms of the proprietary nature of AI models and their underlying training data, which makes transparency challenging. Even purportedly “open” models do not provide complete transparency about their training data, parameters/model weights, and source code. Opacity, which is closely related to the concept of deception, is a fundamental confounder of deterrence in terms of assessing capability, intent, credibility, and so on.
To begin to map out the implications of military AI for deterrence, I disaggregate this question into two aspects. The first focuses on the object of deterrence—in other words, how should we think about the various potential aims of deterring military AI? The second focuses on the means of deterrence—to what extent are military AI capabilities likely to enhance or undermine deterrence strategies, regardless of whether the object is focused on deterring adversary AI per se, or other deterrence goals?
What is the Object of AI Deterrence?
One lens through which to explore the question of military AI and deterrence is to focus on the object of deterrence—the types of behavior and actors who should be deterred/dissuaded. There are several ways to consider this question.
Development versus deployment: Is the objective to deter the development of some future superintelligent AI capability that could be applied for military purposes (along the lines of that suggested by Hendrycks et al.)? Or, should we think about deterrence in terms of shaping behavior around the employment or use of a military AI capability?
There are likely serious impediments to deterring the former. One impediment is that there is even more uncertainty about future military AI applications than about contemporary ones. Clearly conveying a deterrent threat is challenging if the deterring state is not able to tangibly conceptualize the characteristics of some future superintelligent AI capability, or to define reliable indicators that an adversary is reaching a crucial tipping point in developing such a capability. Additionally, if there is a general consensus that developing such a capability will provide an enormous strategic advantage, there may not be effective deterrent tools that will dissuade such development in ways that affect the adversary’s cost-benefit calculus. This also assumes that development and non-development are binary states, such that it will be clear once a state has crossed a critical threshold, and it further assumes that development will inevitably go hand-in-hand with the effective adoption and integration of such capabilities into military organizations and processes. That said, strategies that can credibly hold at risk the enabling infrastructure and capabilities that are essential for continued advancements in AI—energy resources needed to power large data centers, the facilities themselves, access to advanced chip technology, and so on—may offer cross-domain mechanisms of shaping an adversary’s calculus.
In contrast, it may be more conceivable to deter specific uses of military AI tools in certain contexts or scenarios, in ways that operationalize traditional deterrence strategies tailored to an AI context. Challenges associated with this approach will likely center on uncertainty about military applications of AI and the opacity of AI (and therefore on defining what actions might constitute a deliberate violation of a deterrent threat).
General versus immediate deterrence: Is the objective to generally prevent an adversary from taking some unwanted action related to the use of military AI capabilities through the ongoing, persistent, credible threat to retaliate, or is the objective to prevent an adversary from taking an immediate, specific action using military AI in a particular crisis or context? While the literature traditionally sees general deterrence as “easier” to achieve than immediate deterrence, it is plausible that this may be inverted in the realm of emerging technologies. Here, the analogy to cyber deterrence may be illustrative: great powers have struggled to broadly deter the development and employment of malicious cyber capabilities, though deterring specific cyber attacks in certain contexts has been more effective. Given the ubiquity of AI technologies and their dual-use nature for both military and civilian purposes, general deterrence may be a bridge too far. However, states may nevertheless be able to craft discrete deterrence strategies to prevent unwanted applications of military AI in specific scenarios.
Who is the object of deterrence? If effective deterrence ultimately depends on the perception of the adversary, then differences in strategic culture that shape how states perceive and understand deterrence concepts more broadly, as well as AI in particular, will likely complicate deterrence in practice. The United States and China, for instance, have struggled to reach a shared understanding of what human control means in the context of AI-enabled weapons. Moreover, the object of deterrence may be the private sector firms developing the next generation of military AI capabilities, rather than the adversary government. The literature on cyber deterrence has demonstrated the challenges of deterrence in a strategic context defined by a diversity of actors, only some of which are governments. Similar confounders are likely to be present in an AI context.
Can AI Tools be Used as a Means of Deterrence?
A second lens to explore military AI and deterrence is to assess whether and how military AI tools could be used as a means of deterrence. Across the various applications of military AI, there are a number of common characteristics of AI that have both positive and negative implications for deterrence.
Autonomy: Integrating AI tools into military capabilities can enhance their autonomy. This applies to capabilities like drones, which have demonstrated the ability to autonomously maneuver, evade adversary countermeasures on the battlefield, and close in on targets. Decision-making could also be delegated to AI agents, which has applications for the use of military force broadly, but is especially relevant for NC3. AI tools could enable cyber defenders to autonomously uncover and mitigate threats or, on the offensive side, automate the steps of the kill chain to gain access to adversary systems, evade detection, and deliver malware. The autonomous nature of AI tools can be a double-edged sword for deterrence. On the one hand, it may increase the credibility of threats if such capabilities can be effectively signaled to the adversary. That said, other aspects of military AI complicate signaling, such as opacity and uncertainty (opacity and incentives for secrecy in particular make signaling capabilities through revealing or brandishing them especially challenging). On the other hand, the autonomy of AI may fundamentally undermine any prospect of effective reassurance if decisions are delegated to AI agents in ways that cannot be easily walked back, or if the adversary is uncertain about the extent to which such delegation can be revoked.
Speed/Scale: AI tools can radically compress the speed with which large volumes of information are processed and analyzed, and in turn enhance the speed of decision-making during crisis and conflict, or the speed and scale with which kinetic effects are delivered against targets. With respect to cyber operations, AI can increase both the speed and scale of attack (more rapidly identifying vulnerabilities and developing exploits at scale) and the speed and scale of defense across the cybersecurity ecosystem. Like autonomy, speed and scale have countervailing implications for deterrence. These characteristics can enhance the credibility of punishment-based deterrence threats, both in terms of the magnitude of the costs inflicted and the challenges of thwarting threats that materialize at the speed of compute. On the other hand, speed can undermine reassurance if the threatened punishment is inflicted so quickly that it cannot be halted.
Precision/Accuracy/Prediction: More and better information about adversary intent and capabilities, powered by AI, can have a range of benefits for deterrence. Deterrence is more likely to fail, and instability more apt to ensue, under conditions of incomplete information, uncertainty, misperception, and so on. Therefore, to the extent that AI’s information-processing capabilities enable states to draw better inferences about adversary capabilities and intent, and make more accurate predictions about likely future behavior, it will be a boon for more effective deterrence. That said, much of this is conditional on the reliability of AI algorithms and the minimization of bias in the underlying data.
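One way to see why data quality matters for AI-enabled inference is through a minimal Bayesian updating sketch. The example below is hypothetical: the prior, the indicators, and their likelihoods are invented for illustration, and it simply contrasts how a discriminating indicator versus a noisy one shifts an estimate of adversary intent.

```python
# Minimal Bayesian sketch of inference about adversary intent, using hypothetical
# numbers. "Hostile" is the hypothesis that the adversary intends to defect;
# each observed indicator updates the estimate. Noisy or biased indicators
# (likelihoods close together, or systematically skewed) yield weak or
# misleading inferences, echoing the caveat about data quality above.

def update(prior_hostile: float, p_signal_if_hostile: float,
           p_signal_if_benign: float) -> float:
    """Posterior probability of hostile intent after observing one indicator."""
    numerator = p_signal_if_hostile * prior_hostile
    denominator = numerator + p_signal_if_benign * (1.0 - prior_hostile)
    return numerator / denominator

belief = 0.20  # hypothetical prior that the adversary intends to defect

# A high-quality indicator: much more likely under hostile intent than benign.
belief = update(belief, p_signal_if_hostile=0.8, p_signal_if_benign=0.2)

# A noisy, low-quality indicator: barely discriminates between hypotheses.
belief = update(belief, p_signal_if_hostile=0.55, p_signal_if_benign=0.50)

print(f"posterior probability of hostile intent: {belief:.2f}")
```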
Looking Ahead
Because both “military AI” and “deterrence” are complex and nuanced concepts, AI is not likely to have a uniform effect on the prospects of deterrence. In exploring this topic, researchers should build on and extend insights from the literature on the challenges of deterrence in cyberspace, as the two technologies share a number of characteristics (such as their broad applications, dual-use nature, and the role of deception and opacity, among other factors). Looking ahead, there are opportunities for scholars to more systematically interrogate assumptions in the AI field about how this general-purpose technology is likely to affect core outcomes in international security, such as deterrence and stability. As a next step, the field may operationalize and test some of the propositions put forth in this essay, such as the conditions under which denial versus punishment strategies that leverage AI may be more effective. Another critical question, which follows from the failure of deterrence, concerns the potential triggers and patterns of AI-enabled escalation.