International Security and Artificial General Intelligence

November 24, 2025
By Alex Weisiger

This article was published alongside the PWH Report “The Future of Artificial Intelligence Governance and International Politics.”

The potential emergence of artificial general intelligence (AGI) generates obvious questions for both policymakers and scholars of international security. As a scholar of international security, I am not equipped to speak with confidence about how likely the imminent development of AGI is, so this reflection will focus instead on the likely implications of AGI for international security should it be developed. In brief, I argue that, while AGI would likely give the nation developing it advantages in the military realm, the exact nature of those advantages is uncertain and easy to overstate, and first-mover advantages are therefore smaller than one would anticipate absent attention to these limits. This conclusion rests on two general observations about war:  

  • The highly competitive nature of war forces contemporary militaries to respond in ways that limit the opportunity for even a perfect intelligence to achieve easy strategic victories. 
  • Applications of AGI on the battlefield will confront many of the same challenges that make achieving decisive victories in contemporary major war, especially war between great powers, extremely difficult.  

To the extent that policymakers share these conclusions, incentives for secrecy and highly accelerated pursuit of AGI will be relatively limited, and preemptive and preventive conflict associated with the development of AGI should be unlikely. 

At the most general level, there are two primary ways in which AGI might be expected to outperform humans: it could make better (smarter) decisions, and it could (indeed certainly would) make faster decisions. Because the reasons why AGI would likely be less revolutionary than one might think absent a consideration of existing political and military realities differ somewhat between these two settings, I first address the limits of superior intelligence and then discuss the factors that limit the advantages arising from superior speed. 

The Limits of Genius in Competitive Environments 

In any highly competitive environment like war, adversaries quickly learn to avoid predictability; in game-theoretic terms, they adopt mixed strategies that limit the likelihood that any one strategy by the adversary will guarantee victory. In these circumstances, even a perfect intelligence can, and will, still lose a significant portion of the time, and even victories will not be costless. For all that historians at times rightly castigate generals for failure to adapt—the standard narrative on the futility of World War I tactics is apt—there do not exist “perfect” strategies or tactics that cannot be defeated. Indeed, if we extrapolate from experiences with contemporary AI, which simultaneously demonstrates impressive capacities for problem solving across a range of domains and exhibits unexpected basic failures of reasoning, there is a significant risk that, especially upon initial deployment, a real-world military AGI would have blind spots that motivated adversaries could identify and exploit. If this point is correct, then an AGI that significantly outperforms humans on average would have both fewer opportunities for dramatic gains and greater risks of unexpected failures than is commonly recognized. 
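To see the mixed-strategy logic concretely, consider a stylized attacker-defender game; the payoff matrix and the specific numbers below are purely illustrative assumptions chosen to show the logic, not estimates of any real campaign.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A minimal sketch of the mixed-strategy point, with purely illustrative numbers.
% Rows: the attacker strikes avenue A or B; columns: the defender masses on A or B.
% Each entry is the attacker's assumed probability of breaking through.
\[
M = \begin{pmatrix} 0.2 & 0.9 \\ 0.9 & 0.2 \end{pmatrix}
\]
% If the defender randomizes, covering each avenue with probability 1/2, then even
% an attacker who calculates perfectly breaks through with probability
\[
\tfrac{1}{2}(0.9) + \tfrac{1}{2}(0.2) = 0.55,
\]
% whichever avenue it chooses. Superior reasoning can extract the best available
% odds from the game, but it cannot eliminate the chance of failure.
\end{document}
```

The point generalizes: so long as the defender avoids predictability, even optimal play yields a probability of success strictly short of certainty.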

We can see this dynamic at play at different levels of war, starting at the strategic level. Cases of leaders outwitting adversaries (especially peer competitors) typically involved risks whose ultimate advisability could not be known in advance. To take the most prominent example from the past century, the German conquest of France in 1940 relied on a successful deception about the primary target of the German advance, but also entailed tremendous risks: had French and British generals diagnosed German strategy more quickly and diverted forces being poured into the Low Countries toward the Ardennes, the attacking German army could have been cut off and encircled. In this context, an AGI might be able to identify the best possible gamble, but it could not change the reality that it would be a gamble. 

The most plausible scenario in which AGI might significantly outperform humans is in the cyber realm, where an advanced artificial intelligence (AI) might identify and exploit cyber vulnerabilities far more effectively than humans have been able to do. (Cyber conflict has imposed significant costs on targets, both in losses from successful attacks and in the form of efforts to mitigate vulnerabilities, and has at times had important real-world effects, but at no point have cyberattacks produced the kind of general disabling of an adversary’s society that motivated past concerns, even in contexts like the Ukraine War in which a capable actor like Russia has every incentive to deploy its full cyber arsenal.) Even here, though, the steps that have been taken to limit the vulnerability of critical systems to human cyberattacks would limit the effectiveness of AGI-directed cyberattacks—there is, for example, no reliable way for such an AGI to circumvent air gapping. There is thus a significant gap between the claim that AGI might conduct cyberwar more effectively than current actors can and the expectation that it could render the enemy defenseless. 

Similar dynamics arise on the physical battlefield. In the presence of overwhelming firepower, massing forces in any one part of the battlefield invites their destruction. The combination of dispersion into cover and concealment, combined arms, and mobile defense that trades territory for time creates a situation in which breakthroughs are exceptionally difficult to achieve: all the intelligence in the world cannot reason around a well-prepared defense. An AGI able to accurately simulate extremely complex scenarios would in principle be able to identify the cheapest route through prepared enemy defenses, but the cheapest route would still be far from free. 

The Limits of Speed on the Contemporary Battlefield 

If there are fewer opportunities for genius to achieve decisive victories than a naive projection might expect, there still is the possibility that the speed with which an artificial general intelligence would act could prove decisive. This is the realm of many of the so-called “wonder-weapons” that prognosticators speculate might give the side possessing them a decisive advantage.1 Almost by definition, we cannot predict exactly what such a weapon would look like: typical proposed examples contemplate an AGI that is capable of carrying out a fully disarming cyberattack, that can render the battlefield (or the location of the adversary’s nuclear arsenal) fully transparent, or that can outreason and outexecute human warfighters on the battlefield or in the political realm. 

Any such developments would undoubtedly change the nature of war dramatically, providing a significant advantage to the actor able to deploy them. At the same time, however, they would not eliminate (and potentially might exacerbate) several important constraints on the ability of powerful states to translate military capabilities into desired political outcomes. Given the breadth of possible directions in which AGI might influence war, it is not possible to discuss every relevant obstacle, but the limitations discussed below are likely to apply to other possible AGI-enabled innovations as well. 

Most obviously, deploying a military AGI would not insulate a country against an adversary’s nuclear capabilities.2 In a world in which hiding nukes is easier than finding them, launching them is easier than defending against them, and a single successful attack has the potential to impose intolerable costs, it is almost impossible to envision a scenario in which rational policymakers would choose to attempt a disarming strike. In short, nuclear weapons will continue to keep direct challenges to the core strategic interests of nuclear-armed states off the table. 

Of course, most countries do not have nuclear weapons, and great powers care about issues beyond their core strategic interests, disputes over which the knowledge that both sides have nuclear weapons is no guarantee of peace. An AGI that could operate at superhuman speeds on the battlefield would be able to fight in ways that humans cannot, for example through drone swarms that could overwhelm and destroy local defenses or through monitoring that renders the battlefield effectively transparent. From this perspective, the features of modern war that render successful offensives against well-prepared and competent defenses so difficult might no longer pertain.  

In practice, however, as we see in Ukraine, every innovation spurs counter-innovations to limit its effects. As drones grow in importance, both sides invest ever more heavily in both high-tech (e.g., targeted jamming systems) and low-tech (netting) defenses. Any technology that gives one side a significant advantage becomes the obvious target for the other, and firepower in peer-to-peer combat is so intense that no system would reliably survive long on the battlefield. As a result, optimal incorporation of AGI into a military would be resource intensive, requiring constant replenishment of key platforms. The rate at which these platforms could be resupplied would then constitute a constraint on the effectiveness of a military that has developed and successfully incorporated AGI into battlefield operations. 

In this context, the gap between militaries that have effectively incorporated AI into conventional warfare and those that have not would grow further—the Iraq War over twenty years ago already provided the lesson that militaries unprepared for modern battle should not attempt to fight a conventional war against an elite opponent. The gains relative to an advanced military that does not employ AGI, while real, would be smaller. Indeed, many of the speed-related benefits of AI would be available well before AI reaches the level of AGI. The benefits of AGI might thus be real and significant, but they would not be fundamentally paradigm-breaking in the way that arguments about the destabilizing political consequences of AGI presume. 

Finally, even in the event that an innovator manages to achieve a decisive conventional victory on the battlefield, political constraints on the translation of military power into desired political outcomes would not disappear, and indeed in a world of AGI might become more pronounced. Insufficient military capacity was not the reason that the United States was ultimately unable to impose its preferred form of government on Afghanistan and Iraq. In this context, while AI-informed propaganda certainly can pollute the information space and render targets skeptical of even genuine messages, it is not the case that there exist perfect formulations of messages that would convince policymakers, soldiers, or citizens to act against their core interests. At the same time, the belief that messages aligned with the adversary’s preferences reflect AGI propaganda would likely harden nationalist resistance, exacerbating governance challenges in the event that an AGI-empowered military is able to achieve significant territorial gains. Similarly, the frequency with which nuclear powers have struggled in conflicts with nonnuclear adversaries suggests that even in the highly unlikely event that one country were to establish a pathway to a reliable nuclear monopoly, its ability to translate that advantage into victory in everyday international disputes would remain open to question, while the obstacles to a drive for pure hegemony would remain.  

Implications for Contemporary Politics 

None of the discussion above should be understood as implying that there would not be significant potential military benefits to the development of advanced AI, especially in the longer term. (For example, well-supplied defensive positions that effectively incorporate advanced AI would become even more formidable as it becomes ever more dangerous to operate in their vicinity.) Instead, the point is simply that there are significant limits to what those gains would allow the first country to deploy AGI in the military realm to accomplish on the basis of that innovation. In this context, first-mover advantages will be comparatively limited, and the pursuit of AI in the military realm will be politically destabilizing only in the event that leaders buy into claims that advanced AI will be more immediately impactful than an assessment grounded in an understanding of the broader political landscape would imply.3 Whether or not artificial general intelligence is developed in the coming years, wars of the future will differ from wars of the past, but the lessons (and limits) of past wars will still apply.  

About the author

Alex Weisiger is associate professor of political science at the University of Pennsylvania.