AGI Eschatology and Security Scatology

November 24, 2025
By Jon R. Lindsay

This article was published alongside the PWH Report “The Future of Artificial Intelligence Governance and International Politics.”

Excitement about artificial general intelligence (AGI) is white-hot, and that makes me suspicious. Definitions of AGI are all over the map, ranging from achieving human-level intelligence, to exceeding human cognition in all areas, to merely optimizing economically relevant forms of cognition, to creating a genuinely novel form of transhuman superintelligent life. But the intuitive idea is that AI will soon be smarter than the smartest people in the room.

The AGI Coming is Nigh! 

Reasonable people have begun speculating openly about the coming artificial intelligence (AI) utopia. Demis Hassabis, founder of Google DeepMind and co-recipient of the 2024 Nobel Prize in Chemistry (for using AI to solve the notorious protein folding problem), foresees “an era of radical abundance, a kind of golden era. AGI can solve what I call root-node problems in the world—curing terrible diseases, much healthier and longer lifespans, finding new energy sources. If that all happens, then it should be an era of maximum human flourishing, where we travel to the stars and colonize the galaxy.” 

OpenAI CEO Sam Altman thinks it has already begun: “We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence.” He hastens to add: “and at least so far it’s much less weird than it seems like it should be.” But the trajectory is similarly utopian: “In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else.”

The technical details about how this might happen vary, but the basic idea is that better AI will make better AI. Developers at Anthropic, for example, are already using Claude to help write software for new versions of Claude. If humans can be taken out of the development loop entirely—a big if!—then an AI super-coder might theoretically create AGI. Recursive self-improvement thus leads to a superintelligence explosion that rapidly exceeds human capabilities and understanding.
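
As a purely illustrative toy (the starting capability and improvement factor below are arbitrary assumptions, not estimates of anything), the arithmetic behind the "explosion" claim is just compounding: if each AI generation multiplies capability by any factor greater than one, the series grows exponentially.

```python
# Toy sketch of the recursive self-improvement argument (illustrative only;
# the numbers are made-up assumptions, not measurements of anything).
capability = 1.0          # current AI "capability," in arbitrary units
improvement_factor = 1.5  # assumed gain each time AI improves its successor

for generation in range(1, 11):
    capability *= improvement_factor
    print(f"generation {generation}: capability ≈ {capability:.1f}")

# With any factor > 1 the series grows exponentially, which is the whole
# argument; the contested part is whether such a factor exists at all.
```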

Excitement about AGI encourages faith-based investment strategies in “hyperscale” datacenters to train better AI models. If AGI can learn everything and do everything, then building it is the only scientific problem that matters, since AGI can then solve all other scientific problems. 

I Looked, and Behold, a Pale Horse 

Indeed, the discourse around AGI has become downright eschatological. AI rhetoric, that is, sometimes sounds like a religious prophecy of the end times. The rhetoric of cyberspace has long had an apocalyptic cant, but there is near hysteria about the imminence of AGI.

Prophetic technologists tell us that in 2027, a godlike machine will arise that outperforms human beings in every cognitive task. The explosion of AGI will eliminate whole categories of jobs, shift the geopolitical balance of power, throw the world into crisis, and spark desperate wars of control and resistance (which is futile, of course). The faithful few who welcome the automated messiah will upload their minds to its cloud to achieve everlasting life. Those left behind will suffer the robot apocalypse. Nerds will be raptured, secular nations destroyed, and the universe given unto transhuman life.

Those who fear the AGI eschaton—the culmination of the final plan—may cast about for a regulatory katechon—that which restrains the end times. AI futurist Eliezer Yudkowsky writes that the “most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.” He thus insists, “The moratorium on new large training runs needs to be indefinite and worldwide.” 

In an amusing twist, libertarian investor Peter Thiel inverts the same religious language to warn of a regulatory antichrist. Existential threat narratives of AGI and climate change, according to Thiel, pave the way for global tyranny. This dangerous regulatory eschaton—a liberal antichrist no less—requires a defensive katechon of AI-enabled surveillance states and military competition to balance against the world-tyrannical antichrist. Thiel explicitly names Yudkowsky as an antichrist candidate, along with climate activist Greta Thunberg. Never mind that the spectacle of one of the world’s richest men denouncing a young woman advocating for justice might itself be described as diabolical.

Visions of AGI competition in anarchy are hardly more reassuring than dystopic visions of robot apocalypse. A popular essay by Leopold Aschenbrenner imagines that AGI will “Provide a decisive and overwhelming military advantage” and “Be able to overthrow the U.S. government.” A report on “AI 2027” from the AI Futures Project likewise provides a speculative smorgasbord of national security concerns, compactly capturing fears that are more widely shared in the defense intellectual community: “Defense officials are seriously considering scenarios that were mere hypotheticals a year earlier. What if AI undermines nuclear deterrence? What if it’s so skilled at cyberwarfare that a six-month AI lead is enough to render an opponent blind and defenseless? What if it could orchestrate propaganda campaigns that beat intelligence agencies at their own game? What if some AIs ‘go rogue?’…They have to continue developing more capable AI, in their eyes, or they will catastrophically lose to China…. If AI progress threatened to overturn nuclear deterrence, could America and China avoid nuclear war? If someone found evidence of AIs going rogue, could the two countries halt research until they better understood the threat? How could such an agreement be monitored and enforced?” 

As the superintelligence explosion continues inexorably along its exponential path, the speculative science fiction of “AI 2027” imagines two alternative endings for 2030: In “Slowdown,” AI safety regulations lead to beneficent AGI that is “unimaginably amazing in almost every way,” but in “Race,” our story culminates in the galactic domination of “wildly superintelligent” AGI and the extinction of humanity (except for a few dumb pets “sitting in office-like environments all day viewing readouts of what’s going on and excitedly approving of everything, since that satisfies some of Agent-4’s drives”). 

The Principle of Explosion 

The heady prospect of AGI even captivated the godfather of geopolitics, Henry Kissinger. His last and posthumously published book was on AI, co-authored with Eric Schmidt, the former CEO of Google and chairman of the U.S. National Security Commission on Artificial Intelligence; Craig Mundie, a former chief research and strategy officer at Microsoft; and Eleanor Runde. In their chapter on security, they assume that AGI will be a decisive factor in politics because it can out-think, out-maneuver, and out-fight human beings. States will thus rush to be the first with AGI, which means “a security dilemma of existential nature awaits” (112).

Yet, their account is riddled with contradiction. The authors assert that “AI will become a key determinant of order in the world” (109), and yet “AI is an unselectively destabilizing force; its emergence, if unmanaged, poses as great a risk to its creators as to its users” (124). They tell us that AI will create a world “where no leaders could trust their most solid intelligence” (115), and yet “machines will possess highly precise capabilities of killing humans with little degree of uncertainty and with limitless impact” (128). Thus, anything could happen: “If the future is a competition to reach a single, perfect, unquestionable dominant intelligence, then it seems likely that humanity will either lose control of an existential race among multiple actors or suffer the exercise of supreme hegemony by a victor unharnessed by traditional checks and balances” (118).

In short, AI will bring order and chaos at once, creating both ignorance and certainty, anarchy and hegemony, and other divine contradictions. The automated godhead will be artificial yet intelligent, non-living yet new life, stochastic parrot yet bird of prey, profane machine yet sacred transcendence. 

And here precisely is the rhetorical brilliance of the AGI narrative: Ex falso quodlibet, from falsehood follows anything. Logicians know this as the principle of explosion. If we assume that P and NOT P are both true, then P OR Q must be true for any Q, since P is true by assumption; and since NOT P is also true by assumption, the left disjunct is ruled out, so we may conclude that any Q must be true as well.
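
For the formally inclined, the derivation can be checked mechanically. Here is a minimal sketch in Lean 4 (using only the core library) of exactly the two steps just described: from P we get P OR Q, and because NOT P rules out the left disjunct, an arbitrary Q follows.

```lean
-- Ex falso quodlibet: from P and ¬P, any Q whatsoever follows.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  -- Step 1: P ∨ Q holds, because P holds by assumption.
  have hpq : P ∨ Q := Or.inl hp
  -- Step 2: the left disjunct contradicts ¬P, so only Q can remain.
  hpq.elim (fun p => absurd p hnp) (fun q => q)
```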

Insofar as AGI narratives are founded on contradiction, anything is possible. I think this means we ought to be as worried about the logical explosion of AGI hype as we are about a superintelligence explosion.

Automated Bullshit 

One could be forgiven for suspecting that AGI discourse is not just eschatology but also scatology—bullshit in Harry Frankfurt’s sense of speech meant to persuade without interest in truth. While liars care enough about the truth to hide it, bullshitters don’t give a damn. According to Frankfurt, “bullshit is a greater enemy of the truth than lies are.”

The best bullshit is generated by AI itself, of course: LLMs like ChatGPT are bullshit machines that generate tokens that plausibly follow a prompt given the training data. The machine does not care one way or another. Just ask any LLM to explain why a superintelligence is inevitable, then ask the same LLM to explain why AGI is bullshit. The machine will mindlessly make up convincing-sounding arguments on each side.
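
The experiment is easy to run. Here is a minimal sketch (assuming the openai Python client, an API key in the environment, and an illustrative model name) that feeds the same model both prompts; it will produce fluent, confident prose either way.

```python
# Sketch: ask the same model to argue both sides of the AGI question.
# Assumes the `openai` Python package (v1+) and an API key in the
# environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for prompt in [
    "Explain why superintelligence is inevitable.",
    "Explain why AGI is bullshit.",
]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # Both answers will read as confident and plausible; the model has
    # no stake in which one is true.
    print(response.choices[0].message.content, "\n---")
```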

Overreliance on bullshit machines risks making people into better bullshitters as well. As we forgo the habits of literacy and the hard work of critical thought, thinking machines produce unthinking people. Just as cheap calories in fast food encourage an epidemic of obesity, cheap talk and empty attention from our glowing rectangles leave us feeling stupid and hollow. It is not just our applications that are being enshittified, but our minds as well. Elsewhere, I’ve described this as AI’s Aurochs Intelligence problem.

Incidentally, this suggests that the jobs most vulnerable to replacement by AI are jobs most prone to bullshitting. Small wonder that AI has found wide use in advertising, marketing, journalism, education, middle management, and therapy. I’m not saying that all advertisers, marketers, journalists, educators, managers, and therapists are bullshitters, but the good ones are going to have to work a little harder to show that they give a damn. 

This has some obvious implications for national security decision-making, a realm that has been known to wallow in bullshit from time to time. Here’s looking at you, staff officers, doctrine writers, and PowerPoint rangers.

Good policymaking or intelligence analysis is a process, not just a product: an ongoing conversation about what matters and why. Performative presentations and slick snake-oil sales jobs make it harder to have such conversations in earnest. Overreliance on AI bullshit corrupts the intellectual process.

The Market for Bullshit 

AI-generated bullshit is neither the only nor the most serious problem. The bigger problem is motivated bullshit about AGI from special interests in government and industry. This is the bullshit market for bullshit machines.

Any discussion of AGI must begin by acknowledging serious incentives to overhype advanced technologies. The nerdy term for this is “securitization.” Cyber threat discourse, for example, is often securitized by exaggerating the threat of catastrophic warfare over the more pervasive, but mundane, threats of espionage. 

AI is big business: “Morgan Stanley sees cumulative investment in U.S. data centres reaching $3 trillion by 2029. McKinsey & Co. anticipates the bill will exceed $5 trillion by 2030.” The same AI firms that stand to benefit from AGI are also the ones advocating for safety regulations on it, which will conveniently make it harder for AI upstarts to compete.

The same defense contractors and acquisition offices now building expensive AI wonder weapons will likewise benefit from brandishing the prospect of Chinese or Russian victory in the race for superintelligence. Both dreams of AGI paradise and nightmares of AGI apocalypse are useful for selling AI products and policies.

There are many reasons to be skeptical about AGI threat narratives, however. The data on AI progress that make stories of AGI seem so compelling are highly interpreted and contested. Exponential extrapolations like those in “AI 2027” may look precise and scientific, but they are full of dubious assumptions about relationships and models. The illusion of precision serves the motivated biases mentioned above for exaggerating the imminence and severity of AGI. Indeed, the politicization of quantitative data is a general problem in national security.

And motivated misrepresentation is hardly the only problem. 

Giving a Damn 

I hold the unpopular opinion that the chances of AGI appearing in 2027 (or 2030 or 2041) are approximately zero. I do think that AI systems will outperform humans in many specific tasks, as they already do, e.g., in the games of chess and Go, but machines have been outperforming human abilities in many areas for as long as there have been machines.

But if AGI means that a single given system will exceed human cognitive abilities in all areas, I think that the chances of this are vanishingly small. The chances are not precisely zero because quantum mechanics and Stanley Kubrick tell us that there is a small chance that the planet could spontaneously turn into a giant baby. But the likelihood of AGI is very close to zero because AGI narratives are based on a set of fundamental misunderstandings about what and how we are. 

The misunderstandings that plague the discourse of AI are fundamental because they concern deep questions about the nature of intelligence, information, technology, organization, politics, and war that social scientists and philosophers have been wrangling with for decades, if not millennia. I obviously cannot do any of these issues justice in a short piece like this, and that, too, is part of the problem: Nobody has time to do due diligence on all the assumptions that feed into AGI narratives.

But the philosopher John Haugeland captures the essence of the problem well: “The trouble with Artificial Intelligence is that computers don’t give a damn.” While Haugeland was writing about an earlier generation of symbolic AI in 1979, modern LLMs are still unable to make meaningful moral and institutional commitments to an embodied, pragmatic, social world. AI supercomputers still don’t give a damn. 

Calling Bullshit 

AI doesn’t give a damn, but war is damn serious business. AI has literally no skin in the game, but humans are resolved to kill and die. AI enables low-cost killing, but AI-enabled war is still a costly bargaining process. 

Indeed, there are contradictions at the heart of military AI that explode the credibility of the AGI narrative. AI generates disembodied data, but data infrastructure relies on—and exploits—land, labor, and capital. AI relies on institutions, but wars occur in anarchy. AI augments decision making, but command decisions are pragmatic and intuitive. AI automates military prediction, but automated warfare must depend even more on human beings as a result. 

The core problem is that technology cannot automate the full military OODA loop (i.e., the decision cycle of observing, orienting, deciding, and acting). We have fancy sensors that can observe more than ever and generate vast amounts of data. We also have automated weapons that can act quickly, precisely, and lethally at a distance, with little human risk. Now, we also have prediction machines (i.e., AI) that can use statistical methods to fill in missing data. Yet, the core function of judgment that guides the entire OODA loop—including things like goals, values, norms, preferences, morals, meanings, etc.—remains as much a human function as it was in the Bronze Age (and that should terrify you).  

What happens when the bullshit foundations of AI hype become too obvious to ignore? We are probably in the mania phase of an AI bubble now, but an AI crash may not be far away. Already, we see troubling warning signs of unbridled media hype, skyrocketing valuations of AI firms, massive capital expenditure on datacenters with exorbitant replacement costs, and glimmerings of consumer backlash. What happens when dreams of future prosperity founded on endless AI returns begin to founder in the next few years? 

A financial crisis triggered by a rapid retreat from AI could be more problematic for global security than any intelligence explosion. Even worse, these same few years will unfold during a hyperpolarized US election season, the maturation of Chinese military preparations for Taiwan, the culmination of ethnic cleansing in Israeli territory, and a smoldering nuclear crisis over Ukraine. I am ultimately far more concerned about the politics of 2027 than the advent of AGI. 

The Real AGI Threat 

To sum up, the sort of AGI I am really worried about is not the advent of uncontrollable superintelligence, but rather the boringly familiar effects of Arrogance, Grift, and Ignorance. Arrogant technologists tell eschatological tales of superintelligence. Grifters amplify the AI prophets in pursuit of AI profits. Ignorance about the true nature of knowledge and politics then covers over the real dangers of AI. Perhaps even the gods cannot save us from this sort of AGI.

I am not for a second saying that there is not an important role for AI systems in national security organizations. On the contrary, the future of military automation is as long and complicated as its history. Military information practice has grown more complex over the last century, and AI-enabled weapons and decision aids are further increasing the complexity. Reliance on digital technology worldwide has expanded both the opportunities and liabilities of secret statecraft, and AI just makes intelligence and covert action more complex as well. Yet, the hype around AGI tends to cover over the practical matters of military automation. 

The real national security challenges of AI have much less to do with superintelligence and more to do with normal organizations and politics. The “normal” problems of human-computer interaction in national security deserve more attention than I can give them here, so I will merely list a few. These include overoptimism about automated capability, miscalculation based on great expectations for AI, stumbling into long bloody wars that AI fails to win, baffling organizational complexity to accommodate AI, malinvestment in AI technology over human capacity, growing illiteracy of officers and policymakers overdependent on AI, and perhaps a global financial crisis of AI disappointment. These may not be the end times, but neither are they happy times.

About the author

Jon R. Lindsay is an associate professor at the School of Cybersecurity and Privacy and the Sam Nunn School of International Affairs at the Georgia Institute of Technology (Georgia Tech). His most recent book is Age of Deception: Cybersecurity as Secret Statecraft (Cornell, 2025).