Dehumanization and Public Support for Emerging Technologies on the Battlefield
This article was published alongside the PWH Report “The Future of Artificial Intelligence Governance and International Politics.” The views expressed in this article are those of the author and do not represent the policies or opinions of the U.S. Government, U.S. Department of War, or U.S. Department of the Army.
In 2024, the Campaign to Stop Killer Robots, a coalition of non-governmental organizations seeking to ban lethal autonomous weapon systems (LAWS), warned of the dangers of remote platforms in warfare, given their capacity to “dehumanize” targets and reduce them “to data points.” This sentiment echoes a body of research that highlights how emerging technologies, such as drones, can dehumanize combatants during war. Scholars assert that “the detached nature of drone warfare” diminishes the “psychological barriers of killing” and “hollow[s] out humanity” in war, enhancing support among the public and military for the lethal use of remote platforms.
Such claims, however, have not been empirically tested. Do emerging technologies shape public support for the use of force by dehumanizing targets? Public perceptions of the narratives elected officials employ to frame populations and the use of violence against them influence foreign policy decision-making. Indeed, when combatants are framed as less than human, political officials have greater latitude to use newer military capabilities in ways that impose unnecessary harms on civilians, introducing a moral hazard. Dehumanization is both a cause and an effect of public support for emerging technologies, a duality that can distort our understanding of public attitudes toward their use. Investigating dehumanization allows us to understand how emerging technologies can not only produce a scopically constructed regime that “others” populations, contributing to what Anne Rogers refers to as “social-techno blindness,” but also shape the perceived legitimacy of operations.
What is Dehumanization?
Dehumanization is the practice of characterizing others as less than human or unworthy of full humanity. While the concept has been criticized as simultaneously “too specific” and “too vague,” the practice has informed the construction of specific groups as less than human across history and legitimated a wide range of discriminatory, predatory, and violent actions. These include genocide, dispossession of native lands, racial segregation, and labor exploitation. Dehumanization is the process by which designated populations are reduced to animals or objects and perceptions of militarized threats or risks are constructed.
Research suggests that dehumanization tends to take two forms—animalistic or mechanistic. Animalistic dehumanization equates humans to animals. It denies humans cognitive characteristics that differentiate them from animals, such as self-control, critical thinking, and rationality. Such sub-humanization characterizes others as “inferior,” creating “psychological distance by generating contempt, disgust, or hatred.” Verbal depictions and stories activate emotions that shape public attitudes toward the use of force, and can be harnessed to legitimate the killing of targeted groups. Mechanistic dehumanization, on the other hand, strips humans of distinguishing features, such as phenotypical characteristics of sex and race. This form of dehumanization equates humans to inanimate objects—data, machines, and robots. Thus, mechanistic dehumanization is also referred to as “digital” or “technological” dehumanization. Like animalistic dehumanization, mechanistic dehumanization results in combatants’ psychological distancing from targets, but through “cold indifference,” which enhances the propensity to kill. Modern technologies, which respatialize war by increasing the distance between combatants, may exacerbate this trend. Lucy Suchman, for example, claims that drones, especially those enhanced with artificial intelligence (AI), are liable “to prosecute dehumanized targets anywhere in the world at will.”
In sum, animalistic and mechanistic dehumanization are two mechanisms that influence decision-making concerning the use of force, resulting in what Susan Opotow refers to as “moral exclusion.” Such moral exclusion operates through distinct “framing effects,” consisting of rhetoric and symbols, that further remove “human psychology . . . from the act of killing” and contribute to “the manufacture and obliteration of populations as objects of knowledge and targets of war.” These framing effects activate unique micro-foundations—values and beliefs—that shape different outcomes in terms of psychological distancing. Animalistic dehumanization rests on a “hot” or affective micro-foundation; mechanistic dehumanization derives from a “cold” or cognitive micro-foundation. Thus, while both forms of dehumanization are designed to deny targets full humanity, these hot and cold micro-foundations legitimate killing in different ways.
Dehumanization and Modern Warfare
What does dehumanization have to do with emerging technologies? The terrorist attacks of 9/11 ushered in an era of remote warfare characterized by the increased use of armed and networked drones, such as the MQ-9 Reaper, to enhance force protection while surgically targeting combatants and reducing collateral damage. Countries and non-state actors acquire remote platforms for many reasons, ranging from counterterrorism applications to status-based considerations. Indeed, drones’ perceived operational effectiveness is an important driver of their global proliferation. Notwithstanding their limited survivability in interstate conflict, such platforms can perform radical maneuvers, are reusable, and provide persistent overwatch of targets.
However, these drones are semi-autonomous, meaning they remain under human control. LAWS, or “killer robots,” such as the U.S. Air Force’s “loyal wingman,” by contrast incorporate AI to identify, track, and engage targets on their own, without human oversight. Policymakers, practitioners, and war theorists argue that LAWS constitute a shift in the character, if not the nature, of war, referring to how and why countries fight each other, across all levels (tactical, operational, strategic) and domains (air, land, sea, space) of conflict. In this case, observers privilege a narrow rather than generative form of AI. The latter rests on generative pre-trained transformers, such as those underpinning large language models like ChatGPT, which layer algorithms into artificial neural networks that use probabilistic reasoning to classify objects and forecast outcomes based on presumably representative data. Narrow AI is more function-specific: it is designed for constrained tasks, such as the targeted killing of terrorists, informed by presumably representative data.
Together, the integration of semi-autonomous and fully-autonomous drones into countries’ military arsenals has seized the public’s imagination. Regardless of their exact developmental process or speed of onset, these capabilities are celebrated for their novelty, excitement, and revolutionary potential. Heightened casualty aversion and expectations of enhanced precision have contributed to the emergence of semi-autonomous drones as a quintessential weapon of war. Reporting also indicates that LAWS have leveled the playing field between opposing parties during conflicts in Libya, Nagorno-Karabakh, and Ukraine. In Ukraine, for example, Russian forces use Lancet drones to engage preselected targets without human oversight, and Ukrainian forces use Punisher drones in tandem with smaller reconnaissance drones to autonomously strike targets. Drones also offer key political dividends. They allow officials to circumvent democratic accountability and public oversight for the use of force. As such, Kenneth Pollack and Daniel Byman argue that drones have “become the U.S. tactic of choice in more and more situations, to the point where they are sometimes elevated to the default strategy itself.”
Semi-autonomous and fully-autonomous drones intersect with animalistic and mechanistic forms of dehumanization in four different ways. First, rhetorical-operator warfare combines animalistic dehumanization with a semi-autonomous drone, such that operators, staff, commanders, and policymakers use words to deny targets’ humanity. Second, symbolic-operator warfare integrates mechanistic dehumanization with a semi-autonomous drone, wherein the optics used to target combatants strip them of their humanity. Third, rhetorical-agentic warfare brings together animalistic dehumanization with a fully-autonomous drone, grounded in key assumptions that drive the development of LAWS, especially shortening the sensor-to-shooter timeline so that militaries can maintain lethal overmatch of adversaries. Finally, symbolic-agentic warfare combines mechanistic dehumanization with a fully-autonomous drone, wherein the AI encoded in LAWS, as well as the data used to enable it, may reflect racial and gender bias that leads to perverse outcomes, such as target misidentification and the targeting of a racialized and gendered “Other.”
Studying Dehumanization and Public Support for Emerging Technologies
What do we know about how dehumanization shapes public attitudes toward emerging technologies? Not much. While research evaluates military operations that undermine targets’ “human dignity,” there is limited evidence of this effect on public attitudes. Scholars also study how drone strikes are informed by racial and gender bias. In previous studies, my co-authors and I find that variation in targets’ skin color and location does not produce higher levels of public support for U.S. drone strikes. These outcomes were similar when respondents were primed to think about variation in targets’ sex, gender role, and race. We find that the intersection of targets’ lived identities does little to motivate higher levels of perceived legitimacy for U.S. drone strikes. However, in both of our studies, public support for drone strikes was highest among the control groups that did not vary targets’ skin color, location, gender role, and sex. These findings suggest that how much the public knows about a target, or the degree to which the target is humanized, can shape attitudes toward U.S. drone strikes.
To empirically test the relationship between dehumanization and public support for U.S. drone strikes, my co-authors and I recently fielded a pre-registered, image- and text-based survey experiment among a representative sample of Americans. Jennifer Erickson suggests that Americans are a reasonable barometer for public opinion on the use of force abroad, given the scope and scale of U.S. interventions across the globe. Our survey examines how variation in the way targets of U.S. drone strikes are presented (in words or pictures), as well as in the decision-making authority for an operation (an operator or a machine), may shape public support. To test the generalizability of our findings cross-nationally and in times of war and peace, we also field our survey among a representative sample of Israelis, thus assessing how the ongoing war in Gaza may condition public support for emerging technologies that are claimed to dehumanize targets.
What do we anticipate finding? Existing studies offer mixed insights. Emerging technologies provide strategic, operational, and tactical benefits, as well as apparent moral advantages, that enhance their appeal despite the potential for dehumanization. Some of these findings are informed by text-based surveys, which implies that the way drone operations are discussed, including how targets are rhetorically framed, shapes public attitudes. Research also suggests that several considerations, such as proximity to the battlefield and operator control, can moderate public support, though these outcomes have much more to do with how technologies are visualized.
The implications of distinct patterns of animalistic and mechanistic dehumanization for public attitudes toward emerging technologies used during war are less clear. Drawing from rational expectation theory, Scott Gartner and Christopher Gelpi argue that “information matters—not presentation style.” Yet, they also recognize that “informational cues influence individual beliefs and emotional states in systemic ways.” This deduction aligns with communications studies showing that variation in observers’ psychological distance from war, measured in terms of rhetoric and symbols, has differential impacts on public attitudes toward war. Informed by our previous research, we therefore expect that mechanistic dehumanization, framed by images, may have a more profound impact on public support for emerging technologies used during war than animalistic dehumanization, framed by words.
It is equally likely that the type of psychological distancing imposed by animalistic and mechanistic dehumanization is shaped by different micro-foundations that exercise an indirect effect on overall attitudes. Existing research suggests that animalistic dehumanization evokes feelings of contempt and disgust, and that anger, a hot or emotive mechanism, causes observers to relativize a target, strongly shaping public attitudes toward the use of semi-autonomous drones. Though largely non-causal, research also suggests that fully-autonomous drones elicit cold indifference and “non-human vision” that accounts for observers’ psychological distancing, or their mechanistic dehumanization of targets. Thus, a cold or cognitive mechanism may indirectly shape public attitudes toward this emerging form of violence. Overall, our study promises to shed new light on the dehumanizing potential of emerging technologies in terms of public support for the use of force abroad, which has important implications for policy, strategy, and military modernization.