The Role of “Everyone Else” in the International Effort Toward Responsible AI
This article was published alongside the PWH Report “The Future of Artificial Intelligence Governance and International Politics.”
More than 40,000 people participated in the AI Action Summit in France earlier this year, while roughly 10,000 people participated in the AI for Good Summit at the United Nations this past July. More than 73,000 people attended major technical AI conferences in 2024. These are all just a small fraction of the millions of people working on AI around the world. Much has been written about what world leaders have said about the adoption of AI, what potential regulation might look like, and what “responsible AI” means, but current efforts are insignificant in the face of so much AI development and use around the world.
AI technologies are being rapidly adopted; the limited regulations that exist have done little to keep the harmful uses and impacts of AI in check; and the concept of “responsible AI” is being advanced more successfully in the realms of academia, governments, and militaries than in commercial sectors.* Moreover, international relations are in flux; the role of the United States in international deliberations is uncertain; and global tensions are increasing. Rather than focusing solely on political and world leaders, perhaps it is time to also turn our attention to the millions of people who spend their workdays directly involved in how AI is developed and used.
To be clear, high-level discussions with high-level officials serve a valuable purpose: they signal to the world the direction that countries believe we should take. When countries agree on certain issues, that helps clarify steps forward for everyone else. However, at this point, AI is being developed and deployed at astronomical speed and scale.
With governments unable to keep up with this pace, waiting for global, or even national, action is inefficient at best and dangerous at worst. The International Center for Advocates Against Discrimination (ICAAD) mapped known AI harms to ten fundamental human rights and found not only that AI poses threats to every one of those rights, but also that there are well-documented cases of all but one of them being violated for large groups of people.
Addressing AI harms, such as human rights violations, will require government action; however, in the interim, the responsibility of keeping AI safe has nowhere else to fall but on the shoulders of developers and users. In the absence of strong governance, two groups stand out as being best suited to influence the future development and use of AI: technical organizations and militaries.
The Importance of Technical Organizations
Many technical organizations have already stepped up to improve the likelihood that AI will be developed and used responsibly and safely. The Institute of Electrical and Electronics Engineers Standards Association (IEEE-SA) published Ethically Aligned Design in 2019, and with it, they produced a series of standards that provide guidelines for developing AI ethically and responsibly. The Association for Computing Machinery (ACM) produced a Code of Ethics and Professional Conduct, while the Association for the Advancement of Artificial Intelligence (AAAI) has run a conference on AI, Ethics and Society for the last seven years. Meanwhile, the NIST AI Risk Management Framework is one of the go-to resources for engineers working with AI risk management and related issues. Many other technical organizations and standards institutes around the world are also producing more guidance and recommendations.
Admittedly, these are all efforts that were already underway even as AI technologies were developed, deployed, and caused harm. Research even found that when the ACM Code of Ethics was passively adopted in isolation, it was not used. However, there is an important caveat here: none of the efforts listed above, including the ACM Code of Ethics, were in fact passively adopted in isolation. In every case, significant effort went into promoting the resources and providing guidance for how to implement them.
These efforts may not have prevented all AI harm, but when taken together, they are all helping to shift the existing culture of “move fast and break things” to a culture that encourages more ethical and responsible practices. With more public and governmental awareness and support, these efforts could be more effectively adopted by a wider range of developers and commercial organizations.
Additionally, to adopt ethical and responsible AI practices more broadly, even in the absence of governance, one other group should be incorporated into these discussions and efforts: militaries.
The Surprising Role of Militaries in Responsible (and Ethical) AI
Most international discussions around, and governance of, AI tend to exclude military uses, either explicitly stating that such uses fall outside general regulatory efforts or implicitly avoiding them by eschewing words such as “military,” “defense,” or “weapons.” Many civilian developers, understandably, do not want to be associated with military activities, and they do not want their creations to be used in war. However, as mentioned above, “responsible AI” sits mostly within the realms of government and military. For as much criticism as is directed toward military use of AI, that space is also where some of the strongest efforts to adopt AI responsibly can be found.
There are a few primary reasons to incorporate military uses of AI into all other discussions of AI development, use, and governance.
First, anyone worried about problems with AI should be thinking about all use cases of this dual-use technology. If the military domain is excluded from AI discussions and governance, then that means that neither the deadliest use cases nor the use cases most likely to violate fundamental human rights are being considered.
Military use of AI is already happening; many militaries are making plans to scale up AI use as rapidly as possible; and militaries are building many of their AI systems on existing civilian and commercial technologies. Civilian developers may not want their algorithms, platforms, or technologies used in war, but simply avoiding interaction with the military will not prevent that from happening. If civilians want a say in how militaries adopt, or do not adopt, their technologies, then they need to engage with their country’s military leadership.
Second, although times of war may lead to militaries deploying new weapons too quickly and without proper testing, for the most part, militaries are among the most conservative groups in the world when it comes to deploying new systems. Militaries need to know that their systems will work, that the systems will be reliable, that they can trust the systems to function as intended, that they can figure out why something went wrong—essentially, militaries need AI to have all of the qualities that civilians are looking for as well. However, because lives are at stake, militaries are even stricter about how accurate, trustworthy, and transparent the AI systems must be.
Third, most militaries are desperate for technical knowledge and expertise, and they would welcome guidance and recommendations from civilian developers to ensure that AI-enabled systems are developed well. Many of the ethical issues that arise with civilian and commercial AI do so because technology is deployed before it is ready and before sufficient testing has been done to understand how humans will use the systems. Militaries are less likely to allow subpar systems to be deployed, which means that working with militaries, or at least including military use in AI discussions, could help address some (but not all!) ethical AI issues. Again, if civilians are concerned about how a military might use an AI capability, then engaging with the military about that concern is the most effective way to influence the outcome.
Fourth, many military uses of AI have nothing to do with weapons. Many are similar to—or exactly the same as—civilian and commercial uses: militaries are using AI in activities such as logistics, HR, payroll, and navigation. Again, militaries are among the most cautious organizations when it comes to adopting new technologies, and they often have the highest standards for deployment. Aligning baseline expectations with military standards could actually raise the bar for civilian and consumer standards.
Bringing It All Together
In the United States, the Department of Defense’s adoption of the Responsible AI Principles led to significant efforts around the responsible development and use of AI within the government and especially within the different branches of the military. Similar efforts were then undertaken and expanded upon by NATO and many individual countries, including, most recently, the publication of the Global Commission on Responsible AI in the Military Domain.
At a smaller scale, standards around Human Readiness Levels (HRLs) were recently adopted by the U.S. Department of Defense. Human error and misuse (intentional and unintentional) account for the majority of problems when deploying technologies. HRLs provide guidelines for ensuring that humans are incorporated into every lifecycle stage of a system, and they can ensure that the system does not advance to the next stage until human users have been properly accounted for, regardless of how advanced the technology itself is. This is a practice that could (and should) be adopted by consumer AI developers.
Similarly, the IEEE-SA Lifecycle Framework for AI in defense systems was designed for AI and autonomy in defense, but it can and should be incorporated into civilian and consumer AI systems. The Lifecycle Framework was based on a set of ethical and technical challenges identified in the development and use of AI in defense, and nine of the ten categories of challenges translate directly to civilian and commercial uses of AI.**
These are just a few examples of the many efforts by organizations to develop and implement best practices. With more attention and public awareness placed on these efforts, and with more collaboration among militaries, technical organizations, and their memberships, all of these groups would be better positioned to develop and support responsible AI practices.
*Responsible AI is also a commonly stated goal within industry; however, commercial and profit interests are more likely to override responsibility within industry, though this can and does also happen within governments and militaries.
**Both of these projects were group efforts led by the author.