U.S.-China AI Cooperation Under Trump 2.0

November 24, 2025
By Kevin Werbach

This article was published alongside the PWH Report “The Future of Artificial Intelligence Governance and International Politics.”

The United States and China, as the world’s two economic, military, and technological superpowers, recognize the paramount importance of artificial intelligence (AI) as the technology that will shape global affairs in the coming decades. Neither can fully address its risks alone. With the United States, under President Donald Trump, aggressively dismissing global governance for AI, the influence of China within international organizations is bound to increase. But the influence of those organizations may decrease. Hard challenges of AI oversight will require direct involvement of both dominant players in AI development and deployment.  

What Kind of Race is AI? 

The Biden administration viewed AI as a race that could be led, though not necessarily won. Through the CHIPS Act and since-rescinded Diffusion Rule, it sought to delay Chinese frontier AI development to the extent possible. At the same time, it worked to promote “Safe, Secure, and Trustworthy” AI development domestically. As [then-U.S. President Joe] Biden’s 2023 Executive Order with that title states: “Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks.” If the AI race has no visible endpoint, the best strategy is to take whatever steps can help maintain local advantage, while working to address the challenges of AI that both nations face.  

Interestingly, this vision mirrors the way China appears to think about AI. Chinese leader Xi Jinping has exhorted China to “take the lead and win advantages in the field of artificial intelligence,” rather than framing AI as a war to be won or lost. For China, where the vision of artificial general intelligence (AGI) is less central in the minds of leading AI developers, AI is a means rather than an end. China’s AI+ framework, released shortly after the Trump administration’s own AI Action Plan in summer 2025, describes AI as “a new focus of international competition and a powerful engine of economic development.” At the same time, China has been more aggressive than the United States in adopting laws governing AI, requirements for deployments of systems such as public-facing AI chatbots, and ethical guidelines for AI.

The Trump administration has a different vision. While there is more commonality between the two administrations on substance than their radically different styles and rhetoric might suggest, the basic calculus of Trump 2.0 AI policy is that AI is a race that can and should be won. Its AI Action Plan is subtitled “Winning the Race,” and it boldly declares that “Winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security.”

If winning the AI race is paramount, then ensuring that the AI we deploy is free from bias, protective of privacy, and resistant to misinformation matters less, except to the extent it bears on the outcome of the race. Thus, the Trump administration has not only rolled back U.S. efforts on AI governance, it has attacked other countries and multinational organizations seeking coordination in those areas. The global infrastructure of AI Safety Institutes and the efforts under NIST to promote AI governance standards have been narrowed to focus on cybersecurity and alignment topics. These are areas where there is recognition that catastrophic harms might occur, undermining national security.

At the same time, the Trump approach has led the United States to accept greater commercial cooperation between U.S. and Chinese AI providers. Hence the recent deal to transfer the Chinese social media platform TikTok to U.S. control, despite prior Chinese statements ruling out such an option. While some may attribute this entirely to commercial interests, the shift is actually consistent with a broader worldview.

If AI is a global race to be won, with the winner dominating the world, the siloed configuration of U.S. and Chinese ecosystems is only temporary. All that matters is the endpoint. The Trump administration, while not entirely removing the CHIPS Act restrictions, is more interested in promoting the diffusion of U.S. AI infrastructure, even if it is utilized by Chinese entities or other nations outside the U.S. sphere of influence. In the words of White House Senior Policy Advisor for AI Sriram Krishnan, a former venture capitalist at the firm Andreessen Horowitz, the race is between two “AI stacks,” and the U.S. objective is “maximizing tokens inferenced by American models running on American hardware all over the world.” This, the Trump administration believes, will give the United States a leg up on innovation while maintaining its leverage to exercise control over ever-more-powerful AI.

“American AI” is therefore now conceptualized as AI running on U.S.-controlled foundations (principally semiconductors and frontier models), rather than as AI from U.S. allies and others who accept substantive commitments to join the U.S. sphere. The new U.S. approach to AI in the second Trump administration changes the landscape for potential U.S.-China cooperation. Out go the efforts to standardize and gain commitments globally from the private sector on AI responsibility and governance, although the European Union and international organizations will continue their efforts in those areas. One area for cooperation remains across the two administrations. Four new opportunities emerge.

Frontier AI Safety 

Even during the Cold War, the United States and the Soviet Union found shared interests such as preventing accidental nuclear escalation, creating a regime to restrain nuclear proliferation, and limiting chemical and biological weapons. The key opportunity was that both sides, despite being adversaries, agreed that scenarios such as a nuclear launch due to a misunderstanding would have devastating consequences.  

In the AI context, the analogue is frontier AI safety. Increasingly powerful AI systems run the risk of facilitating catastrophic harms, either through accidents or use by threat actors. In late 2024, Biden and Xi affirmed that decisions on the use of nuclear weapons should remain under human control. Beyond this, there are already a number of governmental and non-governmental touch points between the United States and China. For example, the International Dialogues on AI Safety bring together top AI scientists and academics from the two countries, in a series of meetings to address extreme AI risks, including sessions in Beijing and Shanghai. China has also this year established a counterpart to the AI safety institutes that promote technical collaboration among nations. Both the United States and China have a stake in ensuring that advanced AI systems do not act in ways their creators did not intend.

The challenge is that many other dangerous uses of frontier AI also have military or intelligence value. Using AI to create zero-day cyberattacks, to recruit assets through hyper-persuasive capabilities, or to disrupt enemy AI systems through prompt injection and other adversarial techniques is threatening when done by hostile states or groups. Neither the United States nor China is likely to give up its own ability to do these things.

Open Source 

The two countries are now converging on support for open source (technically open weights) AI development. Interestingly, this view so far does not extend to U.S. AI labs. Meta is the only one with a strong commitment to open sourcing its most powerful models, and it appears to be stepping back from that position. However, the U.S. frontier AI labs do release less-powerful AI models as open source. In China, leading AI labs such as DeepSeek are all-in on open source, and major firms such as Alibaba and Baidu have shifted toward open sourcing their best models, no doubt with the encouragement of the Chinese state.

A world with more open-source AI will create opportunities for U.S.-China cooperation on standards and licensing questions. Research on thorny issues of how to address risks from powerful open-source models in the hands of threat actors would also be valuable. For example, is it possible to implement “kill switches” or digital fingerprints that would allow tracing of which AI models were used by hostile actors?  

Algorithmic Governance 

A second area where the two sides may find common ground under their current leadership is algorithmic governance. For many in the West, the prospect of governments using AI aggressively for social engineering is terrifying. The European Union’s AI Act, for example, includes social scoring and indiscriminate use of biometric identification by law enforcement among the small list of “prohibited practices” in Article 5. In China, however, algorithmic governance has a long history, and received significant attention as part of the multi-faceted Social Credit initiative. Facial recognition is widely used by the government, and AI has been used creatively to enforce Chinese policies toward the Uyghur Muslim minority in Xinjiang. In the second Trump administration, the erstwhile Department of Government Efficiency (DOGE) aggressively pushed for data consolidation and use of AI to transform the operation of federal agencies. Law enforcement agencies such as Immigration and Customs Enforcement (ICE) are pushing the use of facial recognition and generative AI as they seek to ramp up their activities.

If the United States finds itself more closely aligned with the Chinese approach to algorithmic governance than with that of most other Western democracies, it may create opportunities for coordination between the two countries. These could include joint pushback on efforts to limit governmental uses of AI along these lines, as well as information sharing between law enforcement agencies.

Assessments 

A third area of potential collaboration lies in developing and standardizing mechanisms for evaluating AI systems. The United Kingdom and Singapore, in particular, have focused on such measures as a central aspect of their national AI governance strategies. However, the United States (through work under the National Institute of Standards and Technology (NIST) at the Department of Commerce) and China (through chatbot licensing assessments supervised by the Cyberspace Administration of China, and an “algorithm registry” mechanism required by law) have also done significant work.  

Ultimately, to be effective, AI assessments will need to be standardized and able to function across borders, given the global nature of AI platforms. They will involve a combination of governmental and private entities. Cooperation between the United States and China in this area would largely be technical in nature, or would involve approval of standards for local acceptance of assessments.  

Regulatory Innovation 

A final surprising area for collaboration is in institutional design for AI governance. Traditional forms of regulation may not be effective for AI: the pace of change is too fast, AI will be embedded into too many activities, and there are too many uncertainties to adopt rigid rules. Addressing AI’s dangers without unduly slowing innovation is also essential, especially when AI development is so central to global competitiveness. As a result, countries around the world are experimenting with legal and governance approaches to AI.

The Trump administration is skeptical of broad AI regulation, and Republicans in Congress attempted to pass a ten-year moratorium on state AI laws. However, key figures on the right have floated novel proposals for AI regulatory regimes, including Senator Josh Hawley’s proposals on liability for AI providers, Senator Ted Cruz’s regulatory waiver proposal, and U.S. AI Action Plan lead author Dean Ball’s private governance concept. China has also shown a willingness to explore new ideas, including a scholar’s draft of AI legislation that was put forth as a stalking horse in early 2025. I co-organized an academic workshop on global AI governance at Peking University in May 2025, with scholars from both the United States and China, and found great interest on both sides in new ideas for addressing AI’s novel challenges. 

Conclusion 

The kinds of initiatives that might increase the likelihood of U.S.-China AI dialogue are likely to be familiar ones: finding shared interests even among adversaries and building networks among experts on both sides. And there remains significant technological and economic uncertainty as to whether the bipolar configuration of advanced AI will endure. Overall, though, we appear to be moving into a period where the United States and China are rhetorically further apart on AI, but may actually be closer than they seem in key areas.

About the author

Kevin Werbach is the Liem Sioe Liong / First Pacific Company professor and chair of the Department of Legal Studies and Business Ethics at the Wharton School, University of Pennsylvania.