Key Policy and Peer-Reviewed Research

Main Policy Papers

Embodied AI: China’s Big Bet on Smart Robots, November 24, 2025 (with Pavlo Zvenyhorodskyi)

While Washington and most of Silicon Valley focus primarily on scaling large language models (LLMs) like ChatGPT and digital AI applications, Beijing has placed a fundamentally different bet. It believes that true AI dominance will come from systems capable of autonomous operation in the physical world—AI-powered robotics, commonly known as embodied AI.

China’s AI Policy at the Crossroads: Balancing Development and Control in the DeepSeek Era, July 17, 2025 (with Matt Sheehan)

China’s recent policy evolution reflects a fundamental pattern in the CCP’s strategic thinking: when China perceives itself as technologically vulnerable, it leverages technology as an engine for economic growth; when it feels strong, it reasserts control through heavy-handed ideological measures. These competing imperatives of control and growth have shaped Chinese AI policy since top leadership began paying close attention to AI in 2017, evolving cyclically with China’s self-perception of its relative technological capabilities and economic position.

How Some of China’s Top AI Thinkers Built Their Own AI Safety Institute, June 16, 2025 (with Karson Elmgren and Oliver Guest)

The February 2025 launch of the China AI Safety and Development Association (CnAISDA, 中国人工智能发展与安全研究网络)—China’s self-described counterpart to the AI safety institutes (AISIs) that the United Kingdom, United States, and other countries have launched over the last two years—offers a critical data point on the state of China’s rapidly evolving AI safety conversation.

The Closing Window to Win, May 7, 2025 (with Sarosh Nagar and Nitarshan Rajkumar)

The United States and its allies have a closing window to win on AI. Export controls have bought the United States and its allies time to win over the global AI market while we still hold advantages in both hardware and software. To complement these export controls, we need to run faster and extend our innovation lead. We therefore need a strategy of “full-stack diffusion” that will durably embed American leadership in AI across the world and widen our lead over China. That means promoting products across our AI stack in a nuanced and strategic manner, using a combination of financing, export controls, standards-setting, and other mechanisms.

Peer-Reviewed Papers

Advancing Science- and Evidence-Based AI Policy, Science, July 31, 2025 (with Rishi Bommasani et al.)

Policymakers around the world are grappling with how to govern increasingly powerful artificial intelligence (AI) technology. Some jurisdictions, like the European Union (EU), have made substantial progress enacting regulations to promote responsible AI. Others, like the administration of US President Donald Trump, have prioritized “enhancing America’s dominance in AI.” Although these approaches appear to diverge in their fundamental values and objectives, they share a crucial commonality: Effectively steering outcomes for and through AI will require thoughtful, evidence-based policy development. Though it may seem self-evident that evidence should inform policy, this is far from inevitable in the inherently messy policy process. As a multidisciplinary group of experts on AI policy, we put forward a vision for evidence-based AI policy, aimed at addressing three core questions: (i) How should evidence inform AI policy? (ii) What is the current state of evidence? (iii) How can policy accelerate evidence generation?

In Which Areas of Technical AI Safety Could Geopolitical Rivals Cooperate?, FAccT, June 23, 2025 (with Ben Bucknall and Saad Siddiqui et al.)

International cooperation is common in AI research, including between geopolitical rivals. While many experts advocate for greater international cooperation on AI safety to address shared global risks, some view cooperation on AI with suspicion, arguing that it can pose unacceptable risks to national security. However, the extent to which cooperation on AI safety poses such risks, as well as provides benefits, depends on the specific area of cooperation. In this paper, we consider technical factors that impact the risks of international cooperation on AI safety research, focusing on the degree to which such cooperation can advance dangerous capabilities, result in the sharing of sensitive information, or provide opportunities for harm.

Other Policy Papers

  • “Examining AI Safety as a Global Public Good: Implications, Challenges, and Research Priorities.” Oxford Martin AI Governance Initiative. March 11, 2025 (with Kayla Blomquist and Lis Siegel et al.)

  • “The Future of the AI Summit Series.” Oxford Martin AI Governance Initiative. February 3, 2025 (with Lucia Velasco et al.)

  • “The Future of International Scientific Assessments of AI’s Risks.” Carnegie Endowment for International Peace. August 27, 2024 (with Hadrien Pouget and Claire Dennis et al.)