In light of the positive feedback on our previous report and the fast pace of AI developments since October 2023, we are pleased to release this updated edition. The original version, published ahead of the UK AI Safety Summit, drew wide interest across the policy, research, and media communities. Since then, we have briefed over a dozen organizations and seen increased global engagement with the report’s key insights.
This update builds on the 2023 edition, offering a concise synthesis of emerging trends in technical research, governance, and public discourse related to frontier AI safety. The report is structured into nine interconnected sections, covering areas such as technical safety work, international and domestic governance, lab and industry practices, expert perspectives, and public opinion.
Key takeaways:
- The relevance and quality of Chinese technical research for frontier AI safety have increased substantially, with growing work on frontier issues such as LLM unlearning, misuse risks of AI in biology and chemistry, and evaluating “power-seeking” and “self-awareness” risks of LLMs.
- Over the past 6 months, Chinese researchers have published nearly 15 technical papers on frontier AI safety per month on average. The report identifies 11 key research groups that have written a substantial portion of these papers.
- China’s decision to sign the Bletchley Declaration, issue a joint statement on AI governance with France, and pursue an intergovernmental AI dialogue with the US indicates a growing convergence of views on AI safety among major powers compared to early 2023.
- Since 2022, 8 Track 1.5 or Track 2 dialogues on AI have taken place between China and Western countries, 2 of which focused on frontier AI safety and governance.
- Chinese national policy and leadership statements show growing interest in developing large models while also emphasizing risk prevention.
- Unofficial expert drafts of China’s forthcoming national AI law contain provisions on AI safety, such as specialized oversight for foundation models and requirements for value alignment of AGI.
- Local governments in China’s 3 biggest AI hubs have issued policies on AGI or large models, primarily aimed at accelerating development while also including provisions on topics such as international cooperation, ethics, and testing and evaluation.
- Several influential industry associations have established projects or committees to research AI safety and security problems, but their focus is primarily on content and data security rather than frontier AI safety.
- In recent months, Chinese experts have discussed several specific AI safety topics, including “red lines” that AI must not cross to avoid “existential risks,” minimum funding levels for AI safety research, and AI’s impact on biosecurity.