In October 2023, China proposed the “Global AI Governance Initiative,” which observed that the rapid development of artificial intelligence technology worldwide is profoundly shaping economic and social development and the progress of human civilization, bringing enormous opportunities to the world. At the same time, AI technology also poses a variety of unpredictable risks and complex challenges. AI governance concerns the fate of all humanity and is a common issue faced by countries around the world.
The Chinese government attaches great importance to anticipating and preventing the potential risks of AI and has released multiple relevant policy documents. AI scientists and leaders, both in China and internationally, have begun to pay attention to the existential risks that AI may pose to human society. AI safety and governance have become defining issues of our time.
This report focuses on “frontier large models”: large-scale machine learning models capable of performing a wide range of tasks and matching or exceeding the capabilities of today’s most advanced models. These are currently the most prominent form of frontier AI, offering the greatest opportunities while also introducing new risks.
Given the rapid development of frontier large models, governments, international organizations, enterprises, research institutions, civil society organizations, and individual citizens around the world need to work together to promptly understand these risks and study possible countermeasures.
This report is organized into five main chapters, comprehensively examining trend forecasts, risk analysis, safety technologies, and governance approaches for frontier large models, and concluding with a summary and outlook. It aims to promote awareness and discussion of these issues and to contribute to the establishment of a responsible and inclusive global system for AI safety and governance.

