
Frontier AI Risk Management Framework (v1.5)

February 2026

The Frontier AI Risk Management Framework (the “Framework”) is a structured set of protocols designed to help general-purpose AI (GPAI) model developers proactively identify, assess, mitigate, and govern severe AI risks. Originally released in July 2025 as v1.0, the Framework was updated to Version 1.5 in February 2026. Key updates in this version include:


  • Expanded loss-of-control content: To better implement the core principles of “ensuring ultimate human control” and “proactive prevention and response,” we refined the loss-of-control risk scenarios and thresholds, and strengthened the agent oversight protocols and emergency response mechanisms. These updates aim to give academia and industry guidance for continuously monitoring these risks.
  • Operationalizing risk analysis: To make the Framework easier to put into practice, we have updated the risk analysis guidance for GPAI model providers. By clarifying the essential modules of this process, such as model evaluation, elicitation, risk modeling, and risk estimation, we aim to make it easier for developers to implement risk analysis best practices (see Section 3, Risk Analysis).
  • Enhanced interoperability: We have mapped our risk management measures against leading international and domestic AI risk management guidance, specifically China’s National TC260 AI Safety Governance Framework 2.0 and the EU Code of Practice for General-Purpose AI Models (Safety and Security Chapter). This helps developers adopt safety measures shared by major domestic and international regulatory guidance (see Appendix I and Appendix II).
