
The Future of International Scientific Assessments of AI’s Risks

Aug 2024

On May 17, 2024, Turing Award winner Yoshua Bengio led the release of the interim “International Scientific Report on the Safety of Advanced AI.” To explore mechanisms for continuously advancing international scientific assessments of AI’s risks, the Oxford Martin AI Governance Initiative and the Carnegie Endowment for International Peace convened a group of experts at the intersection of AI and international relations in July 2024, resulting in the publication of the report “The Future of International Scientific Assessments of AI’s Risks.”

As a member of the writing team for the “International Scientific Report on the Safety of Advanced AI” and a representative of Concordia AI, Kwan Yee Ng contributed to the development and writing of this policy research report. At the workshop, Kwan Yee Ng emphasized the necessity of global inclusivity and the important role of the United Nations. The following is the report’s executive summary:

Executive Summary 

Managing the risks of artificial intelligence requires international coordination among diverse actors with varying interests, values, and perceptions. Drawing from experiences with global challenges like climate change, developing a shared, science-based understanding is a crucial first step toward collective action. In this context, the UK government led twenty-eight countries and the European Union (EU) in launching the International Scientific Report on the Safety of Advanced AI. 

How these actors can collaborate to achieve international scientific consensus on AI risks has received little public discussion, despite ongoing quiet diplomacy. The challenge is complex: AI’s impacts are harder to measure and predict than those of climate change, and they are deeply intertwined with geopolitical tensions and national strategic interests.

To explore the path forward, the Oxford Martin AI Governance Initiative and the Carnegie Endowment for International Peace convened AI and international relations experts in July 2024. Six major ideas emerged from this discussion:

  • No single institution or process can lead the world toward scientific agreement on AI’s risks. 
  • The UN should consider leaning into its comparative advantages by launching a process to produce periodic scientific reports with deep involvement from member states. 
  • A separate international body should continue producing annual assessments that narrowly focus on the risks of “advanced” AI systems, primarily led by independent scientists.  
  • There are at least three plausible, if imperfect, candidates to host the report dedicated to risks from advanced AI.
  • The two reports should be carefully coordinated to enhance their complementarity without compromising their distinct advantages. 
  • It may be necessary to continue the current UK-led process until other processes become established. 

Authors: Hadrien Pouget*, Claire Dennis*, Jon Bateman, Robert F. Trager, Renan Araujo, Haydn Belfield, Belinda Cleeland, Malou Estier, Gideon Futerman, Oliver Guest, Carlos Ignacio Gutierrez, Vishnu Kannan, Casey Mahoney, Matthijs Maas, Charles Martinet, Jakob Mökander, Kwan Yee Ng, Seán Ó hÉigeartaigh, Aidan Peppin, Konrad Seifert, Scott Singer, Maxime Stauffer, Caleb Withers, and Marta Ziosi