Examining AI Safety as a Global Public Good

As artificial intelligence systems grow more powerful and integrated into society, their safe development presents a critical governance challenge. This report, led by Concordia AI, the Oxford Martin AI Governance Initiative, and Carnegie Endowment for International Peace, examines whether framing AI safety as a “public good” can help address these challenges, drawing on lessons from climate change, nuclear safety, and global health governance. Our analysis identifies three key coordination challenges:

  • Balancing collective responsibility with accountability for the stakeholders who hold disproportionate power in AI development
  • Managing the tension between sharing AI safety best practices and limiting the spread of potentially dangerous capabilities
  • Ensuring that safety requirements neither constrain AI’s potential for sustainable development nor perpetuate global inequities

Kwan Yee Ng, Head of International AI Governance at Concordia AI, and Brian Tse, Founder and CEO of Concordia AI, along with 14 other scholars from around the world, co-authored this report.

Authors: Kayla Blomquist, Elisabeth Siegel, Ben Harack, Kwan Yee Ng, Tom David, Brian Tse, Charles Martinet, Matt Sheehan, Scott Singer, Imane Bello, Zakariyau Yusuf, Robert Trager, Fadi Salem, Seán Ó hÉigeartaigh, Jing Zhao, Kai Jia