State of AI Safety in Singapore

July 2025

The State of AI Safety in Singapore report, published in July 2025, presents the first comprehensive analysis of Singapore’s AI safety ecosystem. It demonstrates how smaller, resource-constrained states like Singapore can still play a meaningful role in shaping emerging AI safety norms, even as global discussions often focus on a few countries developing frontier-scale models.

The report covers Singapore’s multi-layered domestic governance approach—spanning voluntary frameworks, targeted legislation, national standards, and testing and evaluation efforts—as well as its role in global AI governance through multilateral forums, regional initiatives, and bilateral partnerships. It also explores the local AI assurance market, profiles both domestic and foreign general-purpose AI developers, and maps the landscape of AI safety research across universities, government bodies, and research institutes. It further highlights key research themes and public attitudes toward AI risks in Singapore.

We hope this report provides policymakers and industry leaders with valuable insights into Singapore’s pragmatic approach to AI governance and its potential as a regional hub for AI safety testing. AI researchers may also find it useful in identifying opportunities for collaboration with Singapore-based institutions in advancing AI safety science.

This report is intended as a starting point for deeper investigation into specific areas of interest. Given the broad scope, some initiatives are summarized rather than described in full. As the research cut-off was early July 2025, developments after that date are not covered. Readers should treat the findings as a snapshot and consult primary sources for the most up-to-date information.

Update (Erratum): The Technical Research section has been revised to include information on the Generative AI Lab (GrAIL) at Nanyang Technological University.

Authors: Jonathan Lee, Jason Zhou, Kwan Yee Ng, and Brian Tse