Concordia AI 2025 Impact Highlights

Throughout 2025, frontier AI capabilities advanced rapidly. But the same capabilities that make these systems so useful also introduce new societal risks. Real-world evidence for several of these risks continues to grow—across malicious use, malfunctions, and systemic threats.

Against this backdrop, Concordia AI’s mission remains as critical as ever: ensuring that AI is developed and deployed safely and in alignment with global interests. We advance this mission through research, advisory work with leading AI companies and policymakers, and promotion of international dialogue.

Below are some of our key accomplishments in 2025. We’ve organized this list according to key axes of our work: international convenings, international research and public engagement, and contributing to China’s domestic AI safety and governance landscape. We end with organizational updates. For previous highlights, see our 2023, 2024, and mid-2025 reports.

International convenings

  • Convening international AI safety dialogues in China, Singapore, and globally
    • Hosted the AI Safety and Governance Forum at the World AI Conference (WAIC). This was Concordia AI’s flagship convening of 2025, bringing together around 30 distinguished experts from around the world, including Turing Award winner Yoshua Bengio; United Nations Under-Secretary-General Amandeep Singh Gill; Shanghai AI Lab Director ZHOU Bowen (周伯文); Special Envoy of the President of France for AI Anne Bouverot; Distinguished Professor of computer science at UC Berkeley Stuart Russell; and Peng Cheng Laboratory Director GAO Wen (高文). The Forum drew 200+ in-person attendees and 14,000+ livestream views, and was covered by multiple media outlets, including Bloomberg, Wired, Caixin, IT Times, and Tech Review Africa. We also hosted or co-hosted multiple side events and expert workshops and served as official AI Governance Advisor for WAIC 2025.
    • Co-hosted two international workshops with the Carnegie Endowment for International Peace, the Oxford Martin School AI Governance Initiative, the Oxford China Policy Lab, Tsinghua University Center for International Security and Strategy (CISS), and Tsinghua University Institute for AI International Governance (I-AIIG). The first workshop focused on “AI Safety as a Collective Challenge” and was held at the French AI Action Summit in January. The second focused on “Early Warning and Crisis Coordination for Advanced AI” and was held at the World AI Conference in Shanghai in July.
    • Co-hosted the AI Safety Forum at the Beijing Academy of AI Conference 2025, where technical experts from institutions including MIT, Fudan University, Singapore Management University, and Tsinghua University worked to build consensus on AI red lines.
    • Organised an AI Risk Management Workshop on the sidelines of Asia Tech x Singapore (May 2025), with the support of the Infocomm Media Development Authority of Singapore (IMDA). The workshop convened 20+ AI safety experts spanning policy, industry, AI assurance, and academia, based across Singapore, China, the US, the UK, and the EU.
    • Co-hosted a “Frontier AI in Cybersecurity” workshop with the Nanyang Technological University CyberSG R&D Programme Office and UC Berkeley RDI, bringing together 25 leaders across government, law enforcement agencies, and leading AI labs; and organised the AI Governance in Singapore panel at Lorong AI, on the sidelines of Singapore International Cybersecurity Week 2025.
    • Co-hosted events at the International Conference on Learning Representations (ICLR) 2025 in Singapore: a “Frontier Governance Exchange” with Singapore AI Safety Hub, Lorong AI, and Safe AI Forum; an AI safety social attended by 130+ participants; and a “Misalignment and Control” workshop with FAR.AI and Singapore AI Safety Hub.

Group photo after the WAIC AI Safety and Governance Forum morning session.

International research and public engagement

Participants of the 2025 Singapore Conference on AI: International Scientific Exchange on AI Safety. Source: The Singapore Consensus on Global AI Safety Research Priorities.

  • Singapore-related research
    • Published the State of AI Safety in Singapore report, the first comprehensive analysis of Singapore’s AI safety ecosystem, led by our International AI Governance Project Manager Jonathan Lee. He also presented the report at an AI governance in Singapore panel organised by Concordia AI in Singapore, at a talk organised by the Singapore AI Safety Hub, and at EAGxSingapore.

Contributing to China’s domestic AI safety and governance landscape

  • Frontier AI safety risk management and best practices:
    • Co-published the Frontier AI Risk Management Framework v1.0 with Shanghai AI Lab. This is China’s first comprehensive framework for managing severe risks from general-purpose AI models.
      • The framework proposes a robust set of protocols designed to support general-purpose AI developers, with comprehensive guidelines for proactively identifying, assessing, mitigating, and governing a set of severe AI risks that pose threats to public safety and national security.
      • The framework outlines a set of unacceptable hazards (red lines) and early warning indicators for escalating safety and security measures (yellow lines) for areas including: cyber offense, biological threats, large-scale persuasion and harmful manipulation, and loss of control risks.
      • The framework was cited in various media outlets, including Caixin, IT Times, Xinhua, TIME, Sina, and the Sinica Podcast.
    • Signed strategic partnership agreements with several leading Chinese general-purpose AI developers to provide advice on AI safety and risk management best practices.
    • Provided comprehensive advice on compliance with the EU AI Act and General-Purpose AI Code of Practice to leading Chinese general-purpose AI developers. This work included co-hosting a workshop on “EU Code of Practice & Industry Best Practices: Towards a Global Standard for AI Risk Management, Safety and Security” with SaferAI, the Oxford Martin AI Governance Initiative, and the Safe AI Forum.
    • Presented on frontier AI risk management during a closed-door workshop at the China AI Industry Alliance’s 15th Plenum Meeting, in the context of its Disclosure of Practices on the AI Security and Safety Commitments.
    • Presented on risk management for open-weight frontier models at a workshop (“Academic Symposium on AI Industry Development and Legislation”) at Tongji University.

  • Frontier AI risk monitoring and evaluation:
    • Contributed to the “Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report” led by Shanghai AI Lab. We assessed critical risks from more than 20 frontier LLMs in the following areas: cyber offense, biological and chemical risks, persuasion and manipulation, uncontrolled autonomous AI R&D, strategic deception and scheming, self-replication, and collusion. The report was covered by Jack Clark’s Import AI.
    • Launched the AI Risk Monitoring Platform, designed to track and mitigate frontier AI risks across the cyber offense, biological threat, chemical threat, and loss-of-control domains. The platform evaluates 50 frontier LLMs from 15 leading developers across the US, China, and France, using 18 open-source benchmarks. Key outputs include a risk index dashboard and a detailed technical report. This project was spearheaded by our AI Safety Research Senior Manager WANG Weibing (王伟冰).
    • The platform received coverage from several major media outlets, including People’s Daily, South China Morning Post, Xinhua’s Economic Information Daily, and IT Times.