
Concordia AI: 2025 Mid-Year Impact Report

Our mission is to ensure that AI is developed and deployed in a way that is safe and aligned with global interests. We advance global AI safety by conducting research, advising leading AI companies and policymakers, and promoting international dialogue. Below are some of our key accomplishments from January to July 2025 (see our 2024 highlights here).

Advancing international coordination on AI safety and governance

Research Impact and Engagement

  • International AI Safety Report:
    • Head of International AI Governance Kwan Yee Ng (吴君仪) contributed to the first International AI Safety Report as one of the writers and is continuing on as a writer for the 2026 edition of the International AI Safety Report. Chaired by Turing Award winner Yoshua Bengio, the report is supported by an expert panel representing 30 countries including China as well as experts from the EU and the UN. Concordia AI also provided feedback to the report, including editing the Chinese translation of its summary materials.
  • China AI safety and governance analysis:
    • Published the “State of AI Safety in China 2025” report, covering developments from May 2024 to June 2025. The report was cited by a number of media outlets, including Wired, Bloomberg, and The People’s Daily (the largest newspaper in China).
    • CEO Brian Tse (谢旻希) authored an op-ed titled “China Is Taking AI Safety Seriously. So Must the U.S.” in Time Magazine; International AI Governance Senior Research Manager Jason Zhou and International AI Governance Part-time Researcher Gabriel Wagner analyzed the AI safety implications of China’s April Politburo study session for the Stanford DigiChina Forum; and Concordia AI was interviewed by CGTN on China’s approaches to AI innovation and global governance.
    • Published over 10 new “AI Safety in China” newsletter issues, reaching over 1,400 subscribers across governments, top AI labs, and AI safety institutes.
  • Singapore AI safety and governance analysis:

Multilateral Initiatives

  • Global AI Summit series: Participated in the French AI Action Summit, including:
    • Brian Tse was invited as a Chinese civil society representative to the AI Action Summit in the Grand Palais.
    • Co-hosted the workshop “AI Safety as a Collective Challenge” on the sidelines of the Summit, alongside the Carnegie Endowment for International Peace (CEIP), Oxford Martin AI Governance Initiative (AIGI), Tsinghua University Center for International Security and Strategy (CISS), and Tsinghua Institute for AI International Governance (I-AIIG). During the event, Concordia AI co-published the report “Examining AI Safety as a Global Public Good: Implications, Challenges, and Research Priorities.”
    • Invited to a closed-door seminar hosted by the China AI Safety & Development Association (CnAISDA). During the subsequent public side event, Turing Award winner Andrew Yao cited Concordia AI’s State of AI Safety in China report series when describing the increase in AI safety research at Chinese institutions.
    • Attended the inaugural conference of the International Association of Safe and Ethical AI (IASEAI), where Brian Tse spoke on the panel “Global Perspectives on AI Safety and Ethics.”
    • Brian Tse delivered a presentation at the France-China AI Association (Association d’Intelligence Artificielle France-Chine), which was cited by outlets including Xinhua.
  • United Nations:
    • Provided written inputs and participated in consultations regarding the UN’s Independent International Scientific Panel on AI and Global Dialogue on AI.
    • Brian Tse spoke on the panel “From Principles to Practice—Governing Advanced AI in Action” at the AI for Good Summit 2025.
  • Global AIxBiosecurity governance: We contributed to a number of critical global discussions at the intersection of AI and biosecurity.
  • International expert consensus and statements:
    • Brian Tse participated in the International Dialogues on AI Safety-Shanghai, signing the Shanghai Consensus on Ensuring Alignment and Human Control of Advanced AI Systems, alongside a Nobel laureate, Turing Award winners, and senior policymakers.
    • Brian Tse and Kwan Yee Ng contributed to and signed The Singapore Consensus on Global AI Safety Research Priorities during the Singapore Conference on AI 2025 (SCAI).

Convening AI safety conferences in China, Singapore, and globally

  • World AI Conference (WAIC), Shanghai
    • Hosted the AI Safety and Governance Forum at China’s most influential AI conference.
      • Convened around 30 distinguished experts, including Yoshua Bengio; United Nations Under-Secretary-General Amandeep Singh Gill; Shanghai AI Lab Director ZHOU Bowen (周伯文); Special Envoy of the President of France for AI Anne Bouverot; Distinguished Professor of Computer Science at UC Berkeley Stuart Russell; Peng Cheng Laboratory Director Academician GAO Wen (高文); CEO of the Partnership on AI Rebecca Finlay; Shanghai Artificial Intelligence Strategic Advisory Expert Committee member Academician HE Jifeng (何积丰); and many more leading figures from government, industry, and research.
      • Over 200 audience members joined in person, the livestream drew over 14,000 views, and the forum received media coverage from Bloomberg, Wired, Caixin, IT Times, and Tech Review Africa.
    • Served as official AI Governance Advisor for WAIC 2025.
    • Co-hosted a number of frontier AI safety workshops on the sidelines of WAIC:
      • Co-hosted a workshop on “Early Warning and Crisis Coordination for Advanced AI” with the Carnegie Endowment for International Peace, Oxford Martin School AI Governance Initiative, Oxford China Policy Lab, Tsinghua University Center for International Security and Strategy (CISS), and Tsinghua University Institute for AI International Governance (I-AIIG).
      • Co-hosted a workshop on “Convergence of AI and Biological Risks” with the Tianjin University Center for Biosafety Research.
      • Hosted a workshop on “Towards International AI Risk Management Standards.”
      • Co-hosted the “International Workshop on AI Deception Risks and Governance” with Fudan University and the Safe AI Forum.
  • Beijing Academy of AI Conference 2025
    • Co-hosted the “AI Safety Forum” with the Beijing Academy of Artificial Intelligence (BAAI) at the BAAI Conference 2025. The forum brought together leading technical experts from institutions including MIT, Fudan University, Singapore Management University, and Tsinghua University to build scientific consensus on technical evaluations for AI “red lines.”
  • Asia Tech x Singapore, 2025
    • Organized the AI Risk Management Workshop, with support from Singapore’s Infocomm Media Development Authority, bringing together 20+ global experts across policy, industry, AI assurance, and academia to explore actionable risk management approaches for AI systems.
  • International Conference on Learning Representations (ICLR 2025), Singapore
    • Co-hosted and participated in a series of events, including:
      • Co-hosted the “Frontier Governance Exchange” with the Singapore AI Safety Hub, Lorong AI, and the Safe AI Forum.
      • Co-convened the “Misalignment and Control Workshop” and a 130+ person AI Safety Social with FAR.AI, the Safe AI Forum, and Singapore AI Safety Hub.
      • Kwan Yee Ng presented on AI Safety in China at FAR.AI’s Singapore Alignment Workshop 2025.


Advising leading AI companies and policymakers in China

  • National Standards and Policy Guidance: Concordia AI is a member of key national and industry technical committees, contributing to the development of China’s AI safety standards.
    • National Information Security Standardization Technical Committee (SAC/TC260): As part of SAC/TC260 Special Working Group on Emerging Technology Safety, Concordia AI contributed to the standard for “Classification and Grading Methods for the Security of Artificial Intelligence Applications.”
    • National Information Technology Standardization Technical Committee (SAC/TC28/SC42): As a member of the AI Subcommittee, Concordia AI contributed to the “Artificial intelligence—Risk management capability assessment.”
    • Ministry of Industry and Information Technology AI Standardization Committee (MIIT/TC1): Concordia AI joined the Working Group on AI Safety Governance.
    • Guangdong-Hong Kong-Macao Greater Bay Area local standards: As a member of the Greater Bay Area working group of SAC/TC28/SC42, Concordia AI played a key role in the development of the Shenzhen local standard “Technical Framework for Value Alignment of Pre-trained AI Models.”
  • Frontier AI Safety Risk Management and Best Practices:
    • Co-published the “Frontier AI Risk Management Framework v1.0” with Shanghai AI Lab. It is China’s first comprehensive framework for managing severe risks from general-purpose AI models.
      • We propose a robust set of protocols designed to empower general-purpose AI developers, with comprehensive guidelines for proactively identifying, assessing, mitigating, and governing a set of severe AI risks that pose threats to public safety and national security.
      • The Framework outlines a set of unacceptable outcomes (red lines) and early warning indicators for escalating safety and security measures (yellow lines) for areas including: cyber offense, biological threats, large-scale persuasion and harmful manipulation, and loss of control risks.
    • Signed strategic partnership agreements with several leading Chinese general-purpose AI developers, providing advice on AI safety and risk management best practices.
    • Invited to present on frontier AI risk management during a closed-door workshop at the AI Industry Alliance of China’s 15th Plenum Meeting.
    • Co-hosted a workshop on “EU Code of Practice & Industry Best Practices: Towards a Global Standard for AI Risk Management, Safety and Security” with SaferAI, the Oxford Martin AI Governance Initiative, and the Safe AI Forum.
  • Frontier AI Risk Monitoring and Evaluation:
    • Contributed to the “Frontier AI Risk Management Framework in Practice: A Risk Analysis” technical report led by Shanghai AI Lab. We assessed critical risks from more than 20 frontier LLMs in the following areas: cyber offense, biological and chemical risks, persuasion and manipulation, uncontrolled autonomous AI R&D, strategic deception and scheming, self-replication, and collusion.
    • Soft-launched an AI Risk Monitoring Platform designed to track and mitigate frontier AI risks across the domains of cyber offense, biological threats, chemical threats, and loss of control. The platform evaluates 34 frontier LLMs from 11 leading developers across the U.S., China, and France, using 18 open-source benchmarks. Key outputs include a risk index dashboard and a detailed technical report.
    • AI Safety Research Manager DUAN Yawen (段雅文) co-authored “Bare Minimum Mitigations for Autonomous AI Development.”
  • AIxBiosecurity Governance:
    • Published Chinese language report “Responsible Innovation in AI x Life Sciences” with Tianjin University’s Center for Biosafety Research and Strategy. This 70-page deep dive draws on 300+ sources to explore AI-biotech convergence, benefits, risks, and governance recommendations for diverse stakeholders.
      • Head of AI Safety and Governance (China) FANG Liang (方亮) presented the report at a biosecurity seminar hosted by China’s National Key Laboratory of Synthetic Biotechnology.
      • Presented the report at the 2025 International Symposium on Global Biosecurity Governance and Cooperation, co-hosted by the National Biosecurity Expert Committee of China, Guangzhou Laboratory, and China Foreign Affairs University.
    • Invited to participate in the “Closed-door Seminar on DNA Synthesis Screening Technology and Policy” held at China Foreign Affairs University.
  • WeChat Newsletter Publications:
    • Released over 59 new posts on our WeChat Official Account, reaching over 4,600 subscribers across China’s AI ecosystem, including policymakers, industry professionals, academic researchers, and the public.

Organizational updates

  • Organizational growth:
    • Following the establishment of our Singapore office, our team expanded from 8 to 12 members, welcoming our first Singapore-based staff member.
  • International partnerships:
  • Branding and Communication:
    • We launched a new English organizational website with refreshed branding to showcase our work. This is complemented by an updated brochure and a dedicated WeChat post to provide an introduction to our mission and activities.
