Guiding the governance of AI for a long and flourishing future

Concordia AI is a Beijing-based social enterprise focused on AI safety and governance.

Featured Research

Our new State of AI Safety in China Spring 2024 Report covers developments over the past 6 months across 6 domains: technical safety research, international governance, domestic governance, lab and industry governance, expert views on AI risks, and public opinion on AI. The report is best viewed as a PowerPoint presentation and can also be read via the PDF below.

Key Takeaways

  • The relevance and quality of Chinese technical research for frontier AI safety has increased substantially, with growing work on frontier issues such as LLM unlearning, misuse risks of AI in biology and chemistry, and evaluating “power-seeking” and “self-awareness” risks of LLMs. 
  • There have been nearly 15 Chinese technical papers on frontier AI safety per month on average over the past 6 months. The report identifies 11 key research groups that have written a substantial portion of these papers.
  • China’s decision to sign the Bletchley Declaration, issue a joint statement on AI governance with France, and pursue an intergovernmental AI dialogue with the US indicates a growing convergence of views on AI safety among major powers compared to early 2023. 
  • Since 2022, 8 Track 1.5 or 2 dialogues focused on AI have taken place between China and Western countries, with 2 focused on frontier AI safety and governance.
  • Chinese national policy and leadership show growing interest in developing large models while balancing risk prevention.
  • Unofficial expert drafts of China’s forthcoming national AI law contain provisions on AI safety, such as specialized oversight for foundation models and requirements for value alignment of AGI.
  • Local governments in China’s 3 biggest AI hubs have issued policies on AGI or large models, primarily aimed at accelerating development while also including provisions on topics such as international cooperation, ethics, and testing and evaluation.
  • Several influential industry associations established projects or committees to research AI safety and security problems, but their focus is primarily on content and data security rather than frontier AI safety. 
  • In recent months, Chinese experts have discussed several focused AI safety topics, including “red lines” that AI must not cross to avoid “existential risks,” minimum funding levels for AI safety research, and AI’s impact on biosecurity.

Updated May 15: Added Executive Summary in Slides 2-3.


AI is likely the most transformative technology that has ever been invented. Controlling and steering increasingly advanced AI systems is a critical challenge for our time. Concordia AI aims to ensure that AI is developed and deployed in a way that is safe and aligned with global interests. We provide expert advice on AI safety and governance, support AI safety communities in China, and promote international cooperation on AI safety and governance.

Our Approach

Advising on AI safety and governance

We aim to raise awareness of potential risks from AI and promote best practices to mitigate those risks. We participate in consultations on Chinese government policy, consult for leading AI labs, and collaborate on research reports with Chinese academia.

Supporting technical communities

We aim to create a thriving ecosystem that will drive progress towards safe AI. We convene seminars, run fellowships to train aspiring safety practitioners, and publish educational resources on AI safety for Chinese AI researchers in industry and academia.

Promoting international cooperation

We aim to align strategies for the safe development and deployment of AI globally. We facilitate dialogues between Chinese and international experts and advise multilateral organizations to further technical understanding of AI risks and safety solutions, develop policy ideas, and build trust across communities. 

Impact Highlights

BAAI AI Safety and Alignment Forum

Concordia AI was the official co-host and moderator of the AI Safety and Alignment Forum at the 2023 Beijing Academy of Artificial Intelligence (BAAI) Conference, which featured speakers such as Sam Altman, Geoffrey Hinton, and Andrew Yao. The Forum was the first-ever event at a major Chinese AI conference focused on AI safety and alignment; it was attended by over 500 people in person and viewed over 200,000 times online. See media coverage in the Wall Street Journal, WIRED, Forbes, AI Era, and Tencent Technology.

China-Western Exchanges on AI Safety

In 2023, Concordia AI facilitated trips to Beijing by UC Berkeley Prof. Stuart Russell, MIT Prof. Max Tegmark, and University of Cambridge Prof. David Krueger, coordinating their meetings with Chinese AI organizations such as Tsinghua’s Institute for AI International Governance and a dialogue with former Baidu President Ya-Qin Zhang. Pre-COVID, we also facilitated exchanges such as inviting Allan Dafoe, Founder of the Centre for the Governance of AI, and Skype co-founder Jaan Tallinn to speak at the Shanghai World AI Conference.

Global Perspective on AI Governance report

From 2019 to 2021, Concordia AI co-edited the Global Perspective on AI Governance report with the Shanghai Institute for Science of Science (SISS). Each report featured more than 50 articles on AI governance by experts from North America, Europe, Asia, and Latin America. The effort was cited by a committee member at the 2021 Two Sessions, one of China’s most important annual political and legislative gatherings. In 2022, these reports culminated in a book published by Tongji University Press.

Concordia AI Safety and Alignment Fellowship

Concordia AI is currently running China’s first AI Safety & Alignment Fellowship for ~20 machine learning graduate students from China’s top universities. We aim to inspire participants to contribute to AI safety and alignment research by discussing the potential risks and benefits of superintelligence and introducing them to cutting-edge research in the field. The Fellowship curriculum is adapted from the AGI Safety Fundamentals course designed by OpenAI’s Richard Ngo and features a series of online seminars plus a research project component.

Educational resources on AI safety

Concordia AI has worked with Chinese publishers to translate and promote English-language books on AI safety, such as Life 3.0, Human Compatible, and, most recently, The Alignment Problem, for Chinese audiences. We also have a WeChat account (安远AI) where we publish articles on AI risks and safety, including an Alignment Overview Series, an explainer of the Future of Life Institute’s open letter “Pause Giant AI Experiments,” and a database of AI alignment failures.

Submission to the UN Global Digital Compact

In March 2023, Concordia AI submitted a paper on regulating AI risks to the UN Global Digital Compact. The UN Global Digital Compact is an initiative housed under the Secretary-General’s vision, “Our Common Agenda”, and aims at “outlin[ing] shared principles for an open, free and secure digital future for all”. In our paper, we recommended several principles for designing and implementing regulations on AI risks, as well as actions that the UN and other multilateral organizations could take to support the enactment of those principles.

Our Team

Brian Tse

Founder and CEO

Brian is the Founder and CEO of Concordia AI. He is also a Policy Affiliate at the Centre for the Governance of AI. Previously, Brian was Senior Advisor to the Partnership on AI. He co-edited the book Global Perspective on AI Governance published by Tongji University Press. He also served on the program committee of AI safety workshops at AAAI, IJCAI, and ICFEM. Brian has been invited to speak at Stanford, Oxford, Tsinghua, and Peking University on global risks and foresight regarding advanced AI.

Liang Fang

Senior Governance Lead

Liang is a Senior Governance Lead at Concordia AI, where he leads Concordia’s advisory work on AI safety and governance. He was previously a senior technical consultant at Baidu, where he actively promoted the research, communication and implementation of AI ethics and governance. He has participated in the research and formulation of several Chinese government AI and S&T policies.

Kwan Yee Ng

Senior Program Manager

Kwan Yee is a Senior Program Manager at Concordia AI, where she leads projects to promote international cooperation on AI safety and governance. She previously worked with Professor Wang Jisi at the PKU Institute for International and Strategic Studies on numerous research projects; prior to that, she was a research fellow at Oxford University’s Future of Humanity Institute. Kwan Yee received a master’s degree from Peking University as a Yenching Scholar.

Jason Zhou

Senior Research Manager

Jason is a Senior Research Manager at Concordia AI, where he works on promoting international cooperation on AI safety and governance. He previously worked as a Business Advisory Services Manager at the US-China Business Council’s Beijing Office, researching China’s ICT industry and its data security, cybersecurity, and privacy policies. Jason received a master’s degree from Tsinghua University as a Schwarzman Scholar, where he wrote a thesis on China-US relations.

Yawen Duan

Technical Program Manager

Yawen is a Technical Program Manager at Concordia AI, where he works on projects to support technical AI safety communities. He is a Future of Life Institute AI Existential Safety PhD Fellow and an incoming ML PhD student at the University of Cambridge, focusing on LLM safety and alignment. He has prior experience in AI safety and alignment research at UC Berkeley and in David Krueger’s group at the University of Cambridge. His work has been published at ML/CS venues such as CVPR, ECCV, ICML, ACM FAccT, and the NeurIPS ML Safety Workshop. Yawen received an MPhil in ML from Cambridge and a BSc from the University of Hong Kong.

Yuan Cheng

Senior Program Manager

Yuan Cheng is a Senior Program Manager at Concordia AI, where she contributes to AI safety and governance in China through policy consulting, product solutions, and partnerships. Previously, she worked in ByteDance’s Global Public Policy and Corporate Social Responsibility team, leading programs on policy, compliance, and social and environmental impact for international products such as TikTok. She also worked with several international and non-profit organizations in the fields of humanitarian aid and community development. Yuan graduated from Fudan University and Leiden University and holds a Bachelor of Law degree and a master’s degree in International Relations and Diplomacy.

Yunxin Fan

Operations Manager

Yunxin is an Operations Manager at Concordia AI, where she oversees the company’s branding strategy and media engagement. Previously, she worked at Dentsu Aegis as a Senior Account Manager, focusing on international media strategy for leading tech companies and venture capital firms. She also has prior experience working for Caixin, China’s leading finance and economics journal, and as a consultant for the Economist Intelligence Unit.

Muzhe (Yessi) Li

Operations Manager

Muzhe is an Operations Manager at Concordia AI, where she manages the company’s finances, human resources and organizational infrastructure. She was formerly a Sequoia Fellow and worked at Genki Forest as a Strategy Analyst and Product Manager. Prior to that, Muzhe was a Technical Product Manager on Didi Mobility’s International Product Team. 

We are a social enterprise

We generate income through consulting and advisory projects for investment companies and tech companies in mainland China, Hong Kong, and Singapore. As an independent institution, we are not affiliated with or funded by any government or political group.

Get in touch

© 2024 Concordia AI | All Rights Reserved

Concordia AI (安远AI) is a brand of 北京谋远咨询有限公司 (Beijing Mouyuan Consulting Co., Ltd.)