Guiding the governance of AI for a long and flourishing future
Concordia AI is a social enterprise focused on AI safety and governance, with a presence in Beijing and Singapore.
Featured Research
As artificial intelligence systems grow more powerful and integrated into society, their safe development presents a critical governance challenge. This report, led by Concordia AI, the Oxford Martin AI Governance Initiative, and Carnegie Endowment for International Peace, examines whether framing AI safety as a “public good” can help address these challenges, drawing on lessons from climate change, nuclear safety, and global health governance. Our analysis identifies three key coordination challenges:
- Balancing collective responsibility with accountability for stakeholders that hold disproportionate power in AI development
- Managing the tension between sharing AI safety best practices and limiting the spread of potentially dangerous capabilities
- Ensuring safety requirements do not constrain AI’s potential for sustainable development or perpetuate global inequities
Kwan Yee Ng, Head of International AI Governance at Concordia AI, and Brian Tse, Founder and CEO of Concordia AI, along with 14 other scholars from around the world, co-authored this report.
About
Our Approach
Advising on AI safety and governance
We aim to raise awareness of potential risks from AI and promote best practices to mitigate those risks. We participate in consultations on Chinese government policy, consult for leading AI labs, and collaborate on research reports with Chinese academia.
Supporting technical communities
We aim to create a thriving ecosystem that will drive progress towards safe AI. We convene seminars, run fellowships to train aspiring safety practitioners, and publish educational resources on AI safety for Chinese AI researchers in industry and academia.
Promoting international cooperation
We aim to align strategies for the safe development and deployment of AI globally. We facilitate dialogues between Chinese and international experts and advise multilateral organizations to further technical understanding of AI risks and safety solutions, develop policy ideas, and build trust across communities.
Impact Highlights
BAAI AI Safety and Alignment Forum
Concordia AI was the official co-host and moderator of the AI Safety and Alignment Forum at the Beijing Academy of AI Conference in 2023, which featured speakers such as Sam Altman, Geoffrey Hinton, and Andrew Yao. The Forum was the first event at a major Chinese AI conference focused on AI safety and alignment; it was attended by over 500 people in person and viewed over 200,000 times online. See media coverage in, e.g., the Wall Street Journal, WIRED, Forbes, AI Era, and Tencent Technology.
China-Western Exchanges on AI Safety
Global Perspective on AI Governance report
Concordia AI Safety and Alignment Fellowship
Concordia AI is currently running China’s first AI Safety & Alignment Fellowship for ~20 machine learning graduate students from China’s top universities. We aim to inspire participants to contribute to AI safety and alignment research by discussing the potential risks and benefits from superintelligence and introducing them to cutting-edge research in the field. The Fellowship curriculum is adapted from the AGI Safety Fundamentals course designed by OpenAI’s Richard Ngo and features a series of online seminars plus a research project component.
Educational resources on AI safety
Concordia AI has worked with Chinese publishers to translate and promote English-language books on AI safety such as Life 3.0, Human Compatible, and most recently, The Alignment Problem for the Chinese audience. We also have a WeChat account (安远AI) where we have published articles related to AI risks and safety including an Alignment Overview Series, an explainer to the Future of Life Institute’s Open Letter to Pause Giant AI Experiments, and a database of AI alignment failures.
Submission to the UN Global Digital Compact
In March 2023, Concordia AI submitted a paper on regulating AI risks to the UN Global Digital Compact. The UN Global Digital Compact is an initiative housed under the Secretary-General’s vision, “Our Common Agenda”, and aims at “outlin[ing] shared principles for an open, free and secure digital future for all”. In our paper, we recommended several principles for designing and implementing regulations on AI risks, along with actions that the UN and other multilateral organizations could take to support the enactment of those principles.
Our Team
Brian Tse
Founder and CEO
Brian is the Founder and CEO of Concordia AI. He is also a Policy Affiliate at the Centre for the Governance of AI. Previously, Brian was Senior Advisor to the Partnership on AI. He co-edited the book Global Perspective on AI Governance published by Tongji University Press. He also served on the program committee of AI safety workshops at AAAI, IJCAI, and ICFEM. Brian has been invited to speak at Stanford, Oxford, Tsinghua, and Peking University on global risk and foresight on advanced AI.
Liang Fang
Senior Governance Lead
Liang is a Senior Governance Lead at Concordia AI, where he leads Concordia’s advisory work on AI safety and governance. He was previously a senior technical consultant at Baidu, where he actively promoted the research, communication and implementation of AI ethics and governance. He has participated in the research and formulation of several Chinese government AI and S&T policies.
Kwan Yee Ng
Senior Program Manager
Kwan Yee is a Senior Program Manager at Concordia AI, where she leads projects to promote international cooperation on AI safety and governance. She previously worked with Professor Wang Jisi at the PKU Institute for International and Strategic Studies on numerous research projects; prior to that, she was a research fellow at Oxford University’s Future of Humanity Institute. Kwan Yee received a master’s degree from Peking University as a Yenching Scholar.
Jason Zhou
Senior Research Manager
Jason is a Senior Research Manager at Concordia AI, where he works on promoting international cooperation on AI safety and governance. He previously worked as a Business Advisory Services Manager at the US-China Business Council’s Beijing office, researching Chinese ICT industry, data security, cybersecurity, and privacy policies. Jason received a master’s degree from Tsinghua University as a Schwarzman Scholar, where he wrote a thesis on China-US relations.
Yawen Duan
Technical Program Manager
Yawen is a Technical Program Manager at Concordia AI, where he works on projects to support technical AI safety communities. He is a Future of Life Institute AI Existential Safety PhD Fellow and an incoming ML PhD student at the University of Cambridge, focusing on LLM safety and alignment. He has prior experience in AI safety and alignment research at UC Berkeley and in David Krueger’s group at the University of Cambridge. His work has been published at ML/CS venues such as CVPR, ECCV, ICML, ACM FAccT, and the NeurIPS ML Safety Workshop. Yawen received an MPhil in ML from Cambridge and a BSc from the University of Hong Kong.
Yuan Cheng
Senior Program Manager
Yuan Cheng is a Senior Program Manager at Concordia AI, where she contributes to AI safety and governance in China through policy consulting, product solutions, and partnerships. Previously, she worked in ByteDance’s Global Public Policy and Corporate Social Responsibility team, leading programs on policy, compliance, and social and environmental impact for international products such as TikTok. She has also served several international and non-profit organizations in the fields of humanitarian aid and community development. Yuan graduated from Fudan University and Leiden University and holds a Bachelor of Law degree and a master’s degree in International Relations and Diplomacy.
Yunxin Fan
Operations Manager
Yunxin is an Operations Manager at Concordia AI, where she oversees the company’s branding strategy and media engagement. Previously, she worked at Dentsu Aegis as a Senior Account Manager, focusing on international media strategy for leading tech companies and venture capital firms. She also has prior experience working for Caixin, China’s leading finance and economics journal, and as a consultant for the Economist Intelligence Unit.
Muzhe (Yessi) Li
Operations Manager
Muzhe is an Operations Manager at Concordia AI, where she manages the company’s finances, human resources and organizational infrastructure. She was formerly a Sequoia Fellow and worked at Genki Forest as a Strategy Analyst and Product Manager. Prior to that, Muzhe was a Technical Product Manager on Didi Mobility’s International Product Team.
Xinyuan Tian
Operations Manager
Xinyuan is an Operations Manager at Concordia AI, where she manages the company’s finance and human resources. Previously, she worked as a Senior Program Coordinator at Peking University Berggruen Research Center, managing the Center’s programs in frontier science, technology and philosophy, as well as overseeing the Berggruen Fellowship program. Xinyuan holds a master’s degree in Law and Diplomacy from Tufts University.
Jonathan Lee
Project Manager
Jonathan is the AI Governance Project Manager at Concordia AI in Singapore, where he manages research initiatives and projects that foster international collaboration on AI safety and governance. He brings extensive public sector experience from previous roles in foreign affairs and defense in the Singapore government, and has worked with leading AI labs on model fine-tuning. Jonathan holds an MBA from the University of Cambridge and a BSc in International Relations from the London School of Economics.
We are a social enterprise
We generate income through consulting and advisory projects for investment companies and tech companies in mainland China, Hong Kong, and Singapore. As an independent institution, we are not affiliated with or funded by any government or political group.