Frontier AI Governance Researcher

Location

Based in either our Beijing or Singapore office for at least 75% of the year, with up to 25% remote work flexibility.

Application Deadline

End of day, China time on Oct 31, 2025.
*Rolling applications may be considered after the application deadline if the position has not been filled.

Application Process

Our hiring process has 4 phases:
  • Written application form (20-30 minutes).
  • Online written test (1 hour).
  • 2 online interviews (2 hours).
  • 2-day in-person work trial (including reference checks).

Background

Concordia AI partners with leading international stakeholders from academia, industry, and policy to develop and promote robust international governance mechanisms for AI. Examples of our work include Examining AI Safety as a Global Public Good: Implications, Challenges, and Research Priorities and Concordia AI and Shanghai AI Lab’s Frontier AI Risk Management Framework. We also develop policy recommendations for major international initiatives, including the UN AI Advisory Body Interim Report and Global AI Safety Summits. The Frontier AI Governance Researcher will collaborate closely with our China AI Governance Researcher on these policy outputs.

In this role, you will focus on understanding governance solutions for threats posed by general-purpose AI systems and developing policy recommendations to strengthen international coordination. This position offers the opportunity to advance novel governance solutions, become a recognized expert on cutting-edge AI risks, and establish yourself as a thought leader in one of the world’s most consequential policy areas.

Responsibilities

Lead original English-language research and analysis on governance solutions for cutting-edge AI developments, including topics such as international red lines/risk thresholds, information-sharing mechanisms for dangerous AI incidents, and governance of open-weight models.
Contribute to our organizational thought leadership, convening, and policy development by synthesizing the state-of-the-art literature on frontier AI risk management.

Develop concrete policy recommendations for international policy initiatives and major global summits.

Qualifications

Required

  • Undergraduate degree in a relevant field (such as AI/technology governance, public policy, computer science, or international relations) plus at least 1-2 years of relevant work experience; or a graduate degree in a relevant field.
  • Experience writing on technical AI safety topics or international governance issues (such as international organizations, standards, etc.).
  • Basic Mandarin reading and listening skills for internal team communication.
  • Experience presenting policy issues in English, both in writing and in oral presentations.

Preferred

  • Published papers or reports on frontier AI safety in leading think tanks, publications, or AI journals.
  • Established networks with think tanks and international organizations relevant to AI governance.