Background
As a core member of Concordia AI, you will engage in cutting-edge AI safety and governance research and advisory work. You will work with stakeholders across China – policymakers, think tanks, universities, and AI companies – to deliver high-quality research and translate it into policy recommendations and industry practice, strengthening our research capabilities and industry impact. Our ongoing work includes:
- Producing In-Depth Reports and Action Recommendations
Concordia AI continuously tracks and conducts comprehensive research on frontier topics in AI safety and governance. In collaboration with leading domestic research institutions and expert scholars, we have authored or co-authored a number of landmark reports, including “Responsible Open Source of Foundation Models”, “Safety and Global Governance of Generative AI Report”, and “Frontier AI Risk Management Framework”, and have maintained a consistent research output.
- Participating in Standards Development and Policy Deliberation
Concordia AI exercises policy influence through active participation in the development of national, industry, and local standards for AI safety and governance across multiple standardization committees, including TC260, SAC/TC28, and MIIT/TC1. We have published comprehensive reports such as “National AI Safety Institutes and Their International Networks: Why Establish, How to Operate, and Future Challenges” and continue to play the vital role that non-governmental organizations hold in top-level design.
- Facilitating and Promoting Industry Best Practices
Concordia AI transforms research and technical conclusions into actionable industry practices through enterprise consultation, industry alliances, best practice sharing, and talent community building. Our consultation services encompass compliance consulting for overseas expansion, frontier AI risk management, automated red-teaming and reinforcement, AI safety frontier insights, and corporate case sharing.
Depending on the candidate’s level of experience, we also welcome applications for the AI Safety and Governance Research Manager role. A Research Manager takes a greater lead in research projects and report drafting, and carries broader responsibility for research agenda setting, content quality control, and external communications.
Responsibilities
- Thematic Research and Recommendations
Conduct independent research on domestic AI safety governance topics, analyze major domestic and international technical and policy developments, and draft research reports, policy recommendations, and best practices tailored to stakeholder needs, with the aim of promoting the safe development and deployment of frontier AI.
- Standards Development and Tracking
Track, participate in, and support the research, development, and revision of relevant international, national, industry, and local standards, ensuring their alignment with frontier AI safety and governance requirements while providing professional recommendations to stakeholders.
- High-Quality Content Production
Lead the writing and editing of original, high-quality research, refine the presentation of policy recommendations, organize internal knowledge and specialized topics, and strengthen the team’s overall content output capabilities.
- Team Collaboration and Support
Support team initiatives such as enterprise client consulting, AI safety research and evaluation, and major domestic and international forums or closed-door symposiums.
Qualifications
Required
- Alignment with Concordia AI’s mission; a strong sense of ownership and the problem-solving skills to work both independently and collaboratively.
- Master’s degree or above in public administration, law, technology ethics, cybersecurity, artificial intelligence, or related fields.
- Familiarity with international and domestic standard-setting processes; ability to translate complex technical and governance requirements into actionable standards suggestions or policy proposals.
- Strong Chinese reading, writing, and communication skills.
Preferred
- Experience in AI safety and governance research, or related technical research
- Experience in AI-related standardization
- Ability to communicate and collaborate effectively in both Chinese and English with Chinese and international AI safety and governance stakeholders
Benefits
- Competitive salary in the range of US$76,000–108,000 per year, plus a comprehensive benefits package.
- Flexible working hours; up to 30% of working time may be remote each year
- 22 days of paid annual leave and 10 days of paid sick leave per year
- Comprehensive social insurance package (“五险一金”), plus supplemental commercial medical insurance
- A modern and comfortable office space with an annual personal development fund and an allowance for office equipment and supplies
For detailed information, please see: https://mp.weixin.qq.com/s/DezWswOxFAZYcDQpB1Y-mA