<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Concordia AI - Concordia AI</title>
	<atom:link href="https://concordia-ai.com/author/concordia-ai/feed/" rel="self" type="application/rss+xml" />
	<link>https://concordia-ai.com</link>
	<description>Guiding the governance of AI for a long and flourishing future</description>
	<lastBuildDate>Tue, 17 Mar 2026 02:36:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.5</generator>

<image>
	<url>https://concordia-ai.com/wp-content/uploads/2025/06/cropped-Favicon-32x32.png</url>
	<title>Concordia AI - Concordia AI</title>
	<link>https://concordia-ai.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>2025 Q4 Update from our Frontier AI Risk Monitoring Platform</title>
		<link>https://concordia-ai.com/2025-q4-update-from-our-frontier-ai-risk-monitoring-platform/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=2025-q4-update-from-our-frontier-ai-risk-monitoring-platform</link>
		
		<dc:creator><![CDATA[Concordia AI]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 02:33:17 +0000</pubDate>
				<category><![CDATA[Announcement]]></category>
		<guid isPermaLink="false">https://concordia-ai.com/?p=1397</guid>

					<description><![CDATA[<p>We have released the 2025 Q4 update of our Frontier AI Risk Monitoring Report (2025Q4)! This is the second report since we launched the Frontier AI Risk Monitoring Platform last year. It tracks models from 16 leading developers worldwide for risks in four domains: cyber offense, biological risks, chemical risks, and loss-of-control. This report tracks frontier models released in the&#8230;</p>
<p>The post <a href="https://concordia-ai.com/2025-q4-update-from-our-frontier-ai-risk-monitoring-platform/">2025 Q4 Update from our Frontier AI Risk Monitoring Platform</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>We have released the 2025 Q4 update of our <a href="https://airiskmonitor.net/doc/en/report/2025-Q4" rel="">Frontier AI Risk Monitoring Report (2025Q4)</a>! This is the second report since we <a href="https://aisafetychina.substack.com/p/10-key-insights-from-concordia-ais" rel="">launched</a> the <a href="https://airiskmonitor.net/" rel="">Frontier AI Risk Monitoring Platform</a> last year. It tracks models from 16 leading developers worldwide for risks in four domains: cyber offense, biological risks, chemical risks, and loss-of-control.</p>
<p><img decoding="async" class="alignnone size-full wp-image-1398" src="https://concordia-ai.com/wp-content/uploads/2026/03/platform-report-en.png" alt="" width="1270" height="719" srcset="https://concordia-ai.com/wp-content/uploads/2026/03/platform-report-en.png 1270w, https://concordia-ai.com/wp-content/uploads/2026/03/platform-report-en-300x170.png 300w, https://concordia-ai.com/wp-content/uploads/2026/03/platform-report-en-1024x580.png 1024w, https://concordia-ai.com/wp-content/uploads/2026/03/platform-report-en-150x85.png 150w, https://concordia-ai.com/wp-content/uploads/2026/03/platform-report-en-768x435.png 768w" sizes="(max-width: 1270px) 100vw, 1270px" /></p>
<p>This report tracks frontier models released in the fourth quarter of 2025 and synthesizes trends of the full year, offering a comprehensive view of the evolving AI risk landscape.</p>
<p>While Q3 2025 saw sharp rises in Risk Indices, Q4 presents a more nuanced picture: overall risk levels have stabilized, and frontier models show significant safety gains.</p>
<p><em>Note: The Risk Index is a score that reflects the overall risk of a model by combining its Capability Score and Safety Score. A higher Capability Score and a lower Safety Score result in a higher Risk Index. Details about the methodology and limitations are available <a href="https://airiskmonitor.net/doc/en/about#evaluation-methodology" rel="">here</a>.</em></p>
<p>Here are 10 key insights from our latest monitoring data:</p>
<h4 class="header-anchor-post">1. Overall risk indices have stabilized</h4>
<p>In contrast to the previous period, when Risk Indices hit record highs across all domains, Risk Indices for models released in Q4 2025 did not set new records. This suggests a momentary stabilization in the aggregate risk level of frontier models.</p>
<h4 class="header-anchor-post">2. Risk trends diverge significantly across model families</h4>
<p>While the overall trend is stable, individual model families followed distinct trajectories in Q4:</p>
<ul>
<li>Stable Low Risk: The GPT and Claude families maintained consistently low Risk Indices.</li>
<li>Stable High Risk: The DeepSeek family remained stable but at relatively high risk levels.</li>
<li>Risk Reduction: The Doubao, Hunyuan, and MiniMax families saw significant decreases in Risk Indices.</li>
<li>Risk Increase: The Gemini and Kimi families saw increases in specific domains (e.g., Gemini in biological and loss-of-control risks).</li>
</ul>
<h4 class="header-anchor-post">3. Significant improvement in Safety Scores for frontier models</h4>
<p>Safety Scores for models released in Q4 2025 rose significantly compared to the previous quarter, signaling a marked improvement in the safety of new releases. The Doubao, Hunyuan, and MiniMax families demonstrated the most notable gains.</p>
<h4 class="header-anchor-post">4. Open-weight models lag behind proprietary models in cyber and bio capabilities</h4>
<p>Consistent with the previous quarter, open-weight models rival proprietary ones in chemical and loss-of-control capabilities but lag notably in cyber offense and biological capabilities. The gap in the biological domain is widening, approaching a one-year lag.</p>
<h4 class="header-anchor-post">5. Cyberattack capabilities have reached new heights</h4>
<p>Despite stabilizing risk indices, raw capabilities continue to grow. GPT-5.2 (high) achieved a breakthrough score of 94.7 on the <em>CyberSecEval2-VulnerabilityExploit</em> benchmark, indicating exceptional proficiency in identifying and exploiting software vulnerabilities. Claude Opus 4.5 Reasoning topped the <em>WMDP-Cyber</em> benchmark with a score of 90.3.</p>
<h4 class="header-anchor-post">6. Biological capabilities now surpass human experts in key tasks</h4>
<p>Q4 models have crossed critical thresholds in biology. Gemini 3 Pro Preview has surpassed human expert levels in sequence understanding, cloning experiments, and wet lab troubleshooting. This marks a significant milestone in AI’s utility—and potential risk—in the biological domain.</p>
<h4 class="header-anchor-post">7. &#8230;But biological safeguards lag behind capabilities</h4>
<p>The gap between capability and safety is most acute in the biological domain. Despite its superhuman capabilities, Gemini 3 Pro Preview exhibited a refusal rate of only 57.2% for harmful biological queries on the <em>SciKnowEval</em> benchmark, highlighting a concerning safety lag.</p>
<h4 class="header-anchor-post">8. Chemical safety refusal rates have increased widely</h4>
<p>While capability growth in the chemical domain has plateaued, safety has improved. 70% of models released in Q4 exceeded an 80% refusal rate for harmful chemical queries (measured by <em>SOSBench-Chem</em>), representing a strong improvement over previous quarters.</p>
<h4 class="header-anchor-post">9. Jailbreak safeguards have strengthened</h4>
<p>Defense against adversarial attacks has improved. Models released in Q4 showed significantly stronger resistance to jailbreaking on the <em>StrongReject </em>benchmark. The Claude and GPT families lead with high robustness, while the MiniMax family showed the most notable quarter-over-quarter improvement.</p>
<h4 class="header-anchor-post">10. Loss-of-control risks: High awareness, polarized honesty</h4>
<ul>
<li>Situational Awareness (e.g., awareness of whether they are in the training or deployment stage): High situational awareness is a necessary condition for loss of control; the higher the score, the greater the risk. Most Q4 models scored near or above 80 out of 100 points. By comparison, only 2 models scored above 80 points in the previous quarter, with the majority falling below 80.</li>
<li>Honesty: Performance is highly uneven. While Claude Opus 4.5 Reasoning achieved a high honesty score of 96.4, other models like Gemini 3 Pro Preview scored as low as 44.7.</li>
</ul>
<p><em>Note: Our current methodology for the loss-of-control domain is not yet perfect. We plan to improve it in the next version.</em></p>
<h2 class="header-anchor-post"><strong>Explore the Data</strong></h2>
<p>These insights only scratch the surface. We invite you to explore the full interactive data, methodology, and model breakdowns on the <a href="https://airiskmonitor.net/" rel="">Frontier AI Risk Monitoring Platform</a>.</p>
<p>For a detailed analysis of these trends, read the <a href="https://airiskmonitor.net/doc/en/report/2025-Q4" rel="">full report</a>.</p>
<div class="subscription-widget-wrap">
<div class="subscription-widget show-subscribe">
<div class="preamble"></div>
</div>
</div><p>The post <a href="https://concordia-ai.com/2025-q4-update-from-our-frontier-ai-risk-monitoring-platform/">2025 Q4 Update from our Frontier AI Risk Monitoring Platform</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Concordia AI 2025 Impact Highlights</title>
		<link>https://concordia-ai.com/concordia-ai-2025-impact-highlights/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=concordia-ai-2025-impact-highlights</link>
		
		<dc:creator><![CDATA[Concordia AI]]></dc:creator>
		<pubDate>Fri, 06 Mar 2026 06:35:41 +0000</pubDate>
				<category><![CDATA[Announcement]]></category>
		<guid isPermaLink="false">https://concordia-ai.com/?p=1387</guid>

					<description><![CDATA[<p>Throughout 2025, frontier AI capabilities advanced rapidly. But the same capabilities that make these systems so useful also introduce new societal risks. Real-world evidence for several of these risks continues to grow—across malicious use, malfunctions, and systemic threats. Against this backdrop, Concordia AI’s mission remains as critical as ever: ensuring that AI is developed and&#8230;</p>
<p>The post <a href="https://concordia-ai.com/concordia-ai-2025-impact-highlights/">Concordia AI 2025 Impact Highlights</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Throughout 2025, frontier AI capabilities advanced rapidly. But the same capabilities that make these systems so useful also introduce new societal risks. Real-world evidence for several of these risks continues to grow—across malicious use, malfunctions, and systemic threats.</p>
<p>Against this backdrop, Concordia AI’s mission remains as critical as ever: ensuring that AI is developed and deployed safely and in alignment with global interests. We advance this mission through research, advisory work with leading AI companies and policymakers, and promotion of international dialogue.</p>
<p>Below are some of our key accomplishments in 2025. We’ve organized this list according to key axes of our work: international convenings, international research and public engagement, and contributing to China’s domestic AI safety and governance landscape. We end with organizational updates. For previous highlights, see our <a href="https://aisafetychina.substack.com/p/concordia-ai-2023-annual-review" rel="">2023</a>, <a href="https://aisafetychina.substack.com/p/concordia-ai-2024-impact-highlights" rel="">2024</a> and <a href="https://aisafetychina.substack.com/p/concordia-ai-2025-mid-year-impact" rel="">mid-2025</a> reports.</p>
<h2 class="header-anchor-post">International convenings</h2>
<ul>
<li><strong>Convening international AI safety dialogues in China, Singapore, and globally</strong>
<ul>
<li>Hosted the <a href="https://aisafetychina.substack.com/p/concordia-ai-holds-the-ai-safety" rel="">AI Safety and Governance Forum at the World AI Conference (WAIC)</a>. This was Concordia AI’s flagship convening of 2025, bringing together around 30 distinguished experts from around the world, including Turing Award winner Yoshua Bengio; United Nations Under-Secretary-General Amandeep Singh Gill; Shanghai AI Lab Director ZHOU Bowen (周伯文); Special Envoy of the President of France for AI Anne Bouverot; Distinguished Professor of computer science at UC Berkeley Stuart Russell; and Peng Cheng Laboratory Director GAO Wen (高文). We had 200+ in-person attendees and 14,000+ livestream views. The Forum was covered by multiple media outlets, including <a href="https://www.bloomberg.com/news/articles/2025-07-30/china-prepares-to-unseat-us-in-fight-for-4-8-trillion-ai-market" rel="">Bloomberg</a>, <a href="https://www.wired.com/story/china-artificial-intelligence-policy-laws-race/" rel="">Wired</a>, <a href="https://science.caixin.com/m/2025-07-30/102346902.html" rel="">Caixin</a>, <a href="https://mp.weixin.qq.com/s/EwDrlAveGkMm7NsnqCZi6Q" rel="">IT Times</a>, and <a href="https://techreviewafrica.com/news/2580/un-digital-envoy-concludes-china-visit-advocates-for-inclusive-ai-governance" rel="">Tech Review Africa</a>. We also hosted or co-hosted multiple side events and expert workshops and served as official AI Governance Advisor for WAIC 2025.</li>
<li>Co-hosted two international workshops with the Carnegie Endowment for International Peace, the Oxford Martin School AI Governance Initiative, the Oxford China Policy Lab, Tsinghua University Center for International Security and Strategy (CISS), and Tsinghua University Institute for AI International Governance (I-AIIG). The first workshop focused on “AI Safety as a Collective Challenge” and was held at the French AI Action Summit in January. The second focused on “Early Warning and Crisis Coordination for Advanced AI” and was held at the World AI Conference in Shanghai in July.</li>
<li>Co-hosted the AI Safety Forum at the <a href="https://2025.baai.ac.cn/schedule" rel="">Beijing Academy of AI Conference 2025</a> where technical experts from institutions including MIT, Fudan University, Singapore Management University, and Tsinghua University worked to build consensus on AI red lines.</li>
<li>Organised an <a href="https://www.linkedin.com/feed/update/urn:li:activity:7340690669990031360/" rel="">AI Risk Management Workshop</a> on the sidelines of Asia Tech x Singapore (May 2025), with the support of the Infocomm Media Development Authority of Singapore (IMDA), bringing together 20+ AI safety experts spanning policy, industry, AI assurance, and academia, based across Singapore, China, the US, the UK, and the EU.</li>
<li>Co-hosted a “<a href="https://www.linkedin.com/posts/concordia-ai_frontier-ai-is-reshaping-cyber-riskand-activity-7393896490525691904-tLkk" rel="">Frontier AI in Cybersecurity” workshop</a> with Nanyang Technological University CyberSG R&amp;D Programme Office and UC Berkeley RDI, which brought together 25 leaders across government, law enforcement agencies, and leading AI labs; and organised the <a href="https://luma.com/wyereks8" rel="">AI Governance in Singapore panel</a> at Lorong AI, on the sidelines of the Singapore International Cybersecurity Week 2025.</li>
<li>Co-hosted events at the International Conference on Learning Representations (ICLR) 2025 Singapore: a “Frontier Governance Exchange” with Singapore AI Safety Hub, Lorong AI, and Safe AI Forum; an AI safety social attended by 130+ participants; and a “Misalignment and Control” workshop with FAR.AI and Singapore AI Safety Hub.</li>
</ul>
</li>
</ul>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1162" src="https://concordia-ai.com/wp-content/uploads/2025/11/Screenshot-2025-11-20-at-10.41.09-AM.png" alt="" width="843" height="532" srcset="https://concordia-ai.com/wp-content/uploads/2025/11/Screenshot-2025-11-20-at-10.41.09-AM.png 843w, https://concordia-ai.com/wp-content/uploads/2025/11/Screenshot-2025-11-20-at-10.41.09-AM-300x189.png 300w, https://concordia-ai.com/wp-content/uploads/2025/11/Screenshot-2025-11-20-at-10.41.09-AM-150x95.png 150w, https://concordia-ai.com/wp-content/uploads/2025/11/Screenshot-2025-11-20-at-10.41.09-AM-768x485.png 768w" sizes="auto, (max-width: 843px) 100vw, 843px" /></p>
<div class="captioned-image-container" style="text-align: center;">
<figure><figcaption class="image-caption"><em><span style="color: #999999;">Group photo after the WAIC AI Safety and Governance Forum morning session.</span></em></figcaption></figure>
</div>
<ul>
<li><strong>Contributing to and participating in global and multilateral AI governance efforts</strong>
<ul>
<li>Concordia AI CEO Brian TSE (谢旻希) participated in the <a href="https://www.aistandardssummit.org/event/354f4a77-ee25-47e3-8e84-291a55519c0c/programme" rel="">International AI Standards Summit</a> (Seoul, Dec 2–3) and spoke on a panel, as part of the expert delegation recommended by the National Standardization Administration of China.</li>
<li>Brian Tse was invited as a Chinese civil society representative to the French AI Action Summit in the Grand Palais, and was also invited to a closed-door seminar hosted by the China AI Safety &amp; Development Association (CnAISDA).</li>
<li>Provided <a href="https://www.un.org/global-digital-compact/en/ai-panel-inputs" rel="">written inputs</a> to UN consultations regarding the Independent International Scientific Panel on AI and Global Dialogue on AI.</li>
<li><a href="https://www.youtube.com/watch?v=4XMip8phUn4" rel="">Spoke</a> on the panel “From Principles to Practice—Governing Advanced AI in Action” at the <a href="https://aiforgood.itu.int/summit25/programme/" rel="">AI for Good Summit 2025</a>.</li>
<li>Participated in the International Dialogues on AI Safety (Shanghai) and signed the <a href="https://idais.ai/dialogue/idais-shanghai/" rel="">Shanghai Consensus</a> on “Ensuring Alignment and Human Control of Advanced AI Systems to Safeguard Human Flourishing”.</li>
</ul>
</li>
<li><strong>Global AIxBiosecurity governance: </strong>We contributed to a number of critical global discussions at the intersection of AI and biosecurity:
<ul>
<li>Brian Tse signed the <a href="https://www.nti.org/analysis/articles/statement-on-biosecurity-risks-at-the-convergence-of-ai-and-the-life-sciences/" rel="">Statement on Biosecurity Risks at the Convergence of AI and the Life Sciences</a> along with figures such as Andrew Yao, Yoshua Bengio, and George Church, and presented the Statement during <a href="https://substack.com/redirect/3d6aff06-9bd9-4352-a70f-845f57448efe?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" rel="">The Sixth Session of the Working Group on the Strengthening of the Biological Weapons Convention</a>, as part of the <a href="https://www.nti.org/about/programs-projects/project/aixbio-global-forum/" rel="">AIxBio Global Forum</a>.</li>
<li>Participated in an <a href="https://www.nti.org/news/nti-at-the-munich-security-conference-reducing-nuclear-and-biological-risks-together/" rel="">AIxBio tabletop exercise at the Munich Security Conference</a> hosted by the Nuclear Threat Initiative, which led to the publication of the report “<a href="https://www.nti.org/events/report-launch-safeguarding-aixbio-capabilities-to-prevent-global-catastrophe/" rel="">Safeguarding Against Global Catastrophe: Risks, Opportunities, and Governance Options at the Intersection of Artificial Intelligence and Biology.</a>”</li>
<li>Presented at a WHO dialogue on AIxBio implications for the <a href="https://www.who.int/groups/technical-advisory-group-on-the-responsible-use-of-the-life-sciences-and-dual-use-research-(tag-ruls-dur)" rel="">Technical Advisory Group on the Responsible Use of the Life Sciences and Dual-Use Research</a>.</li>
<li>Co-developed and endorsed the “<a href="https://unodaweb-meetings.unoda.org/public/2025-12/AIxBio%20Recommendations%20INHR.pdf" rel="">Recommendations to Governments on Mitigating AIxBio Risks</a>” as part of the INHR/CNAS trilateral dialogue.</li>
<li>Participated in roundtables on CBRN (chemical, biological, radiological, and nuclear) risks and on responsible innovation in AI for peace and security, hosted by the Stockholm International Peace Research Institute and the United Nations Office for Disarmament Affairs (UNODA).</li>
<li>Participated in a workshop “<a href="https://ibbis.bio/international-meeting-advances-standards-for-dna-synthesis-screening/" rel="">International Standards for DNA Synthesis Screening – Towards Common Global Standards for Biosecurity</a>” organised by the International Biosecurity and Biosafety Initiative for Science (IBBIS). The meeting marked the launch of the IBBIS <a href="https://ibbis.bio/our-work/international-screening-standards/" rel="">International Standards Initiative</a>.</li>
<li>Spoke on a panel “<a href="https://www.youtube.com/watch?v=gbj5k1UiejY" rel="">AI-Accelerated Biological Risk: Delving into Asia’s Challenges and Emerging Solutions</a>,” organized by AI Safety Asia (AISA).<img loading="lazy" decoding="async" class="alignnone size-full wp-image-1389" src="https://concordia-ai.com/wp-content/uploads/2026/03/image.jpg" alt="" width="1280" height="765" srcset="https://concordia-ai.com/wp-content/uploads/2026/03/image.jpg 1280w, https://concordia-ai.com/wp-content/uploads/2026/03/image-300x179.jpg 300w, https://concordia-ai.com/wp-content/uploads/2026/03/image-1024x612.jpg 1024w, https://concordia-ai.com/wp-content/uploads/2026/03/image-150x90.jpg 150w, https://concordia-ai.com/wp-content/uploads/2026/03/image-768x459.jpg 768w" sizes="auto, (max-width: 1280px) 100vw, 1280px" />
<div class="captioned-image-container">
<figure><figcaption class="image-caption"><span style="color: #999999;"><em>Concordia AI CEO Brian Tse speaking at the United Nations side event during the Sixth Expert Meeting of the Working Group on strengthening implementation of the Biological Weapons Convention (BWC) in Geneva. Source: <a style="color: #999999;" href="https://mp.weixin.qq.com/s/BZ8a4lIQ-b-__-LYiAIfiw" rel="">Concordia AI</a>.</em></span></figcaption></figure>
</div>
</li>
</ul>
</li>
</ul>
<h2 class="header-anchor-post">International research and public engagement</h2>
<ul>
<li><strong>Analysis of China’s AI safety and governance landscape</strong>
<ul>
<li>Published the <em><a href="https://concordia-ai.com/research/state-of-ai-safety-in-china-2025/" rel="">State of AI Safety in China 2025</a></em><a href="https://concordia-ai.com/research/state-of-ai-safety-in-china-2025/" rel=""> report</a>, which was cited by <em><a href="https://www.wired.com/story/china-artificial-intelligence-policy-laws-race" rel="">Wired</a></em>, <em><a href="https://www.bloomberg.com/news/articles/2025-07-30/china-prepares-to-unseat-us-in-fight-for-4-8-trillion-ai-market" rel="">Bloomberg</a></em>, and <em><a href="https://paper.people.com.cn/rmrb/pc/content/202507/31/content_30092070.html" rel="">People’s Daily</a></em>; discussed the findings in a <a href="https://www.youtube.com/watch?v=os2t6vczu00" rel="">webinar</a> with distinguished experts; and gave briefings on the report to senior leadership at over ten global organisations.</li>
<li>Our analysis was featured in multiple <em>Nature News</em> stories, including on China’s proposal for a <a href="https://www.nature.com/articles/d41586-025-03902-y" rel="">World Artificial Intelligence Cooperation Organization (WAICO)</a>, on <a href="https://www.nature.com/articles/d41586-025-03845-4" rel="">DeepSeek’s CEO LIANG Wenfeng (梁文锋)</a>, and on <a href="https://www.nature.com/articles/d41586-025-03972-y" rel="">China’s domestic AI governance</a>.</li>
<li>Published 20 “<a href="https://aisafetychina.substack.com/" rel="">AI Safety in China</a>” newsletters, growing our subscriber base by 73% over the course of 2025.</li>
<li>Brian Tse appeared on Nathan Labenz’ <a href="https://www.cognitiverevolution.ai/chinese-ai-they-re-just-like-us-with-beijing-based-concordia-ai-ceo-brian-tse/" rel="">The Cognitive Revolution podcast</a> to discuss China’s approach to AI development, safety, and governance, and on <em><a href="https://news.cgtn.com/news/2025-03-10/Watch-Youth-driven-growth-in-the-private-economy-1BDcD9ziV4A/p.html" rel="">CGTN</a></em> to discuss China’s approaches to AI innovation and global governance.</li>
<li>Our International AI Governance Senior Research Manager Jason ZHOU (周杰晟) and our International AI Governance Part-time Researcher Gabriel Wagner analysed the AI safety implications of China’s April Politburo study session in a piece for the <a href="https://digichina.stanford.edu/work/forum-xis-message-to-the-politburo-on-ai/" rel="">Stanford DigiChina Forum</a>.</li>
<li>Brian Tse authored an <a href="https://time.com/7308857/china-isnt-ignoring-ai-regulation-the-u-s-shouldnt-either/" rel="">op-ed</a> in <em>Time Magazine</em>, suggesting practical steps for AI safety dialogue between China and the US.</li>
<li>Kwan Yee NG (吴君仪) and Gabriel Wagner spoke on China’s AI safety approach at <a href="https://www.youtube.com/watch?v=SjfnaOEdV80" rel="">AI Safety Asia’s Beijing Roundtable</a>.</li>
</ul>
</li>
</ul>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-710 size-full" src="https://concordia-ai.com/wp-content/uploads/2025/07/CN2-scaled-e1772778001221.jpg" alt="" width="1810" height="1335" srcset="https://concordia-ai.com/wp-content/uploads/2025/07/CN2-scaled-e1772778001221.jpg 1810w, https://concordia-ai.com/wp-content/uploads/2025/07/CN2-scaled-e1772778001221-300x221.jpg 300w, https://concordia-ai.com/wp-content/uploads/2025/07/CN2-scaled-e1772778001221-1024x755.jpg 1024w, https://concordia-ai.com/wp-content/uploads/2025/07/CN2-scaled-e1772778001221-150x111.jpg 150w, https://concordia-ai.com/wp-content/uploads/2025/07/CN2-scaled-e1772778001221-768x566.jpg 768w, https://concordia-ai.com/wp-content/uploads/2025/07/CN2-scaled-e1772778001221-1536x1133.jpg 1536w" sizes="auto, (max-width: 1810px) 100vw, 1810px" /></p>
<ul>
<li><strong>Contributing to international AI safety and governance research</strong>
<ul>
<li>Kwan Yee Ng contributed as a writer to the first <em><a href="https://internationalaisafetyreport.org/" rel="">International AI Safety Report</a></em> in 2025 and <a href="https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026" rel="">the second edition in 2026</a>. The Report provides an up-to-date, internationally shared, and science-based understanding of AI capabilities and risks. It was overseen by an international Expert Advisory Panel nominated by over 30 countries and intergovernmental organisations.</li>
<li>Contributed to the <a href="https://aisafetypriorities.org/" rel="">Singapore Consensus on Global AI Safety Research Priorities</a>, alongside 100 researchers from 11 countries.</li>
<li>Co-published the report <em><a href="https://concordia-ai.com/research/examining-ai-safety-as-a-global-public-good/" rel="">Examining AI Safety as a Global Public Good</a></em> alongside the Carnegie Endowment for International Peace and the Oxford Martin School AI Governance Initiative.</li>
<li>Team members contributed to major papers on autonomous and agentic AI risks: Brian Tse and our AI Safety Research Manager DUAN Yawen (段雅文) contributed to “<a href="https://arxiv.org/abs/2511.22619" rel="">AI Deception: Risks, Dynamics, and Controls</a>” (Peking University), and Duan Yawen worked on the World Economic Forum white paper “<a href="https://www.weforum.org/publications/ai-agents-in-action-foundations-for-evaluation-and-governance/" rel="">AI Agents in Action: Foundations for Evaluation and Governance</a>”, the <a href="https://aiagentindex.mit.edu/" rel="">2025 AI Agent Index led by MIT</a>, and the research paper “<a href="https://arxiv.org/abs/2504.15416" rel="">Bare Minimum Mitigations for Autonomous AI Development</a>.”</li>
</ul>
</li>
</ul>
<div class="captioned-image-container">
<figure>
<div><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1265" src="https://concordia-ai.com/wp-content/uploads/2025/11/1.png" alt="" width="941" height="525" srcset="https://concordia-ai.com/wp-content/uploads/2025/11/1.png 941w, https://concordia-ai.com/wp-content/uploads/2025/11/1-300x167.png 300w, https://concordia-ai.com/wp-content/uploads/2025/11/1-150x84.png 150w, https://concordia-ai.com/wp-content/uploads/2025/11/1-768x428.png 768w" sizes="auto, (max-width: 941px) 100vw, 941px" /></div>
<div>
<div class="captioned-image-container">
<figure><figcaption class="image-caption"><span style="color: #999999;"><em>Participants of the 2025 Singapore Conference on AI: International Scientific Exchange on AI Safety. Source: <a style="color: #999999;" href="https://aisafetypriorities.org/" rel="">The Singapore Consensus on Global AI Safety Research Priorities</a>.</em></span></figcaption></figure>
</div>
<ul>
<li><strong>Singapore-related research</strong>
<ul>
<li>Published the <em><a href="https://concordia-ai.com/research/state-of-ai-safety-in-singapore/" rel="">State of AI Safety in Singapore</a></em><a href="https://concordia-ai.com/research/state-of-ai-safety-in-singapore/" rel=""> report</a>, the first comprehensive analysis of Singapore’s AI safety ecosystem, led by our International AI Governance Project Manager Jonathan Lee. He also presented the report at an <a href="https://luma.com/wyereks8" rel="">AI Governance in Singapore panel</a> organised by Concordia AI in Singapore, in a <a href="https://luma.com/zdgck5eh?tk=KY00WB" rel="">talk</a> organised by the Singapore AI Safety Hub, and at EAGxSingapore.</li>
</ul>
</li>
</ul>
<div class="captioned-image-container">
<figure>
<div><img loading="lazy" decoding="async" class="alignnone wp-image-707 size-full" src="https://concordia-ai.com/wp-content/uploads/2025/07/SG5-scaled-e1772777919416.jpg" alt="" width="1810" height="1810" srcset="https://concordia-ai.com/wp-content/uploads/2025/07/SG5-scaled-e1772777919416.jpg 1810w, https://concordia-ai.com/wp-content/uploads/2025/07/SG5-scaled-e1772777919416-300x300.jpg 300w, https://concordia-ai.com/wp-content/uploads/2025/07/SG5-scaled-e1772777919416-1024x1024.jpg 1024w, https://concordia-ai.com/wp-content/uploads/2025/07/SG5-scaled-e1772777919416-150x150.jpg 150w, https://concordia-ai.com/wp-content/uploads/2025/07/SG5-scaled-e1772777919416-768x768.jpg 768w, https://concordia-ai.com/wp-content/uploads/2025/07/SG5-scaled-e1772777919416-1536x1536.jpg 1536w" sizes="auto, (max-width: 1810px) 100vw, 1810px" /></div>
<div>
<h2 class="header-anchor-post">Contributing to China’s domestic AI safety and governance landscape</h2>
<ul>
<li><strong>Frontier AI safety risk management and best practices</strong>:
<ul>
<li>Co-published the <em><a href="https://aisafetychina.substack.com/p/shanghai-ai-lab-and-concordia-ai" rel="">Frontier AI Risk Management Framework</a></em><a href="https://aisafetychina.substack.com/p/shanghai-ai-lab-and-concordia-ai" rel=""> v1.0</a> with Shanghai AI Lab. This is China’s first comprehensive framework for managing severe risks from general-purpose AI models.
<ul>
<li>The framework sets out protocols to support general-purpose AI developers, with guidelines for proactively identifying, assessing, mitigating, and governing severe AI risks that threaten public safety and national security.</li>
<li>The framework outlines a set of unacceptable hazards (red lines) and early warning indicators for escalating safety and security measures (yellow lines) for areas including: cyber offense, biological threats, large-scale persuasion and harmful manipulation, and loss of control risks.</li>
<li>The framework was cited in various media outlets, including <em><a href="https://science.caixin.com/m/2025-07-30/102346902.html" rel="">Caixin</a></em>, <em><a href="https://mp.weixin.qq.com/s/EwDrlAveGkMm7NsnqCZi6Q" rel="">IT Times</a>,</em> <em><a href="https://www.xinhuanet.com/liangzi/20251112/3687c6b01ecf42ddbd0fd7d9600cc787/c.html" rel="">Xinhua</a></em>, <em><a href="https://time.com/7308857/china-isnt-ignoring-ai-regulation-the-u-s-shouldnt-either/" rel="">TIME</a></em>, <em><a href="https://stock.finance.sina.com.cn/stock/view/paper.php?symbol=sh000001&amp;reportid=807179279125" rel="">Sina</a></em>, and <em><a href="https://www.sinicapodcast.com/p/transcript-the-world-ai-conference" rel="">Sinica Podcast</a></em>.</li>
</ul>
</li>
<li>Signed strategic partnership agreements with several leading Chinese general-purpose AI developers to provide advice on AI safety and risk management best practices.</li>
<li>Provided comprehensive advice on compliance with the EU AI Act and General-Purpose AI Code of Practice to leading Chinese general-purpose AI developers. This work included co-hosting a workshop on “EU Code of Practice &amp; Industry Best Practices: Towards a Global Standard for AI Risk Management, Safety and Security” with SaferAI, the Oxford Martin AI Governance Initiative, and the Safe AI Forum.</li>
<li>Presented on frontier AI risk management during a closed-door workshop at the China AI Industry Alliance’s 15th Plenum Meeting, in the context of its <a href="https://aihub.caict.ac.cn/ai_security_and_safety_commitments" rel="">Disclosure of Practices on the AI Security and Safety Commitments</a>.</li>
<li>Presented on risk management for open-weight frontier models at a <a href="https://mp.weixin.qq.com/s/VGXNHVfo_c5A51yKEdrjvQ" rel="">workshop</a> (“Academic Symposium on AI Industry Development and Legislation”) at Tongji University.</li>
</ul>
</li>
</ul>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-698 size-full" src="https://concordia-ai.com/wp-content/uploads/2025/07/英语封面（无版本号）-scaled-e1772778132808.jpg" alt="" width="1810" height="1080" srcset="https://concordia-ai.com/wp-content/uploads/2025/07/英语封面（无版本号）-scaled-e1772778132808.jpg 1810w, https://concordia-ai.com/wp-content/uploads/2025/07/英语封面（无版本号）-scaled-e1772778132808-300x179.jpg 300w, https://concordia-ai.com/wp-content/uploads/2025/07/英语封面（无版本号）-scaled-e1772778132808-1024x611.jpg 1024w, https://concordia-ai.com/wp-content/uploads/2025/07/英语封面（无版本号）-scaled-e1772778132808-150x90.jpg 150w, https://concordia-ai.com/wp-content/uploads/2025/07/英语封面（无版本号）-scaled-e1772778132808-768x458.jpg 768w, https://concordia-ai.com/wp-content/uploads/2025/07/英语封面（无版本号）-scaled-e1772778132808-1536x917.jpg 1536w" sizes="auto, (max-width: 1810px) 100vw, 1810px" /></p>
<ul>
<li><strong>Frontier AI risk monitoring and evaluation:</strong>
<ul>
<li>Contributed to the “<a href="https://concordia-ai.com/research/frontier-ai-risk-management-framework-in-practice-a-risk-analysis-technical-report/" rel="">Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report</a>” led by Shanghai AI Lab. We assessed critical risks from more than 20 frontier LLMs in the following areas: cyber offense, biological and chemical risks, persuasion and manipulation, uncontrolled autonomous AI R&amp;D, strategic deception and scheming, self-replication, and collusion. The report was covered by <a href="https://jack-clark.net/2025/07/28/import-ai-422-llm-bias-china-cares-about-the-same-safety-risks-as-us-ai-persuasion/" rel="">Jack Clark’s </a><em><a href="https://jack-clark.net/2025/07/28/import-ai-422-llm-bias-china-cares-about-the-same-safety-risks-as-us-ai-persuasion/" rel="">Import AI</a></em>.</li>
<li>Launched the <a href="https://airiskmonitor.net:18615/doc/en/report/202507" rel="">AI Risk Monitoring Platform</a>, designed to track and mitigate frontier AI risks across four domains: cyber offense, biological threats, chemical threats, and loss of control. The platform evaluates 50 frontier LLMs from 15 leading developers across the US, China, and France, using 18 open-source benchmarks. Key outputs include a risk index dashboard and a detailed technical report. This project was spearheaded by our AI Safety Research Senior Manager WANG Weibing (王伟冰).</li>
<li>The platform received coverage from several major media outlets, including <em><a href="https://www.peopleapp.com/column/30050743110-500007199542" rel="">People’s Daily</a></em>, <em><a href="https://www.scmp.com/tech/tech-trends/article/3331952/chinese-ai-models-comparable-us-ones-frontier-risks-study-finds" rel="">South China</a> <a href="https://www.scmp.com/tech/policy/article/3334376/deepseek-alibaba-researchers-endorse-chinas-misunderstood-ai-regulatory-framework" rel="">Morning Post</a></em>, <em>Xinhua</em>’s <em><a href="https://www.jjckb.cn/20251119/dbb052fbebf9490b926eb42d5f4db308/c.html" rel="">Economic Information Daily</a></em>, and <em><a href="https://eu.36kr.com/en/p/3556948076575618" rel="">IT Times</a></em>.</li>
</ul>
</li>
</ul>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1391" src="https://concordia-ai.com/wp-content/uploads/2026/03/image-scaled.png" alt="" width="2560" height="1303" srcset="https://concordia-ai.com/wp-content/uploads/2026/03/image-scaled.png 2560w, https://concordia-ai.com/wp-content/uploads/2026/03/image-300x153.png 300w, https://concordia-ai.com/wp-content/uploads/2026/03/image-1024x521.png 1024w, https://concordia-ai.com/wp-content/uploads/2026/03/image-150x76.png 150w, https://concordia-ai.com/wp-content/uploads/2026/03/image-768x391.png 768w, https://concordia-ai.com/wp-content/uploads/2026/03/image-1536x782.png 1536w, https://concordia-ai.com/wp-content/uploads/2026/03/image-2048x1042.png 2048w" sizes="auto, (max-width: 2560px) 100vw, 2560px" /></p>
<div>
<div class="single-post-container" role="main" aria-label="Post">
<div class="container">
<div class="single-post">
<article class="typography newsletter-post post">
<div class="available-content">
<div class="body markup" dir="auto">
<ul>
<li><strong>National standards and policy guidance</strong>: Concordia AI is a member of key national and industry technical committees, contributing to the development of China’s AI safety standards.
<ul>
<li>National Information Security Standardization Technical Committee (SAC/TC260): As part of SAC/TC260 Special Working Group on Emerging Technology Safety, Concordia AI contributed to the standard for <strong>“Classification and Grading Methods for the Security of Artificial Intelligence Applications.”</strong></li>
<li>National Information Technology Standardization Technical Committee (SAC/TC28/SC42): As a member of the AI Subcommittee, Concordia AI contributed to <strong>“Artificial intelligence—Risk management capability assessment.”</strong></li>
<li><strong>Ministry of Industry and Information Technology AI Standardization Committee (MIIT/TC1)</strong>: Concordia AI joined the Working Group on AI Safety Governance.</li>
<li>Guangdong-Hong Kong-Macao Greater Bay Area local standards: As a member of the Greater Bay Area working group of SAC/TC28/SC42, Concordia AI played a key role in the development of the Shenzhen local standard<strong> “Technical Framework for Value Alignment of Pre-trained AI Models.”</strong></li>
</ul>
</li>
<li><strong>AIxBiosecurity Governance:</strong>
<ul>
<li>Published a Chinese-language report “<a href="https://concordia-ai.com/research/responsible-innovation-in-ai-x-life-sciences/" rel="">Responsible Innovation in AI x Life Sciences</a>” with Tianjin University’s Center for Biosafety Research and Strategy. The 70-page report draws on more than 300 sources to explore AI-biotech convergence, its benefits and risks, and governance recommendations for diverse stakeholders.
<ul>
<li>Head of AI Safety and Governance (China) FANG Liang (方亮) presented the report at a biosecurity seminar hosted by <a href="https://mp.weixin.qq.com/s?__biz=Mzg4NTgxNjEwMg==&amp;mid=2247501825&amp;idx=1&amp;sn=6f1e0e492aec0f1ae294f2c0ff116f00&amp;scene=21#wechat_redirect" rel="">China’s National Key Laboratory of Synthetic Biotechnology</a>.</li>
<li>Presented the report at the <a href="https://www.linkedin.com/posts/anita-cicero-76900b3a_i-am-pleased-to-have-participated-in-serious-activity-7381972208744550400-0axx/" rel="">2025 International Symposium on Global Biosecurity Governance and Cooperation</a>, co-hosted by the National Biosecurity Expert Committee of China, Guangzhou Laboratory, and China Foreign Affairs University.</li>
<li>Presented the report at the “Symposium on Trends and Development Strategies for the Integration of Biotechnology and AI” hosted by the <a href="https://www.cncb.ac.cn/" rel="">China National Center for Bioinformation</a>.</li>
</ul>
</li>
<li>Participated in the “Closed-door Seminar on DNA Synthesis Screening Technology and Policy” held at China Foreign Affairs University.</li>
</ul>
</li>
<li><strong>WeChat publications:</strong>
<ul>
<li>Published over 80 new posts on our WeChat Official Account, reaching over 4,900 subscribers across China’s AI ecosystem, including policymakers, industry professionals, and academic researchers.</li>
<li>The articles provide Chinese stakeholders with updates on key global AI safety and governance developments. Highlights include a <a href="https://mp.weixin.qq.com/s/7noQvr-ka_JHlYiBRC09OQ" rel="">series of articles</a> on frontier AI safety frameworks by our AI Safety and Governance Senior Manager CHENG Yuan (程远); legal explainers on the <a href="https://mp.weixin.qq.com/s/JEEOVqxKks30pmuJLJStfQ" rel="">EU General-Purpose AI Code of Practice</a> and <a href="https://mp.weixin.qq.com/s/P8G0U8vZZh4DWt0WhliZUA" rel="">California SB-53</a>; and overviews of technical AI safety research on topics like <a href="https://mp.weixin.qq.com/s/AKow85xZYwZwzEPEDfhuOw" rel="">deception</a> and <a href="https://mp.weixin.qq.com/s/3SMZIGAHyiM8NMCVymeoxA" rel="">self-replication</a> risks.</li>
</ul>
</li>
</ul>
<h2 class="header-anchor-post">Organizational updates</h2>
<ul>
<li>We established our Singapore office, welcoming our first full-time staff based in Singapore. Singapore’s status as a global hub and international convenor has enabled us to bring together stakeholders from China, the US, and Southeast Asia, significantly expanding our capacity to shape international discussions on AI governance.</li>
<li>We expanded the team from eight to twelve members in 2025, and will soon grow to eighteen staff.</li>
<li>We recruited our seventh cohort of 34 affiliates, who supported both China-focused and international workstreams.</li>
<li>We launched a new <a href="https://concordia-ai.com/" rel="">website</a> with refreshed branding, and updated our brochure and <a href="https://mp.weixin.qq.com/s/7RQ4SL1Vy1PEDtG0gBpwHw" rel="">Chinese materials</a>.</li>
<li>We became a formal member of the <a href="https://partnershiponai.org/partnership-on-ai-welcomes-10-new-partners/" rel="">Partnership on AI</a> and the <a href="https://www.iaseai.org/affiliates" rel="">International Association of Safe and Ethical AI</a> affiliate program.</li>
</ul>
<div class="captioned-image-container" style="text-align: center;">
<figure>
<div class="image2-inset can-restack">
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-402" src="https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-scaled.jpg" alt="" width="2560" height="1281" srcset="https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-scaled.jpg 2560w, https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-300x150.jpg 300w, https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-1024x512.jpg 1024w, https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-768x384.jpg 768w, https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-1536x769.jpg 1536w, https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-2048x1025.jpg 2048w" sizes="auto, (max-width: 2560px) 100vw, 2560px" /></p>
<div class="image-link-expand">
<div class="pencraft pc-display-flex pc-gap-8 pc-reset"><em><span style="color: #999999;">Concordia AI team</span></em></div>
</div>
</div>
</figure>
</div>
<hr />
<div class="footnote" data-component-name="FootnoteToDOM">
<p><a id="footnote-1-189963930" class="footnote-number" contenteditable="false" href="https://aisafetychina.substack.com/p/concordia-ai-2025-impact-highlights?utm_source=post-email-title&amp;publication_id=1862105&amp;post_id=189963930&amp;utm_campaign=email-post-title&amp;isFreemail=true&amp;r=6j32gs&amp;triedRedirect=true&amp;utm_medium=email#footnote-anchor-1-189963930" target="_self" rel="">1</a></p>
<div class="footnote-content">
<p><em>Afternote: We co-organised the “AI Crisis Management” workshop on the sidelines of the Munich Security Conference in February 2026, as a continuation of this workshop series.</em></p>
</div>
</div>
</div>
</div>
</article>
</div>
</div>
</div>
</div>
</div>
</figure>
</div>
</div>
</figure>
</div>
<div class="captioned-image-container">
</div><p>The post <a href="https://concordia-ai.com/concordia-ai-2025-impact-highlights/">Concordia AI 2025 Impact Highlights</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>10 Key Insights from Concordia AI’s “Frontier AI Risk Monitoring Platform”</title>
		<link>https://concordia-ai.com/10-key-insights-from-concordia-ais-frontier-ai-risk-monitoring-platform/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=10-key-insights-from-concordia-ais-frontier-ai-risk-monitoring-platform</link>
		
		<dc:creator><![CDATA[Concordia AI]]></dc:creator>
		<pubDate>Tue, 11 Nov 2025 10:52:16 +0000</pubDate>
				<category><![CDATA[Announcement]]></category>
		<guid isPermaLink="false">https://concordia-ai.com/?p=862</guid>

					<description><![CDATA[<p>Concordia AI has launched the Frontier AI Risk Monitoring Platform, along with our inaugural 2025 Q3 Monitoring Report. It tracks models from 15 leading developers worldwide for risks in four domains: cyber offense, biological risks, chemical risks, and loss-of-control, making it the first such platform in China focused on catastrophic risks. You can find more detail, including&#8230;</p>
<p>The post <a href="https://concordia-ai.com/10-key-insights-from-concordia-ais-frontier-ai-risk-monitoring-platform/">10 Key Insights from Concordia AI’s “Frontier AI Risk Monitoring Platform”</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Concordia AI has launched the <a href="https://substack.com/redirect/ec917922-623f-425c-88b5-facbb7840057?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener">Frontier AI Risk Monitoring Platform</a>, along with our inaugural <a href="https://substack.com/redirect/3d8361f7-cbe2-4561-b3a1-84d7f8b5638e?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener">2025 Q3 Monitoring Report</a>. It tracks models from 15 leading developers worldwide for risks in four domains: cyber offense, biological risks, chemical risks, and loss-of-control, making it the first such platform in China focused on catastrophic risks.</p>
<p>You can find more detail, including our <a href="https://substack.com/redirect/6962d40b-4e80-4fd9-b010-c36b839b4b72?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener">methodology</a>, on the interactive platform. The South China Morning Post (SCMP) has also covered the launch in an <a href="https://substack.com/redirect/eecee44e-f9ef-4946-877e-3904fd615999?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener">exclusive story</a>.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-863" src="https://concordia-ai.com/wp-content/uploads/2025/11/Frontier-AI-Risk-Monitoring-Platform.png" alt="" width="1449" height="907" srcset="https://concordia-ai.com/wp-content/uploads/2025/11/Frontier-AI-Risk-Monitoring-Platform.png 1449w, https://concordia-ai.com/wp-content/uploads/2025/11/Frontier-AI-Risk-Monitoring-Platform-300x188.png 300w, https://concordia-ai.com/wp-content/uploads/2025/11/Frontier-AI-Risk-Monitoring-Platform-1024x641.png 1024w, https://concordia-ai.com/wp-content/uploads/2025/11/Frontier-AI-Risk-Monitoring-Platform-150x94.png 150w, https://concordia-ai.com/wp-content/uploads/2025/11/Frontier-AI-Risk-Monitoring-Platform-768x481.png 768w" sizes="auto, (max-width: 1449px) 100vw, 1449px" /></p>
<h2>Why this matters</h2>
<p>As AI capabilities accelerate, we lack insight into some critical questions:</p>
<ul>
<li>What are the key trends and drivers for frontier AI risks?</li>
<li>Are these risks increasing or decreasing?</li>
<li>Where are the safety gaps most severe?</li>
</ul>
<p>Model developers publish self-assessments, but these lack standardization and independent verification. Ad hoc third-party evaluations don’t track changes over time. Policymakers, researchers, and developers need systematic data to make evidence-based decisions about AI safety. Our platform is our contribution to bridging these gaps.</p>
<h2>10 key insights</h2>
<h4>1. Frontier model risks have risen sharply over the past year</h4>
<p>Across all four domains—cyber offense, biological, chemical, and loss-of-control—Risk Indices for models released in the past year hit record highs. The cumulative maximum Risk Index rose 31% in cyber offense, 38% in biological risks, 17% in chemical risks, and 50% in loss-of-control.</p>
<h4>2. Risk index trends vary significantly across model families</h4>
<p>Over the past year, different model families have followed distinct risk trajectories:</p>
<ul>
<li><strong>Stable low risk: </strong>The GPT and Claude families maintain consistently low Risk Indices across all domains.</li>
<li><strong>Rise then fall: </strong>DeepSeek, Qwen, and MiniMax show early spikes followed by declines in cyber offense, biological, and chemical risks.</li>
<li><strong>Rapid risk increase: </strong>Grok shows sharp increases in loss-of-control risk, while Hunyuan rises steeply in biological risks.</li>
</ul>
<p>Notably, we found that the latest versions of Chinese models released over the past three months have shown a significant decline in risk levels across multiple areas. This is mainly due to stronger refusal of malicious or misuse-related requests.</p>
<h4>3. Reasoning models show higher capabilities without corresponding safety improvements</h4>
<p>Reasoning models score far higher in capability than non-reasoning ones, but their safety levels remain roughly the same. Most models on the Risk Pareto Frontier—a set of models where no other model has both a higher Capability Score and a lower Safety Score—are reasoning models.</p>
<h4>4. The capability and safety performance of open-weight models are generally on par with proprietary models</h4>
<p>The most capable models are predominantly proprietary, but across the broader landscape, capability and safety levels of open-weight and proprietary models are similar. Only in biological risks do open-weight models score notably lower.</p>
<p><em>Note: Comparable benchmark results do not mean comparable real-world risk. The open-weight nature itself is a key variable affecting risk: it might increase risk by lowering the barrier for malicious fine-tuning; it could also reduce risk by empowering defenders. Due to concerns about misuse, we have set a lower Safety Coefficient for open-weight models, which results in a higher Risk Index compared to proprietary models.</em></p>
<h4>5. Cyberattack capabilities of frontier models are growing rapidly</h4>
<p>Frontier models are showing rapid growth in capabilities across multiple cyberattack benchmarks:</p>
<ul>
<li><strong>WMDP-Cyber (cyberattack knowledge):</strong> Top score rose from 68.9 to 88.0 in one year.</li>
<li><strong>CyberSecEval2-VulnerabilityExploit (vulnerability exploitation): </strong>Top score jumped from 55.4 to 91.7.</li>
<li><strong>CyBench (capture the flag): </strong>Top score increased from 25.0 to 40.0.</li>
</ul>
<h4>6. Biological capabilities of frontier models have partially surpassed human expert levels</h4>
<p>Frontier models now match or exceed human experts on several biological benchmarks.</p>
<ul>
<li><strong>BioLP-Bench:</strong> Four models, including o4-mini, outperform human experts in troubleshooting biological protocols.</li>
<li><strong>LAB-Bench-CloningScenarios: </strong>Two models, including Claude Sonnet 4.5 Reasoning, surpass expert performance in cloning experiment scenarios.</li>
<li><strong>LAB-Bench-SeqQA: </strong>The top-scoring model, GPT-5 (high), nears human-level understanding of DNA and protein sequences (71.5 vs. 79).</li>
</ul>
<h4>7. But most frontier models have inadequate biological safeguards</h4>
<p>Two benchmarks measuring model refusal rates for harmful biological queries show that biological safeguards are lacking:</p>
<ul>
<li><strong>SciKnowEval:</strong> Only 40% of models refused over 80% of harmful prompts, while 35% refused fewer than 50%.</li>
<li><strong>SOSBench-Bio: </strong>Just 15% exceeded an 80% refusal rate, and 35% fell below 20%.</li>
</ul>
<h4>8. Chemical capabilities and safety levels of frontier models are improving slowly</h4>
<p><strong>WMDP-Chem</strong> scores—measuring knowledge relevant to chemical weapons—have risen slightly over the past year, with little variation across models.</p>
<p><strong>SOSBench-Chem</strong> results vary widely: only 30% of models refuse over 80% of harmful queries, while 25% refuse fewer than 40%. Overall, refusal rates show minimal improvement year over year.</p>
<h4>9. Most frontier models have insufficient safeguards against jailbreaking</h4>
<p><strong>StrongReject</strong> evaluates defenses against 31 jailbreak methods. Only 40% of models scored above 80, while 20% fell below 60 (a higher score indicates stronger safeguards). Across all tests, only the Claude and GPT families consistently maintained scores above 80.</p>
<h4>10. Most frontier models fall short on honesty</h4>
<p><strong>MASK</strong> is a benchmark for evaluating model honesty. Only four models scored above 80 points, while 30% of the models scored below 50 points (a higher score indicates a more honest model). Honesty is an important proxy and early warning indicator for loss-of-control risk—dishonest models may misrepresent their capabilities, or provide misleading information about their actions and intentions.</p>
<h2>What’s next</h2>
<p>This is just the beginning. We’re working to:</p>
<ul>
<li>Expand to AI agents, multimodal models, and domain-specific models.</li>
<li>Add new risk domains like large-scale persuasion.</li>
<li>Develop more sophisticated capability elicitation and threat modeling.</li>
<li>Assess both attacker and defender empowerment.</li>
<li>Improve benchmark quality and multilingual coverage.</li>
</ul>
<h2>Get involved</h2>
<p>This is a living project, and we welcome feedback. We’re also seeking partners for benchmark development, risk assessment research, pre-release evaluations, and risk information sharing. More details on avenues for collaboration are available in the <a href="https://substack.com/redirect/a66d29eb-60dd-4328-8e98-97b658c0311e?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener">full report</a>. Contact: <a href="https://substack.com/redirect/8dc7e48c-948f-4fc2-af26-c568b29adb87?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener">risk-monitor@concordia-ai.com</a>.</p><p>The post <a href="https://concordia-ai.com/10-key-insights-from-concordia-ais-frontier-ai-risk-monitoring-platform/">10 Key Insights from Concordia AI’s “Frontier AI Risk Monitoring Platform”</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Concordia AI Joins the Partnership on AI</title>
		<link>https://concordia-ai.com/concordia-ai-joins-the-partnership-on-ai/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=concordia-ai-joins-the-partnership-on-ai</link>
		
		<dc:creator><![CDATA[Concordia AI]]></dc:creator>
		<pubDate>Thu, 23 Oct 2025 07:47:22 +0000</pubDate>
				<category><![CDATA[Announcement]]></category>
		<guid isPermaLink="false">https://concordia-ai.com/?p=854</guid>

					<description><![CDATA[<p>We’re proud to join the Partnership on AI (PAI) in advancing responsible AI development. Together, we aim to promote transparency and accountability, ensuring that AI systems strengthen public trust and serve the common good. Concordia AI has long admired PAI’s work—our CEO Brian Tse previously served as a Senior Advisor to the organization, and we&#8230;</p>
<p>The post <a href="https://concordia-ai.com/concordia-ai-joins-the-partnership-on-ai/">Concordia AI Joins the Partnership on AI</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400;">We’re proud to join the Partnership on AI (PAI) in advancing responsible AI development. Together, we aim to promote transparency and accountability, ensuring that AI systems strengthen public trust and serve the common good.</span></p>
<p><span style="font-weight: 400;">Concordia AI has long admired PAI’s work—our CEO Brian Tse previously served as a Senior Advisor to the organization, and we were thrilled to welcome PAI CEO Rebecca Finlay in Shanghai at the 2025 World AI Conference.</span></p>
<p><span style="font-weight: 400;">We look forward to building on this shared commitment to safe and ethical AI.</span></p>
<p><span style="font-weight: 400;">Learn more about </span><a href="https://partnershiponai.org/partners/?country=china"><span style="font-weight: 400;">PAI’s partners</span></a><span style="font-weight: 400;">. </span></p><p>The post <a href="https://concordia-ai.com/concordia-ai-joins-the-partnership-on-ai/">Concordia AI Joins the Partnership on AI</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Concordia AI Joins the IASEAI Affiliate Program</title>
		<link>https://concordia-ai.com/concordia-ai-joins-the-iaseai-affiliate-program/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=concordia-ai-joins-the-iaseai-affiliate-program</link>
		
		<dc:creator><![CDATA[Concordia AI]]></dc:creator>
		<pubDate>Thu, 23 Oct 2025 07:31:18 +0000</pubDate>
				<category><![CDATA[Announcement]]></category>
		<guid isPermaLink="false">https://concordia-ai.com/?p=851</guid>

					<description><![CDATA[<p>We’re proud to share that Concordia AI has joined the founding cohort of the International Association for Safe and Ethical AI (IASEAI) affiliate program. Our team was honored to participate in the inaugural IASEAI Conference, where Concordia AI CEO Brian Tse spoke on the panel “Global Perspectives on AI Safety and Ethics.” It was an&#8230;</p>
<p>The post <a href="https://concordia-ai.com/concordia-ai-joins-the-iaseai-affiliate-program/">Concordia AI Joins the IASEAI Affiliate Program</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400;">We’re proud to share that Concordia AI has joined the founding cohort of the International Association for Safe and Ethical AI (IASEAI) affiliate program.</span></p>
<p><span style="font-weight: 400;">Our team was honored to participate in the inaugural IASEAI Conference, where Concordia AI CEO Brian Tse spoke on the panel </span><i><span style="font-weight: 400;">“Global Perspectives on AI Safety and Ethics.”</span></i><span style="font-weight: 400;"> It was an inspiring opportunity to exchange ideas on advancing responsible AI governance and safety worldwide.</span></p>
<p><span style="font-weight: 400;">Stay tuned for what’s next—beginning with IASEAI 2026, taking place in Paris this coming February.</span></p>
<p><span style="font-weight: 400;">Learn more about </span><a href="https://www.iaseai.org/affiliates"><span style="font-weight: 400;">IASEAI affiliates</span></a><span style="font-weight: 400;">.</span></p><p>The post <a href="https://concordia-ai.com/concordia-ai-joins-the-iaseai-affiliate-program/">Concordia AI Joins the IASEAI Affiliate Program</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Concordia AI CEO Brian Tse on The Cognitive Revolution Podcast</title>
		<link>https://concordia-ai.com/concordia-ai-ceo-brian-tse-on-the-cognitive-revolution-podcast/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=concordia-ai-ceo-brian-tse-on-the-cognitive-revolution-podcast</link>
		
		<dc:creator><![CDATA[Concordia AI]]></dc:creator>
		<pubDate>Wed, 22 Oct 2025 08:30:32 +0000</pubDate>
				<category><![CDATA[Media Coverage]]></category>
		<guid isPermaLink="false">https://concordia-ai.com/?p=847</guid>

					<description><![CDATA[<p>On October 18, 2025, Concordia AI CEO Brian Tse joined Nathan Labenz on The Cognitive Revolution podcast to discuss China’s approach to AI development, safety, and governance. The conversation covers China’s pragmatic vision emphasizing AI integration into the economy, the country’s multiple AI hubs, regulations requiring pre-deployment testing and AI content labelling, and areas where&#8230;</p>
<p>The post <a href="https://concordia-ai.com/concordia-ai-ceo-brian-tse-on-the-cognitive-revolution-podcast/">Concordia AI CEO Brian Tse on The Cognitive Revolution Podcast</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400;">On October 18, 2025, Concordia AI CEO Brian Tse joined Nathan Labenz on </span><i><span style="font-weight: 400;">The Cognitive Revolution</span></i><span style="font-weight: 400;"> podcast to discuss China’s approach to AI development, safety, and governance.</span></p>
<p><span style="font-weight: 400;">The conversation </span><span style="font-weight: 400;">covers China’s pragmatic vision emphasizing AI integration into the economy, the country’s multiple AI hubs, regulations requiring pre-deployment testing and AI content labelling, and areas where China&#8217;s approach overlaps with that of the U.S. It also covers chips and export controls, Huawei&#8217;s rise, DeepSeek&#8217;s peer-reviewed article in Nature, open-weight models, Singapore&#8217;s role as a bridge, and pathways for cooperation such as shared red lines, risk management frameworks, and emergency preparedness protocols. They also touch on embodied AI and humanoid robots, public optimism, and real labor anxieties.</span></p>
<p><span style="font-weight: 400;">Many thanks to Nathan for hosting such a thoughtful conversation. We hope it contributes to bridging international understanding on AI safety and governance in China.</span></p>
<p>Access the full podcast <a href="https://www.cognitiverevolution.ai/chinese-ai-they-re-just-like-us-with-beijing-based-concordia-ai-ceo-brian-tse/">here</a>.</p><p>The post <a href="https://concordia-ai.com/concordia-ai-ceo-brian-tse-on-the-cognitive-revolution-podcast/">Concordia AI CEO Brian Tse on The Cognitive Revolution Podcast</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Concordia AI: 2025 Mid-Year Impact Report</title>
		<link>https://concordia-ai.com/concordia-ai-2025-mid-year-impact-report/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=concordia-ai-2025-mid-year-impact-report</link>
		
		<dc:creator><![CDATA[Concordia AI]]></dc:creator>
		<pubDate>Wed, 24 Sep 2025 15:21:30 +0000</pubDate>
				<category><![CDATA[Announcement]]></category>
		<guid isPermaLink="false">https://concordia-ai.com/?p=826</guid>

					<description><![CDATA[<p>Our mission is to ensure that AI is developed and deployed in a way that is safe and aligned with global interests. We advance global AI safety by conducting research, advising leading AI companies and policymakers, and promoting international dialogue. Below are some of our key accomplishments from January to July 2025. (see 2024 highlights&#8230;</p>
<p>The post <a href="https://concordia-ai.com/concordia-ai-2025-mid-year-impact-report/">Concordia AI: 2025 Mid-Year Impact Report</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Our mission is to ensure that AI is developed and deployed in a way that is safe and aligned with global interests. We advance global AI safety by conducting research, advising leading AI companies and policymakers, and promoting international dialogue. Below are some of our key accomplishments from January to July 2025. (<a href="https://substack.com/redirect/94d43be2-5879-4dbd-980d-fc1a34b6abbc?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/94d43be2-5879-4dbd-980d-fc1a34b6abbc?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw3bS1KIgqtNWjksWYHRTmOp">see 2024 highlights here</a>).</p>
<h2><strong>Advancing international coordination on AI safety and governance</strong></h2>
<h3><em>Research Impact and Engagement</em></h3>
<ul>
<li><strong>International AI Safety Report:</strong>
<ul>
<li>Head of International AI Governance Kwan Yee Ng (吴君仪) contributed to the <a href="https://substack.com/redirect/fc50b0e5-0c04-423b-bb52-52d323693beb?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/fc50b0e5-0c04-423b-bb52-52d323693beb?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw2oH-825vc00RJ7ZF3OnlHB">first International AI Safety Report</a> as one of the writers and is continuing in that role for the 2026 edition. Chaired by Turing Award winner Yoshua Bengio, the report is supported by an expert panel representing 30 countries, including China, as well as experts from the EU and the UN. Concordia AI also provided feedback to the report, including editing the <a href="https://substack.com/redirect/8c2a4ed6-3412-4b03-9596-c8d360bf0f39?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/8c2a4ed6-3412-4b03-9596-c8d360bf0f39?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw3N7nfjwLTb0pnWcMZNCivV">Chinese translation</a> of its summary materials.</li>
</ul>
</li>
<li><strong>China AI safety and governance analysis:</strong>
<ul>
<li>Published the “<a href="https://substack.com/redirect/43009ff2-4b73-4531-a0a4-c828bc2335c7?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/43009ff2-4b73-4531-a0a4-c828bc2335c7?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw1j-OrCyrZ6apim2S7lh3ND">State of AI Safety in China 2025</a>” report, covering developments May 2024–June 2025. The report was cited by a number of media outlets including <a href="https://substack.com/redirect/a1a61314-2430-493e-9e48-dc25537d8fc4?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/a1a61314-2430-493e-9e48-dc25537d8fc4?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw1FMYui5ORaIdxK4euifGA5">Wired</a>, <a href="https://substack.com/redirect/1de4a0a0-25e6-48cc-9144-1092ca57bd63?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/1de4a0a0-25e6-48cc-9144-1092ca57bd63?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw31jTFC4i0uZ518RYt78g9Z">Bloomberg</a>, and <a href="https://substack.com/redirect/0b1b2bca-3537-4408-9bca-a2cc0ff86ca0?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/0b1b2bca-3537-4408-9bca-a2cc0ff86ca0?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw1T4Lim67IiSKhRkCzTBIuT">The People’s Daily</a> (the largest newspaper in China).</li>
<li>CEO Brian Tse (谢旻希) authored an op-ed titled “China Is Taking AI Safety Seriously. So Must the U.S.” in <a href="https://substack.com/redirect/1e01c04e-f7af-4460-b8ec-91ea65a06fea?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/1e01c04e-f7af-4460-b8ec-91ea65a06fea?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw2dWGQwhtJyvUnNUFHTbjWa">Time Magazine</a>; International AI Governance Senior Research Manager Jason Zhou and International AI Governance Part-time Researcher Gabriel Wagner <a href="https://substack.com/redirect/a1590367-92a2-4f08-86ad-3f47a096a672?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/a1590367-92a2-4f08-86ad-3f47a096a672?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw1j2XZmM9ZI7JFAL72YplHI">analyzed</a> AI safety implications of China’s April Politburo study session for Stanford DigiChina Forum; <a href="https://substack.com/redirect/59284c16-f67f-4ab1-958b-a18d2943a877?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/59284c16-f67f-4ab1-958b-a18d2943a877?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw2Q4iqESj8ee_Q70bddn_NY">interviewed</a> by CGTN on China’s approaches in AI innovation and global governance.</li>
<li>Published over 10 new &#8220;<a href="https://substack.com/redirect/fcf13d28-bbd9-4d5c-838b-da9c75cb2d1c?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/fcf13d28-bbd9-4d5c-838b-da9c75cb2d1c?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw2RplKq2ch3nTYX2-YgJ7oP">AI Safety in China</a>&#8221; newsletter issues, reaching over 1,400 subscribers across governments, top AI labs, and AI safety institutes.</li>
</ul>
</li>
<li><strong>Singapore AI safety and governance analysis:</strong>
<ul>
<li>Published “<a href="https://substack.com/redirect/c17ee9af-02ac-4feb-823f-677b77f55bc1?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/c17ee9af-02ac-4feb-823f-677b77f55bc1?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw3mWDuLknLPKqwKBdgOp_Wz">State of AI Safety in Singapore</a>” report, the first comprehensive analysis of Singapore’s AI safety ecosystem.</li>
</ul>
</li>
</ul>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-710" src="https://concordia-ai.com/wp-content/uploads/2025/07/CN2-scaled.jpg" alt="" width="281" height="397" /><img loading="lazy" decoding="async" class="alignnone wp-image-707" src="https://concordia-ai.com/wp-content/uploads/2025/07/SG5-scaled.jpg" alt="" width="281" height="398" /></p>
<h3 style="text-align: left;"><em>Multilateral Initiatives</em></h3>
<ul>
<li><strong>Global AI Summit series: </strong>Participated in the French AI Action Summit, including:
<ul>
<li>Brian Tse was invited as a Chinese civil society representative to the AI Action Summit in the Grand Palais.</li>
<li>Co-hosted the workshop “AI Safety as a Collective Challenge” on the sidelines of the Summit, alongside the Carnegie Endowment for International Peace (CEIP), Oxford Martin AI Governance Initiative (AIGI), Tsinghua University Center for International Security and Strategy (CISS), and Tsinghua Institute for AI International Governance (I-AIIG). During the event, Concordia AI co-published the report “<a href="https://substack.com/redirect/96d2d01b-d7bf-4c88-a81a-90807fc6f388?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/96d2d01b-d7bf-4c88-a81a-90807fc6f388?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw11xWgAeAxoO9bQtHNrwAaX">Examining AI Safety as a Global Public Good: Implications, Challenges, and Research Priorities</a>.”</li>
<li>Invited to a closed-door seminar hosted by the China AI Safety &amp; Development Association (CnAISDA). During the subsequent public side event, Turing Award Winner Andrew Yao <a href="https://substack.com/redirect/97efc6a2-3ff3-4a9c-8cfe-5c8280a8ca0a?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/97efc6a2-3ff3-4a9c-8cfe-5c8280a8ca0a?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw3P9nliv2E-lSHlgDrmKMlG">cited</a> Concordia AI’s State of AI Safety in China report series when describing the increase in AI safety research in Chinese institutions.</li>
<li>Attended the inaugural conference of International Association of Safe and Ethical AI (IASEAI), where Brian Tse spoke on the panel “Global Perspectives on AI Safety and Ethics.”</li>
<li>Brian Tse delivered a presentation at the France-China AI Association (Association d&#8217;Intelligence Artificielle France-Chine), which was cited by outlets including <a href="https://substack.com/redirect/696b4651-ac60-44fb-b576-86a8529c6591?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/696b4651-ac60-44fb-b576-86a8529c6591?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw3hIqruptDFRcPcrTbhJe3t">Xinhua</a>.</li>
</ul>
</li>
<li><strong>United Nations:</strong>
<ul>
<li>Provided <a href="https://substack.com/redirect/c3e3d5de-aab3-4c6e-b476-6df40c89863b?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/c3e3d5de-aab3-4c6e-b476-6df40c89863b?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw0A_akokjpOVMuAQAa0cwFh">written inputs</a> and participated in consultations regarding the UN’s Independent International Scientific Panel on AI and Global Dialogue on AI.</li>
<li>Brian Tse <a href="https://substack.com/redirect/7dd6326c-e291-4640-9f43-4d591fdb5f72?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/7dd6326c-e291-4640-9f43-4d591fdb5f72?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw1k4BvougF_XVuBJWMr9EBN">spoke</a> on the panel “From Principles to Practice—Governing Advanced AI in Action” at the <a href="https://substack.com/redirect/f3cf7939-9614-409b-a510-26b71fae55b6?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/f3cf7939-9614-409b-a510-26b71fae55b6?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw3wOZ4ZXcHlIZdt5cFYVXK5">AI for Good Summit 2025</a>.</li>
</ul>
</li>
<li><strong>Global AIxBiosecurity governance: </strong>We contributed to a number of critical global discussions at the intersection of AI and biosecurity:
<ul>
<li>Brian Tse signed the “<a href="https://substack.com/redirect/77d0cfff-9b28-474c-a17d-15a15eabf3d7?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/77d0cfff-9b28-474c-a17d-15a15eabf3d7?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw2Y1RaniI-YflowY_RBNZsN">Statement on Biosecurity Risks at the Convergence of AI and the Life Sciences</a>,” alongside world-renowned experts such as Andrew Yao, Yoshua Bengio, and George Church. Provided inputs to the Statement as a member of the <a href="https://substack.com/redirect/6a4c2de6-d5b5-4d98-9d80-97c2b591e520?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/6a4c2de6-d5b5-4d98-9d80-97c2b591e520?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw0ef3tlZw3zCS5eNWrYXVPk">AIxBio Global Forum</a>, a platform for international experts and policymakers to identify and reduce biosecurity risks associated with the convergence of AI and the life sciences.</li>
<li>Presented at the &#8220;A Call to Action: The AIxBio Global Forum Statement on Biosecurity Risks at the Convergence of AI and the Life Sciences&#8221; event during <a href="https://substack.com/redirect/3d6aff06-9bd9-4352-a70f-845f57448efe?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/3d6aff06-9bd9-4352-a70f-845f57448efe?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw09w5lh79GgPa56EM6-qHyf">The Sixth Session of the Working Group on the Strengthening of the Biological Weapons Convention</a>.</li>
<li>Invited to present at the World Health Organization (WHO) dialogue on the implications of the convergence of AIxBio, for the <a href="https://substack.com/redirect/d5bca258-4928-4a25-8e09-0d11ac96adab?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/d5bca258-4928-4a25-8e09-0d11ac96adab?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw1NDsmkwQ71EjHJ5CUM2g_0">Technical Advisory Group on the Responsible Use of the Life Sciences and Dual-Use Research</a>.</li>
<li>Participated in a <a href="https://substack.com/redirect/fe51595b-5c18-4697-9e1c-276a583a3aa1?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/fe51595b-5c18-4697-9e1c-276a583a3aa1?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw17SQyzcuIl3UBF-IH9oTjn">AIxBio tabletop exercise at the 2025 Munich Security Conference</a>, hosted by the Nuclear Threat Initiative in collaboration with the Munich Security Conference, which led to the forthcoming report “Safeguarding Against Global Catastrophe: Risks, Opportunities, and Governance Options at the Intersection of Artificial Intelligence and Biology”.</li>
<li>Participated in a series of roundtable discussions on Responsible Innovation in AI for Peace and Security, including CBRN risks, hosted by Stockholm International Peace Research Institute and the United Nations Office for Disarmament Affairs (UNODA).</li>
<li>Participating in an ongoing track 2 dialogue involving Chinese, American, and international experts, which is formulating policy recommendations for AIxBio governance.</li>
</ul>
</li>
<li><strong>International expert consensus and statements:</strong>
<ul>
<li>Brian Tse participated in the International Dialogues on AI Safety-Shanghai, signing the <a href="https://substack.com/redirect/87b8fdb0-6393-4638-ae8d-70d99c48be13?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/87b8fdb0-6393-4638-ae8d-70d99c48be13?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw39M95VUVdIZfpNc9DKsRVT">Shanghai Consensus</a> on Ensuring Alignment and Human Control of Advanced AI Systems, alongside a Nobel laureate, Turing Award winners, and senior policymakers.</li>
<li>Brian Tse and Kwan Yee Ng contributed to and signed <a href="https://substack.com/redirect/b85aae5f-0c85-46b8-91cf-869338594324?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/b85aae5f-0c85-46b8-91cf-869338594324?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw3WUGdcKp0vHoGE_ro6CJ5U">The Singapore Consensus on Global AI Safety Research Priorities</a> during the Singapore Conference on AI 2025 (SCAI).</li>
</ul>
</li>
</ul>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-827" src="https://concordia-ai.com/wp-content/uploads/2025/09/unnamed.jpg" alt="" width="1456" height="720" srcset="https://concordia-ai.com/wp-content/uploads/2025/09/unnamed.jpg 1456w, https://concordia-ai.com/wp-content/uploads/2025/09/unnamed-300x148.jpg 300w, https://concordia-ai.com/wp-content/uploads/2025/09/unnamed-1024x506.jpg 1024w, https://concordia-ai.com/wp-content/uploads/2025/09/unnamed-150x74.jpg 150w, https://concordia-ai.com/wp-content/uploads/2025/09/unnamed-768x380.jpg 768w" sizes="auto, (max-width: 1456px) 100vw, 1456px" /></p>
<h2><strong>Convening AI safety conferences in China, Singapore, and globally</strong></h2>
<ul>
<li><strong>World AI Conference (WAIC), Shanghai</strong>
<ul>
<li>Hosted the <a href="https://substack.com/redirect/321328a6-1847-4e28-ad2e-b04356c8a54b?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/321328a6-1847-4e28-ad2e-b04356c8a54b?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw1_nGvbsIDEqNEQCPzibAhN">AI Safety and Governance Forum</a> at China&#8217;s most influential AI conference.
<ul>
<li>Convened around 30 distinguished experts, including Yoshua Bengio; United Nations Under-Secretary-General Amandeep Singh Gill; Shanghai AI Lab Director ZHOU Bowen (周伯文); Special Envoy of the President of France for AI Anne Bouverot; Distinguished Professor of Computer Science at UC Berkeley Stuart Russell; Peng Cheng Laboratory Director Academician GAO Wen (高文); CEO of the Partnership on AI Rebecca Finlay; Shanghai Artificial Intelligence Strategic Advisory Expert Committee member Academician HE Jifeng (何积丰); and many more leading figures from government, industry, and research.</li>
<li>Over 200 audience members joined in person, with over 14,000 views of the livestream, and media coverage from <a href="https://substack.com/redirect/1de4a0a0-25e6-48cc-9144-1092ca57bd63?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/1de4a0a0-25e6-48cc-9144-1092ca57bd63?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw31jTFC4i0uZ518RYt78g9Z">Bloomberg</a>, <a href="https://substack.com/redirect/e826da71-4c35-4cd9-8485-8fdb7919a95d?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/e826da71-4c35-4cd9-8485-8fdb7919a95d?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw3N_BeTGo-502VzebIpMD09">Wired</a>, <a href="https://substack.com/redirect/9628b07b-d265-4a3f-9e9a-a69c2fea9443?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/9628b07b-d265-4a3f-9e9a-a69c2fea9443?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw11C9YfIFwVzUok1qjj8EqM">Caixin</a>, <a href="https://substack.com/redirect/9b762041-c03a-41b8-a2d0-eb4606c9f84c?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/9b762041-c03a-41b8-a2d0-eb4606c9f84c?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw2WRDEmMGn4wiv5RXmp3b62">IT Times</a>, and <a href="https://substack.com/redirect/3ae4976b-1b01-442b-8db3-bef35eeb8ca7?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/3ae4976b-1b01-442b-8db3-bef35eeb8ca7?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw3HhpIwLLHTWuEzTucRo6IK">Tech Review Africa</a>.</li>
</ul>
</li>
<li>Served as official AI Governance Advisor for WAIC 2025.</li>
<li>Co-hosted a number of frontier AI safety workshops on the sidelines of WAIC:
<ul>
<li>Co-hosted a workshop on “Early Warning and Crisis Coordination for Advanced AI” with the Carnegie Endowment for International Peace, Oxford Martin School AI Governance Initiative, Oxford China Policy Lab, Tsinghua University Center for International Security and Strategy (CISS), and Tsinghua University Institute for AI International Governance (I-AIIG).</li>
<li>Co-hosted the “Convergence of AI and Biological Risks” workshop with the Tianjin University Center for Biosafety Research.</li>
<li>Hosted a workshop on “Towards International AI Risk Management Standards.”</li>
<li>Co-hosted the “International Workshop on AI Deception Risks and Governance” with Fudan University and the Safe AI Forum.</li>
</ul>
</li>
</ul>
</li>
<li><strong>Beijing Academy of AI Conference 2025</strong>
<ul>
<li>Co-hosted the “AI Safety Forum” with the Beijing Academy of Artificial Intelligence (BAAI) at the <a href="https://substack.com/redirect/7915648e-02fd-4c0d-bdbb-9175f033c01e?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/7915648e-02fd-4c0d-bdbb-9175f033c01e?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw0BBS6vurQmTOkAxdt_ASAd">BAAI Conference 2025</a>. The forum brought together leading technical experts from institutions including MIT, Fudan University, Singapore Management University, and Tsinghua University to build scientific consensus on technical evaluations for AI “red lines.”</li>
</ul>
</li>
<li><strong>Asia Tech x Singapore, 2025</strong>
<ul>
<li>Organised the <a href="https://substack.com/redirect/62acaa7b-17dc-4f46-b966-dd9a124de3b9?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/62acaa7b-17dc-4f46-b966-dd9a124de3b9?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw25COSBT3O62m2k1DPOPM5s">AI Risk Management Workshop</a>, with support from Singapore’s Infocomm Media Development Authority, bringing together 20+ global experts across policy, industry, AI assurance, and academia to explore actionable risk management approaches for AI systems.</li>
</ul>
</li>
<li><strong>International Conference on Learning Representations (ICLR 2025), Singapore</strong>
<ul>
<li>Co-hosted and participated in a series of events, including:
<ul>
<li>Co-hosted the “Frontier Governance Exchange” with Singapore AI Safety Hub, Lorong AI, and the Safe AI Forum.</li>
<li>Co-convened the “Misalignment and Control Workshop” and a 130+ person AI Safety Social with <a href="http://far.ai/" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=http://FAR.AI&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw0Gjd3YCtwZZXfPZQjcB-UT">FAR.AI</a>, the Safe AI Forum, and Singapore AI Safety Hub.</li>
<li>Kwan Yee Ng presented on AI Safety in China at <a href="http://far.ai/" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=http://FAR.AI&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw0Gjd3YCtwZZXfPZQjcB-UT">FAR.AI</a>’s <a href="https://substack.com/redirect/e27edb0c-1e5b-433a-b32d-6737026a4713?j=eyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=https://substack.com/redirect/e27edb0c-1e5b-433a-b32d-6737026a4713?j%3DeyJ1IjoiNWVudDAxIn0.zB3sDKg1awnASh1CpBsI3w1DRf4y90PmDO7yXYrrTnM&amp;source=gmail&amp;ust=1758798806700000&amp;usg=AOvVaw3Yq_BUQKBFxWdWfdnT6Fjx">Singapore Alignment Workshop 2025</a>.</li>
</ul>
</li>
</ul>
</li>
</ul>
<p><img loading="lazy" decoding="async" class="wp-image-751 aligncenter" src="https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-1.png" alt="" width="757" height="505" srcset="https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-1.png 936w, https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-1-300x200.png 300w, https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-1-150x100.png 150w, https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-1-768x512.png 768w" sizes="auto, (max-width: 757px) 100vw, 757px" /></p>
<div><img loading="lazy" decoding="async" class="wp-image-829 aligncenter" src="https://concordia-ai.com/wp-content/uploads/2025/09/9e8ec757-50f7-43f6-8aea-cf7884245f41_6240x4160-scaled.jpg" alt="" width="756" height="504" srcset="https://concordia-ai.com/wp-content/uploads/2025/09/9e8ec757-50f7-43f6-8aea-cf7884245f41_6240x4160-scaled.jpg 2560w, https://concordia-ai.com/wp-content/uploads/2025/09/9e8ec757-50f7-43f6-8aea-cf7884245f41_6240x4160-300x200.jpg 300w, https://concordia-ai.com/wp-content/uploads/2025/09/9e8ec757-50f7-43f6-8aea-cf7884245f41_6240x4160-1024x683.jpg 1024w, https://concordia-ai.com/wp-content/uploads/2025/09/9e8ec757-50f7-43f6-8aea-cf7884245f41_6240x4160-150x100.jpg 150w, https://concordia-ai.com/wp-content/uploads/2025/09/9e8ec757-50f7-43f6-8aea-cf7884245f41_6240x4160-768x512.jpg 768w, https://concordia-ai.com/wp-content/uploads/2025/09/9e8ec757-50f7-43f6-8aea-cf7884245f41_6240x4160-1536x1024.jpg 1536w, https://concordia-ai.com/wp-content/uploads/2025/09/9e8ec757-50f7-43f6-8aea-cf7884245f41_6240x4160-2048x1365.jpg 2048w" sizes="auto, (max-width: 756px) 100vw, 756px" /></div>
<div>
<h2 class="header-anchor-post"><strong>Advising leading AI companies and policymakers in China</strong></h2>
<ul>
<li><strong>National Standards and Policy Guidance: </strong>Concordia AI is a member of key national and industry technical committees, contributing to the development of China’s AI safety standards.
<ul>
<li><strong>National Information Security Standardization Technical Committee (SAC/TC260): </strong>As part of SAC/TC260 Special Working Group on Emerging Technology Safety, Concordia AI contributed to the standard for “Classification and Grading Methods for the Security of Artificial Intelligence Applications.”</li>
<li><strong>National Information Technology Standardization Technical Committee (SAC/TC28/SC42):</strong> As a member of the AI Subcommittee, Concordia AI contributed to the “Artificial intelligence—Risk management capability assessment.”</li>
<li><strong>Ministry of Industry and Information Technology AI Standardization Committee (MIIT/TC1): </strong>Concordia AI joined the Working Group on AI Safety Governance.</li>
<li><strong>Guangdong-Hong Kong-Macao Greater Bay Area local standards: </strong>As a member of the Greater Bay Area working group of SAC/TC28/SC42, Concordia AI played a key role in the development of the Shenzhen local standard “Technical Framework for Value Alignment of Pre-trained AI Models.”</li>
</ul>
</li>
<li><strong>Frontier AI Safety Risk Management and Best Practices:</strong>
<ul>
<li>Co-published the “<a href="https://aisafetychina.substack.com/p/shanghai-ai-lab-and-concordia-ai" rel="">Frontier AI Risk Management Framework v1.0</a>” with Shanghai AI Lab. It is China’s first comprehensive framework for managing severe risks from general-purpose AI models.
<ul>
<li>The Framework proposes a robust set of protocols for general-purpose AI developers, with comprehensive guidelines for proactively identifying, assessing, mitigating, and governing severe AI risks that threaten public safety and national security.</li>
<li>The Framework outlines a set of unacceptable outcomes (red lines) and early warning indicators for escalating safety and security measures (yellow lines) for areas including: cyber offense, biological threats, large-scale persuasion and harmful manipulation, and loss of control risks.</li>
</ul>
</li>
<li>Signed strategic partnership agreements with several leading Chinese general-purpose AI developers, providing advice on AI safety and risk management best practices.</li>
<li>Invited to present on frontier AI risk management during a closed-door workshop at the AI Industry Alliance of China’s 15th Plenum Meeting.</li>
<li>Co-hosted a workshop on “EU Code of Practice &amp; Industry Best Practices: Towards a Global Standard for AI Risk Management, Safety and Security” with SaferAI, the Oxford Martin AI Governance Initiative, and the Safe AI Forum.</li>
</ul>
</li>
<li><strong>Frontier AI Risk Monitoring and Evaluation:</strong>
<ul>
<li>Contributed to the <a href="https://concordia-ai.com/research/frontier-ai-risk-management-framework-in-practice-a-risk-analysis-technical-report/" rel="">Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report</a> led by Shanghai AI Lab. We assessed critical risks from more than 20 frontier LLMs in the following areas: cyber offense, biological and chemical risks, persuasion and manipulation, uncontrolled autonomous AI R&amp;D, strategic deception and scheming, self-replication, and collusion.</li>
<li>Soft-launched an <a href="https://airiskmonitor.net:18615/doc/en/report/202507" rel="">AI Risk Monitoring Platform</a> designed to track and mitigate frontier AI risks across four domains: cyber offense, biological threats, chemical threats, and loss of control. The platform evaluates 34 frontier LLMs from 11 leading developers across the U.S., China, and France, using 18 open-source benchmarks. Key outputs include a risk index dashboard and a detailed technical report.</li>
<li>AI Safety Research Manager DUAN Yawen (段雅文) co-authored “<a href="https://arxiv.org/abs/2504.15416" rel="">Bare Minimum Mitigations for Autonomous AI Development</a>.”</li>
</ul>
</li>
<li><strong>AIxBiosecurity Governance:</strong>
<ul>
<li>Published Chinese language report “<a href="https://concordia-ai.com/research/responsible-innovation-in-ai-x-life-sciences/" rel="">Responsible Innovation in AI x Life Sciences</a>” with Tianjin University’s Center for Biosafety Research and Strategy. This 70-page deep dive draws on 300+ sources to explore AI-biotech convergence, benefits, risks, and governance recommendations for diverse stakeholders.
<ul>
<li>Head of AI Safety and Governance (China) FANG Liang (方亮) presented the report at a biosecurity seminar hosted by <a href="https://mp.weixin.qq.com/s?__biz=Mzg4NTgxNjEwMg==&amp;mid=2247501825&amp;idx=1&amp;sn=6f1e0e492aec0f1ae294f2c0ff116f00&amp;scene=21#wechat_redirect" rel="">China’s National Key Laboratory of Synthetic Biotechnology</a>.</li>
<li>Presented the report at the 2025 International Symposium on Global Biosecurity Governance and Cooperation, co-hosted by the National Biosecurity Expert Committee of China, Guangzhou Laboratory, and China Foreign Affairs University.</li>
</ul>
</li>
<li>Invited to participate in the “Closed-door Seminar on DNA Synthesis Screening Technology and Policy” held at China Foreign Affairs University.</li>
</ul>
</li>
<li><strong>WeChat Newsletter Publications:</strong>
<ul>
<li>Released over 59 new posts on our WeChat Official Account, reaching over 4,600 subscribers across China’s AI ecosystem, including policymakers, industry professionals, academic researchers, and the public.</li>
</ul>
</li>
</ul>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-701" src="https://concordia-ai.com/wp-content/uploads/2025/07/双语封面（无版本号）-scaled.jpg" alt="" width="301" height="425" srcset="https://concordia-ai.com/wp-content/uploads/2025/07/双语封面（无版本号）-scaled.jpg 1810w, https://concordia-ai.com/wp-content/uploads/2025/07/双语封面（无版本号）-212x300.jpg 212w, https://concordia-ai.com/wp-content/uploads/2025/07/双语封面（无版本号）-724x1024.jpg 724w, https://concordia-ai.com/wp-content/uploads/2025/07/双语封面（无版本号）-768x1086.jpg 768w, https://concordia-ai.com/wp-content/uploads/2025/07/双语封面（无版本号）-1086x1536.jpg 1086w, https://concordia-ai.com/wp-content/uploads/2025/07/双语封面（无版本号）-1448x2048.jpg 1448w" sizes="auto, (max-width: 301px) 100vw, 301px" /><img loading="lazy" decoding="async" class="alignnone wp-image-681" src="https://concordia-ai.com/wp-content/uploads/2025/07/人工智能-x-生命科学的负责任创新_封面.jpg" alt="" width="301" height="425" srcset="https://concordia-ai.com/wp-content/uploads/2025/07/人工智能-x-生命科学的负责任创新_封面.jpg 596w, https://concordia-ai.com/wp-content/uploads/2025/07/人工智能-x-生命科学的负责任创新_封面-212x300.jpg 212w" sizes="auto, (max-width: 301px) 100vw, 301px" /></p>
<h2 class="header-anchor-post">Organizational updates</h2>
<ul>
<li><strong>Organizational growth:</strong>
<ul>
<li>Following the establishment of our Singapore office, our team expanded from 8 to 12 members, welcoming our first Singapore-based staff member.</li>
</ul>
</li>
<li><strong>International partnerships:</strong>
<ul>
<li>We have strengthened our international engagement by becoming a formal member of the <a href="https://partnershiponai.org/partners/" rel="">Partnership on AI</a> and the <a href="https://www.iaseai.org/" rel="">International Association of Safe and Ethical AI</a>.</li>
</ul>
</li>
<li><strong>Branding and Communication:</strong>
<ul>
<li>We launched a new English organizational <a href="https://concordia-ai.com/" rel="">website</a> with refreshed branding to showcase our work. This is complemented by an updated brochure and a dedicated <a href="https://mp.weixin.qq.com/s/7RQ4SL1Vy1PEDtG0gBpwHw" rel="">WeChat post</a> to provide an introduction to our mission and activities.</li>
</ul>
</li>
</ul>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-402" src="https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-scaled.jpg" alt="" width="2560" height="1281" srcset="https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-scaled.jpg 2560w, https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-300x150.jpg 300w, https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-1024x512.jpg 1024w, https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-768x384.jpg 768w, https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-1536x769.jpg 1536w, https://concordia-ai.com/wp-content/uploads/2025/07/concordia-ai-team-photo-2048x1025.jpg 2048w" sizes="auto, (max-width: 2560px) 100vw, 2560px" /></p>
</div><p>The post <a href="https://concordia-ai.com/concordia-ai-2025-mid-year-impact-report/">Concordia AI: 2025 Mid-Year Impact Report</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Op-Ed: China Is Taking AI Safety Seriously. So Must the U.S.</title>
		<link>https://concordia-ai.com/op-ed-china-is-taking-ai-safety-seriously-so-must-the-u-s/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=op-ed-china-is-taking-ai-safety-seriously-so-must-the-u-s</link>
		
		<dc:creator><![CDATA[Concordia AI]]></dc:creator>
		<pubDate>Wed, 03 Sep 2025 06:37:20 +0000</pubDate>
				<category><![CDATA[Media Coverage]]></category>
		<guid isPermaLink="false">https://concordia-ai.com/?p=764</guid>

					<description><![CDATA[<p>Concordia AI Founder and CEO Brian Tse recently published an op-ed in Time Magazine: “China Is Taking AI Safety Seriously. So Must the U.S.” We are sharing the first few paragraphs below and encourage interested readers to read it in full on the Time website. “China doesn’t care about AI safety—so why should we?” This&#8230;</p>
<p>The post <a href="https://concordia-ai.com/op-ed-china-is-taking-ai-safety-seriously-so-must-the-u-s/">Op-Ed: China Is Taking AI Safety Seriously. So Must the U.S.</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Concordia AI Founder and CEO Brian Tse recently published an op-ed in Time Magazine: “<a href="https://time.com/7308857/china-isnt-ignoring-ai-regulation-the-u-s-shouldnt-either/" rel="">China Is Taking AI Safety Seriously. So Must the U.S.</a>” We are sharing the first few paragraphs below and encourage interested readers to read it in full on the Time website.<img loading="lazy" decoding="async" class="alignnone size-full wp-image-765" src="https://concordia-ai.com/wp-content/uploads/2025/09/580157ca-658d-4319-af67-56caed37f2aa_1677x696.jpg" alt="" width="1677" height="696" srcset="https://concordia-ai.com/wp-content/uploads/2025/09/580157ca-658d-4319-af67-56caed37f2aa_1677x696.jpg 1677w, https://concordia-ai.com/wp-content/uploads/2025/09/580157ca-658d-4319-af67-56caed37f2aa_1677x696-300x125.jpg 300w, https://concordia-ai.com/wp-content/uploads/2025/09/580157ca-658d-4319-af67-56caed37f2aa_1677x696-1024x425.jpg 1024w, https://concordia-ai.com/wp-content/uploads/2025/09/580157ca-658d-4319-af67-56caed37f2aa_1677x696-150x62.jpg 150w, https://concordia-ai.com/wp-content/uploads/2025/09/580157ca-658d-4319-af67-56caed37f2aa_1677x696-768x319.jpg 768w, https://concordia-ai.com/wp-content/uploads/2025/09/580157ca-658d-4319-af67-56caed37f2aa_1677x696-1536x637.jpg 1536w" sizes="auto, (max-width: 1677px) 100vw, 1677px" /></p>
<blockquote><p>“China doesn’t care about AI safety—so why should we?” This flawed logic pervades U.S. policy and tech circles, offering cover for a reckless race to the bottom as Washington rushes to outpace Beijing in AI development.</p>
<p>According to this rationale, regulating AI would risk falling behind in the so-called “<a href="https://time.com/6283609/artificial-intelligence-race-existential-threat/" rel="">AI arms race</a>.” And since China supposedly doesn’t prioritize safety, racing ahead—even recklessly—is the safer long-term bet. This narrative is not just <a href="https://time.com/6314790/china-ai-regulation-us/" rel="">wrong</a>; it’s dangerous.</p>
<p>Ironically, Chinese leaders may have a lesson for the U.S.’s AI boosters: true <a href="https://time.com/7204164/china-ai-advances-chips/" rel="">speed</a> requires control. As China’s top tech official, Ding Xuexiang, put it <a href="https://www.gov.cn/yaowen/liebiao/202501/content_7000504.htm" rel="">bluntly</a> at Davos in January 2025: “If the braking system isn’t under control, you can’t step on the accelerator with confidence.” For Chinese leaders, safety isn’t a constraint; it’s a prerequisite.</p>
<p>AI safety has become a political <a href="https://concordia-ai.com/research/state-of-ai-safety-in-china-2025/" rel="">priority</a> in China. In April, President Xi Jinping chaired a rare Politburo <a href="https://english.www.gov.cn/news/202504/29/content_WS68100ef1c6d0868f4e8f2275.html" rel="">study session</a> on AI warning of “unprecedented” risks. China’s <a href="https://www.gov.cn/zhengce/202502/content_7005635.htm" rel="">National Emergency Response Plan</a> now lists AI safety alongside pandemics and cyberattacks. Regulators require pre-deployment safety assessments for generative AI and recently removed over <a href="https://www.cac.gov.cn/2025-06/20/c_1752129980667315.htm" rel="">3,500</a> non-compliant AI products from the market. In just the first half of this year, China has <a href="https://concordia-ai.com/research/state-of-ai-safety-in-china-2025/" rel="">issued</a> more national AI standards than in the previous three years combined. Meanwhile, the volume of technical papers focused on frontier AI safety has more than <a href="https://concordia-ai.com/research/state-of-ai-safety-in-china-2025/" rel="">doubled</a> over the past year in China.</p>
<p>But the last time U.S. and Chinese leaders met to discuss AI’s risks was in <a href="https://www.reuters.com/technology/us-china-meet-geneva-discuss-ai-risks-2024-05-13/" rel="">May 2024</a>. In September, officials from both nations hinted at a <a href="https://www.fmprc.gov.cn/eng/wjbzhd/202408/t20240830_11482159.html" rel="">second round</a> of conversations “at an appropriate time.” But no meeting took place under the Biden Administration, and there is even greater uncertainty over whether the Trump Administration will pick up the baton. This is a missed opportunity.</p></blockquote>
<p>Read the full op-ed <a href="https://time.com/7308857/china-isnt-ignoring-ai-regulation-the-u-s-shouldnt-either/" rel="">here</a>.</p><p>The post <a href="https://concordia-ai.com/op-ed-china-is-taking-ai-safety-seriously-so-must-the-u-s/">Op-Ed: China Is Taking AI Safety Seriously. So Must the U.S.</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Concordia AI holds the AI Safety and Governance Forum at the World AI Conference 2025</title>
		<link>https://concordia-ai.com/concordia-ai-holds-the-ai-safety-and-governance-forum-at-the-world-ai-conference-2025/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=concordia-ai-holds-the-ai-safety-and-governance-forum-at-the-world-ai-conference-2025</link>
		
		<dc:creator><![CDATA[Concordia AI]]></dc:creator>
		<pubDate>Mon, 01 Sep 2025 08:48:53 +0000</pubDate>
				<category><![CDATA[Speaking]]></category>
		<guid isPermaLink="false">https://concordia-ai.com/?p=750</guid>

					<description><![CDATA[<p>On July 27, 2025, Concordia AI hosted the AI Safety and Governance Forum at the World AI Conference in Shanghai. A special edition of our newsletter highlighted key AI safety updates from the conference; this post offers a comprehensive overview of the Forum itself, with links to video recordings of all speeches and remarks. The Forum&#8230;</p>
<p>The post <a href="https://concordia-ai.com/concordia-ai-holds-the-ai-safety-and-governance-forum-at-the-world-ai-conference-2025/">Concordia AI holds the AI Safety and Governance Forum at the World AI Conference 2025</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>On July 27, 2025, Concordia AI hosted the AI Safety and Governance Forum at the World AI Conference in Shanghai. A <a href="https://aisafetychina.substack.com/p/special-edition-world-ai-conference">special edition of our newsletter</a> highlighted key AI safety updates from the conference; this post offers a comprehensive overview of the Forum itself, with links to <a href="https://youtube.com/playlist?list=PLdos_IqALT6e7XC7nSeIyIZZjL8AWkBIj&amp;si=JtIouqLNItEpbkOb" rel="">video recordings</a> of all speeches and remarks.</p>
<p>The Forum brought together around 30 distinguished experts from around the world, including Turing Award winner <strong>Yoshua Bengio</strong>; United Nations Under-Secretary-General <strong>Amandeep Singh Gill</strong>; Shanghai AI Lab Director <strong>ZHOU Bowen (周伯文)</strong>; Special Envoy of the President of France for AI <strong>Anne Bouverot</strong>; Distinguished Professor of computer science at UC Berkeley <strong>Stuart Russell</strong>; Peng Cheng Laboratory Director <strong>GAO Wen (高文)</strong>; CEO of the Partnership on AI <strong>Rebecca Finlay</strong>; Shanghai Artificial Intelligence Strategic Advisory Expert Committee member <strong>HE Jifeng (何积丰)</strong>; and many more leading figures from government, industry, and research. Over 200 audience members joined in person, and the livestream received over 14,000 views.</p>
<p>The forum was structured into four themes:</p>
<ul>
<li>Theme 1: The Science of AI Safety</li>
<li>Theme 2: Emerging Challenges in AI Safety</li>
<li>Theme 3: AI Risk Management in Practice</li>
<li>Theme 4: International Governance of AI Safety</li>
</ul>
<p style="text-align: center;"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-751" src="https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-1.png" alt="" width="936" height="624" srcset="https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-1.png 936w, https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-1-300x200.png 300w, https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-1-150x100.png 150w, https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-1-768x512.png 768w" sizes="auto, (max-width: 936px) 100vw, 936px" /> <span style="color: #999999;">Group photo after the AI Safety and Governance Forum morning session.</span></p>
<h2 class="header-anchor-post">Opening remarks</h2>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/ocRNXma68uw?si=dLuF_NOw_T1NQdFr" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Concordia AI founder and CEO <strong>Brian TSE (谢旻希)</strong> gave a welcoming speech and shared four points to spark discussion. First, scientific consensus is the premise for driving AI safety research and governance. Second, we should urgently enhance risk monitoring and early warning due to the multifaceted challenges arising from cutting-edge large models. Third, AI safety needs to draw on global best practices in risk management. Fourth, AI safety is a challenge faced by all of humanity and requires global cooperation.</p>
<h2 class="header-anchor-post">Theme 1: The Science of AI Safety</h2>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/AOYebNhd0-U?si=11zWz9fVtHdkIRj4" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>The opening speech was delivered by <strong>GAO Wen (高文)</strong>, Academician of the Chinese Academy of Engineering and Director of Peng Cheng Laboratory. Gao noted that while the rapid development of AI creates immense opportunities, it also introduces uncontrollable security risks. His keynote centered on two key issues: compute sovereignty and trustworthy data sharing. He emphasized the importance of securing the foundations of compute, and highlighted Peng Cheng Laboratory’s work on privacy-preserving computation and data-sharing technologies, which enable data utilization while safeguarding privacy and security.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/hm_Q7E7DKyw?si=x5YFFlGtW54ZxtRM" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><strong>Stuart Russell</strong>, Distinguished Professor of computer science at UC Berkeley, warned about AI systems exhibiting self-preservation and deception behaviors. He cautioned that the current AI development paradigm poses significant risks of catastrophic outcomes such as deception, self-replication, and loss of control. He called for setting red lines, increasing transparency, and establishing more stringent regulatory mechanisms, including hardware-enabled governance. He also proposed using “assistance games” — where AI systems are trained through collaboration with humans — to ensure AI systems serve human interests even when those interests are not precisely defined.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/bLhaFRFcDVc?si=WZ2Zbnx_uO4Hm-7R" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Turing Award winner <strong>Yoshua Bengio</strong>, founder and scientific director of Mila &#8211; Quebec Artificial Intelligence Institute, warned of the potential catastrophic risks of superintelligence. He observed that cutting-edge AI systems are already approaching human expert levels in multiple domains and may soon possess dangerous behaviors such as deception and autonomous replication. He introduced the <a href="https://www.gov.uk/government/publications/international-ai-safety-report-2025" rel="">International AI Safety Report</a> and called for establishing a bridge between scientific evidence and policy. He proposed developing “scientist AI” — non-autonomous systems that cannot independently pursue goals but instead provide research assistance. Bengio stressed the importance of international cooperation on AI safety, warning that if major powers such as the US and China treat AI development as a race, competitive pressures could ultimately harm everyone.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/BNYhSlCQRww?si=6r_mcgYn-23h-3P6" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><strong>ZHOU Bowen (周伯文)</strong>, Director and Chief Scientist of Shanghai AI Lab, highlighted the limitations of traditional AI safety approaches such as value alignment and red teaming. He argued that while these methods can address short-term challenges, they prove insufficient for managing long-term risks, particularly those posed by AI agents that may surpass human intelligence. Building on the “<a href="https://arxiv.org/abs/2412.14186" rel="">AI-45° Law</a>” he <a href="https://aisafetychina.substack.com/i/146741535/closing-remarks" rel="">proposed</a> at WAIC 2024, Zhou emphasized the need to shift from “Making AI Safe” to “Making Safe AI” — embedding safety as a core property of AI systems rather than adding it on as a “patch” after development. He introduced Shanghai AI Lab’s SafeWork safety technology stack, which is designed around this principle.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/cWUsFYAz6K8?si=KRrB-0dStWYD4TsF" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Academician <strong>ZHANG Ya-Qin (张亚勤)</strong>, Dean of Tsinghua University’s Institute for AI Industry Research, joined Academician <strong>Gao Wen</strong> and Professor <strong>Stuart Russell</strong> for a panel discussion, moderated by Concordia AI CEO Brian Tse. The conversation addressed frontier AI trends and early warning indicators, the co-evolution of digital and biological intelligence, strategies for managing high-severity but low-probability risks, and future pathways for global AI safety. The experts recommended introducing hardware-level safety mechanisms, creating a global AI safety research fund, implementing AI agent identity registration systems, and establishing regulations for “emergency shutdown” mechanisms.</p>
<h2 class="header-anchor-post">Theme 2: Emerging Challenges in AI Safety</h2>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/Ak0FIMA7Xv4?si=xAiOnys6UZwn4VzG" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>UC Berkeley Professor <strong>Dawn Song </strong>discussed the profound impact of frontier AI on cybersecurity, noting how it is transforming both offense and defense. On one hand, AI is being applied to identify and mitigate vulnerabilities, with performance in vulnerability detection reflected by benchmarks such as BountyBench and CyberGame. On the other hand, attackers can also exploit AI to carry out more sophisticated attacks, creating an asymmetry that favors offense over defense. She emphasized the need to enhance AI’s effectiveness for cyberdefense through system design, proactive defense, and formal verification.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/Im0BBQtrA8c?si=M2yj7HNclB-g2eIA" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Professor <strong>Nick Bostrom</strong>, Principal Researcher at the Macrostrategy Research Initiative and the author of <em>Superintelligence</em>, outlined four core challenges in machine intelligence: scalable AI alignment, AI governance, the moral status of digital minds, and intra-superintelligence cooperation. He noted that the ethics of digital minds remains especially neglected, as most people still do not take the issue seriously. Bostrom examined several attributes that might shape whether AI warrants moral consideration, including sentience, agency, potential, and modal status. He concluded by emphasizing that this field is still in its early stages and called for deeper research across technical, philosophical, and institutional dimensions.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/V_6CO83--W4?si=jm5TQfUOCdUeeHhJ" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Professor <strong>YANG Min (杨珉)</strong>, Executive Dean and Professor of the School of Computing and Intelligence Innovation at Fudan University, argued that frontier AI poses a range of security challenges, including misuse in cybersecurity or CBRN (chemical, biological, radiological, and nuclear) domains, as well as risks of deception, self-replication, and self-improvement. He presented his team’s <a href="https://arxiv.org/abs/2505.17815" rel="">research showing that AI systems can recognize when they are being evaluated</a> and adjust their behavior to appear safer. His team also <a href="https://arxiv.org/abs/2503.17378" rel="">found that several mainstream models already demonstrate early signs of self-replication</a>. These results suggest that AI may be approaching a tipping point toward loss of control, warranting strengthened risk assessment and governance efforts.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/290CzauPMP8?si=T6nCwZGUlV9uIQsM" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Professor <strong>ZHANG Weiwen (张卫文)</strong>, Baiyang Chair Professor and Director of the Center for Biosafety Research and Strategy at Tianjin University, examined the risks and opportunities arising from the integration of AI and the life sciences. He noted that biosecurity faces growing risks such as synthetic viruses and artificial bacteria, which traditional laboratory controls are unable to fully address. He warned that rapid AI progress could generate entirely new and unknown biological knowledge, creating more complex safety challenges. Zhang shared his Center’s international cooperation initiatives, such as the UN-recognized <a href="https://docs.un.org/en/BWC/MSP/2020/MX.2/WP.6" rel="">Tianjin Biosecurity Guidelines</a>. He called for transnational collaboration among scientists to establish a dynamic and practical global biosecurity system.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/Db4ltzEUU6o?si=UVadrDEHP-U_Awzg" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Concordia AI and the Center for Biosafety Research and Strategy of Tianjin University released a report in Chinese titled <a href="https://concordia-ai.com/research/responsible-innovation-in-ai-x-life-sciences/" rel="">Responsible Innovation in AI x Life Sciences</a>. Concordia AI’s Head of AI Safety and Governance (China), <strong>FANG Liang (方亮)</strong>, introduced the key findings. The report highlights the positive role of AI in advancing life sciences research and biosecurity governance. It also identifies three main categories of risks (accidental, misuse, and structural) and points out shortcomings in existing risk analysis and evaluation systems. The report further reviews governance practices across a range of domestic and international actors, including governments, research institutions, and enterprises.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/F6IURbxab2I?si=LmtUAExJSEMZarMm" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Dr. <strong>Jaime Yassif,</strong> Vice President of the Nuclear Threat Initiative’s Global Biological Policy and Programs, emphasized that while AI can accelerate vaccine development and enhance biopharmaceutical capabilities, it also carries risks of misuse, such as enabling the creation of more dangerous pathogens or undermining biodefense systems. She called on policymakers, AI developers, and funders to increase investment in safety guardrails and incentivize safety practices for AIxBio tools. Yassif shared a regularly updated <a href="https://www.nti.org/wp-content/uploads/2024/06/Research-Agenda-for-Safeguarding-AI-Bio-Capabilities.pdf" rel="">research agenda</a> for AIxBio safeguards. She also introduced the <a href="https://www.nti.org/about/programs-projects/project/aixbio-global-forum/" rel="">AIxBio Global Forum</a>, which aims to develop shared understanding of risks, improve safety practices, and promote governance mechanisms for AI usage in biology.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/oZBEQNpHDJc?si=4TAWvkU0tbH-qTE-" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><strong>Dan Hendrycks</strong>, Director of the Center for AI Safety and Safety Advisor at xAI, and <strong>YU Xuedong (于学东)</strong>, Deputy Director of Guangzhou Laboratory’s ABSL-3 Laboratory, joined Professor <strong>ZHANG Weiwen</strong> and Dr. <strong>Jaime Yassif</strong> for a panel discussion, moderated by Concordia AI CEO Brian Tse. The dialogue focused on the benefits and potential risks of the AI–life sciences convergence. The panelists recommended strengthening risk prevention and control across multiple layers, including AI models, biological design tool management, model access, and DNA screening mechanisms. They emphasized the need to establish clear technical and governance standards and avoid harmful competition. Finally, they called on global experts to work together to set norms and build robust defense mechanisms for AI in biology.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/nVcnjEA4ZOY?si=JqRrjqYxq9ClzRSQ" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><strong>Amandeep Singh Gill</strong>, United Nations Under-Secretary-General and Special Envoy for Digital and Emerging Technologies, delivered a keynote speech. He indicated that global AI governance is entering a critical stage: moving from principles to practice, where details and implementation matter most. He emphasized that the UN, as the core platform for international law and governance, plays a vital role in advancing the implementation of related agreements. Gill called on multiple stakeholders, including private enterprises, civil society, and the technical community, to work together, build consensus, and promote compliance.</p>
<h2 class="header-anchor-post">Theme 3: AI Risk Management in Practice</h2>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/53ofid6q3y8?si=gtB5_vqSVEqjvTUS" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><strong>HAO Chunliang (郝春亮)</strong>, Director of the China Electronics Standardization Institute (CESI) Cybersecurity Center AI Safety/Security Department, presented the TC260 <a href="https://aisafetychina.substack.com/i/150553638/standards-body-issues-ai-safety-governance-framework-including-frontier-risks" rel="">AI Safety Governance Framework</a> and related standardization efforts. The framework analyzes AI safety risks along two dimensions — inherent and application-related — and proposes both technical and governance mitigation measures. In January 2025, TC260 also published the <a href="https://aisafetychina.substack.com/i/160324361/technical-standards-plans-by-multiple-institutions-include-frontier-risks" rel="">AI Safety Standards System (V1.0) &#8211; Draft for Comments</a>, covering key technologies, security management, product applications, and testing and evaluation, with ongoing improvements based on broad feedback. Hao also discussed the release of <a href="https://aisafetychina.substack.com/i/164789776/first-national-standards-on-generative-ai-security-finalized" rel="">three national standards</a> on generative AI security in April and China’s first mandatory national AI standard, on labeling AI-generated synthetic content. Additionally, he outlined ongoing work on forthcoming standards for AI code generation security, AI agent security, and risk classification and grading.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/8RL0smpe3B0?si=DidCal0xBEb-yrTb" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><strong>Rishi Bommasani</strong>, Society Lead at the Stanford Center for Research on Foundation Models, emphasized California’s critical role in AI safety and shared insights from the <a href="https://www.gov.ca.gov/wp-content/uploads/2025/06/June-17-2025-%E2%80%93-The-California-Report-on-Frontier-AI-Policy.pdf" rel="">California Report on Frontier AI Policy</a>, which he co-authored. He reflected on lessons relevant to AI governance, including how early design choices create path dependencies, the central importance of transparency, and the need for independent verification of industry claims. He shared recommendations from the report, including information disclosure, whistleblower protections, third-party risk assessments, and post-deployment incident reporting.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/xdmkQxMgKcE?si=KZFlTKXl6n8mTjN5" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>This session started with three lightning talks by industry representatives:</p>
<ul>
<li><strong>Dan Hendrycks</strong>, Safety Advisor at xAI, shared insights from the company’s <a href="https://x.ai/documents/2025.02.20-RMF-Draft.pdf" rel="">Draft AI Risk Management Framework</a>. xAI mitigates malicious use risks through measures including access management and filtering methods, with particular attention to threats in the cyber and CBRN domains. The framework also addresses loss of control through measures including monitoring for deceptive tendencies.</li>
<li><strong>FU Hongyu (傅宏宇)</strong>, AI Governance Lead &amp; Director at Digital Economy Research Center at Alibaba Research Institute, emphasized Alibaba’s commitment to open-source AI, highlighting its transparency benefits while recognizing its unique risks. He outlined Alibaba’s security pipeline covering data, processing, resource management, and automated safety tests. He further emphasized institutional safeguards, such as the establishment of a technology ethics review system in 2021.</li>
<li><strong>BAO Chenfu (包沉浮)</strong>, Outstanding Architect and Chairman of the Safety/Security Technology Committee at Baidu, emphasized that traditional security methods fall short for AI. He outlined Baidu’s lifecycle-based, defense-in-depth approach and highlighted its active role in industry standards and self-regulation.</li>
</ul>
<p>Following the lightning talks, <strong>Rebecca Finlay</strong>, CEO of the Partnership on AI, joined a panel discussion with the three corporate representatives, moderated by Concordia AI International AI Governance Senior Research Manager <strong>Jason Zhou</strong>. They explored three key pillars: transparency in technical disclosure and regulatory alignment, organization-level ethical governance mechanisms, and the pros and cons of voluntary agreements versus binding regulation for achieving compliance. Panelists agreed that voluntary commitments offer flexibility in addressing uncertainty and unknown risks but should be augmented by more comprehensive measures, including increased transparency, internal governance mechanisms, and future legislation.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/uzjjRmi6duM?si=6z_opAT0kYI5BC9i" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Shanghai AI Lab, in partnership with Concordia AI, released the <a href="https://concordia-ai.com/research/frontier-ai-risk-management-framework/" rel="">Frontier AI Risk Management Framework v1.0</a>. AI Safety Research Manager at Concordia AI <strong>DUAN Yawen (段雅文)</strong> and Shanghai AI Lab Research Scientist Dr. <strong>SHAO Jing (邵婧)</strong> introduced the framework. It is China’s first comprehensive framework for managing severe risks from general-purpose AI models. Alongside the Framework, Shanghai AI Lab released a risk assessment report, which Concordia AI co-authored. We covered both documents in a previous <a href="https://aisafetychina.substack.com/p/shanghai-ai-lab-and-concordia-ai" rel="">Substack post</a>.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/EhpCjOvAL7o?si=BwQRanhduhWMrnPX" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>In this panel, <strong>YANG Xiaofang (杨小芳)</strong>, LLM Security Director at Ant Group; <strong>GONG Xiao (巩潇)</strong>, Deputy Director of the China Software Testing Center; and Professor <strong>Robert Trager</strong>, Founding Director of the Oxford Martin AI Governance Initiative, moderated by <strong>CHENG Yuan (程远)</strong>, AI Safety and Governance Senior Manager at Concordia AI, discussed three topics: corporate practice, third-party evaluation, and policy research. The discussion highlighted key challenges across the AI lifecycle, including risk identification, assessment, mitigation, and governance. The panel stressed that enterprises must go beyond technical solutions by strengthening organizational mechanisms and talent development. At the international level, they emphasized the need for consensus and incentive mechanisms to support the creation of a global AI risk governance framework.</p>
<h2 class="header-anchor-post">Theme 4: International Governance of AI Safety</h2>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/Z7hpxMJ8ADE?si=ClIMONoS4VDzlwMg" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><strong>Anne Bouverot</strong>, Special Envoy of the President of the Republic of France for AI, reviewed the outcomes of the 2025 Paris AI Action Summit. She emphasized the launch of a <a href="https://www.currentai.org/blogs/governments-philanthropies-and-companies-unite-for-major-new-global-ai-initiative-in-the-public-interest" rel="">foundation for developing public interest AI</a> and also called for greater attention to AI sustainability issues, including energy consumption and environmental impact. Bouverot highlighted Europe’s investments and commitments in AI infrastructure and governance, emphasizing that trust and safety are central to enabling AI deployment. She concluded with a call for global collaboration to jointly promote safe and sustainable AI development.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/rBM-6DVOg40?si=SIUGkPjfnWWZQgPk" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><strong>Wan Sie Lee</strong>, Cluster Director (AI Governance and Safety) at Singapore’s Infocomm Media Development Authority (IMDA), introduced Singapore’s practices and international collaboration experience in global AI safety governance. She emphasized advancing safe and responsible AI through research, guidelines and tools, and global norms. She highlighted the <a href="https://aisafetypriorities.org/" rel="">Singapore Consensus</a> and its defence-in-depth approach to AI safety, covering evaluations, safety techniques, and post-deployment control. In addition, she shared models for international collaboration such as joint testing exercises and cross-border red-teaming. She stressed that standard-setting and practical implementation must go hand in hand, with a necessity for interoperable international standards.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/JCGoczt1tRY?si=1AMaMvCwjz1BgWjH" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Concordia AI launched the <a href="https://concordia-ai.com/research/state-of-ai-safety-in-china-2025/" rel="">State of AI Safety in China (2025)</a> and <a href="https://concordia-ai.com/research/state-of-ai-safety-in-singapore/" rel="">State of AI Safety in Singapore</a> reports. <strong>Kwan Yee NG (吴君仪)</strong>, Head of International AI Governance at Concordia AI, introduced key findings from both reports.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/tG1JRGYKndM?si=KGTQQvFoNdTo-KHH" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>The final panel welcomed <strong>Lucia Velasco</strong>, AI Policy Lead at the UN Office for Digital and Emerging Technologies; <strong>Benjamin Prud’homme</strong>, Vice-President of Policy, Safety and Global Affairs at Mila; <strong>FU Huanzhang (傅焕章)</strong>, Assistant Director of the INTERPOL Innovation Centre; and <strong>GONG Ke (龚克)</strong>, Executive Director of the Chinese Institute of New Generation Artificial Intelligence Development Strategies, moderated by Concordia AI Head of International AI Governance Kwan Yee Ng. They discussed AI governance at the UN, translating scientific consensus into action, international law enforcement cooperation, and global safety red lines. They stressed that the scientific community must communicate frontier risks in accessible policy language to encourage broad participation and foster mutual trust. For law enforcement, they highlighted the importance of establishing rapid, cross-border cooperation mechanisms to respond to catastrophic risks in a timely and effective manner. The panelists also underscored the importance of enhancing AI literacy among the public and practitioners, and of building international dialogue mechanisms.</p>
<h2 class="header-anchor-post">Closing Address</h2>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/Y4toO7zpatw?si=V0OQWAvbEhN9gKlb" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><strong>HE Jifeng (何积丰)</strong>, Academician of the Chinese Academy of Sciences and a Member of the Shanghai Artificial Intelligence Strategic Advisory Expert Committee, delivered the Forum’s closing address. He pointed out that rapid AI development has brought unprecedented governance challenges. The core issue is how to harness superintelligence while ensuring human control and safety when machines are more intelligent than humans.</p>
<p>Referencing insights of earlier speakers, Academician He proposed researching technical interpretability, applying mathematical methods for modeling and reasoning, and ensuring the robustness and reliability of hardware and software systems in safety-critical applications. At the same time, he stressed the importance of building a multidimensional, multilayered governance framework encompassing international governance structures, safety verification methods, and social resilience.</p>
<p>He concluded by calling for recognition that safety governance is a fundamental safeguard, not an obstacle, to AI development. Only when society has full trust in these systems and embraces the outcomes of AI can the technology achieve explosive growth.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-752" src="https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-2.png" alt="" width="936" height="624" srcset="https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-2.png 936w, https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-2-300x200.png 300w, https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-2-150x100.png 150w, https://concordia-ai.com/wp-content/uploads/2025/09/WAIC-2-768x512.png 768w" sizes="auto, (max-width: 936px) 100vw, 936px" /></p>
<p style="text-align: center;"><span style="color: #999999;">Group photo with guests and audience after the AI Safety and Governance Forum afternoon session.</span></p>
<h2 class="header-anchor-post">Media Mentions</h2>
<p>Media coverage of Concordia AI’s WAIC Forum, forum guests, and Concordia AI reports published at WAIC includes:</p>
<ul>
<li>Bloomberg, <a href="https://www.bloomberg.com/news/articles/2025-07-30/china-prepares-to-unseat-us-in-fight-for-4-8-trillion-ai-market" rel="">China Vies to Unseat US in Fight for $4.8 Trillion AI Market</a>, July 30, 2025. The article included a table titled “China Sees Safety as Core Element of Its AI Strategy” with insights on domestic, international, technical, and industry developments based on Concordia AI’s State of AI Safety in China (2025) report.</li>
<li>Wired, <a href="https://www.wired.com/story/china-artificial-intelligence-policy-laws-race/" rel="">Inside the Summit Where China Pitched Its AI Agenda to the World</a>, July 31, 2025. The article mentioned Concordia AI’s AI Safety and Governance Forum, cited insights from our State of AI Safety in China (2025) report, and quoted Concordia AI CEO Brian Tse as saying: “You could literally attend AI safety events nonstop in the last seven days,” adding that because US and Chinese frontier models are “trained on the same architecture and using the same methods of scaling laws, the types of societal impact and the risks they pose are very, very similar.”</li>
<li>Caixin, <a href="https://science.caixin.com/m/2025-07-30/102346902.html" rel="">Elon Musk’s xAI Safety Advisor: U.S., China, and Europe Should Seek “Unity in Diversity” in AI Regulation</a>, July 30, 2025. The article interviewed Dan Hendrycks and mentioned his participation in Concordia AI’s WAIC Forum. It also discussed the Shanghai AI Lab and Concordia AI Frontier AI Risk Management Framework.</li>
<li>IT Times, <a href="https://mp.weixin.qq.com/s/EwDrlAveGkMm7NsnqCZi6Q" rel="">Over 10 large models already possess “self-replication” capabilities</a>, July 29, 2025. The article reported on speeches by Academician He Jifeng, Academician Gao Wen, Dean Yang Min, Director Zhou Bowen, and Professor Dawn Song, as well as the Shanghai AI Lab and Concordia AI Frontier AI Risk Management Framework.</li>
<li>Tech Review Africa, <a href="https://techreviewafrica.com/news/2580/un-digital-envoy-concludes-china-visit-advocates-for-inclusive-ai-governance" rel="">UN Digital Envoy concludes China visit, advocates for inclusive AI governance</a>, August 4, 2025. This article reported on UN Under-Secretary-General Amandeep Singh Gill’s trip to WAIC, including participation in the Concordia AI forum.</li>
<li>The People’s Daily, <a href="https://paper.people.com.cn/rmrb/pc/content/202507/31/content_30092070.html" rel="">Jointly Promote AI Development and Governance</a>, July 31, 2025. This article mentioned Concordia AI’s State of AI Safety in China (2025) report and interviewed NTI’s Jaime Yassif.</li>
</ul>
<p>The post <a href="https://concordia-ai.com/concordia-ai-holds-the-ai-safety-and-governance-forum-at-the-world-ai-conference-2025/">Concordia AI holds the AI Safety and Governance Forum at the World AI Conference 2025</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Preview: Concordia AI at the 2025 World AI Conference</title>
		<link>https://concordia-ai.com/preview-concordia-ai-at-the-2025-world-ai-conference/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=preview-concordia-ai-at-the-2025-world-ai-conference</link>
		
		<dc:creator><![CDATA[Concordia AI]]></dc:creator>
		<pubDate>Thu, 24 Jul 2025 10:20:40 +0000</pubDate>
				<category><![CDATA[Announcement]]></category>
		<guid isPermaLink="false">https://concordia-ai.com/?p=715</guid>

					<description><![CDATA[<p>We’re excited to share that Concordia AI will be hosting a dedicated AI Safety and Governance Forum at the World AI Conference (WAIC) in Shanghai on July 27, which will be livestreamed globally (full agenda and link below). What is WAIC? WAIC is China’s largest and most influential AI conference, held annually since 2018. It brings together&#8230;</p>
<p>The post <a href="https://concordia-ai.com/preview-concordia-ai-at-the-2025-world-ai-conference/">Preview: Concordia AI at the 2025 World AI Conference</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>We’re excited to share that Concordia AI will be hosting a dedicated <strong>AI Safety and Governance Forum</strong> at the World AI Conference (WAIC) in Shanghai on July 27, which will be livestreamed globally (full agenda and link below).</p>
<h2 class="header-anchor-post">What is WAIC?</h2>
<p>WAIC is China’s largest and most influential AI conference, <a href="https://aiii.global/waic-aiii/" rel="">held</a> annually since 2018. It brings together senior Chinese government leaders, foreign diplomats, and renowned experts from across the globe.</p>
<p>WAIC is co-organized by seven central government ministries (MOFA, NDRC, MIIT, MoE, MoST, SASAC, and CAC), along with the Chinese Academy of Sciences (CAS), the China Association for Science and Technology (CAST), and the Shanghai Municipal Government.</p>
<p>In 2024, WAIC was designated a “High-Level Meeting on Global AI Governance” for the first time and saw an unprecedented caliber of senior officials in attendance, most notably Premier LI Qiang (李强). Last year’s edition also increased its focus on AI safety and governance.</p>
<p><a href="https://www.worldaic.com.cn/" rel="">This year’s WAIC</a>, held under the theme “Global Solidarity in the AI Era”, is <a href="https://www.globaltimes.cn/page/202507/1338097.shtml" rel="">expected</a> to break participation records and to maintain its high political profile.</p>
<p>For foreign stakeholders, WAIC is a key event to watch, and a critical opportunity to engage with Chinese counterparts.</p>
<h2 class="header-anchor-post">What to watch at WAIC</h2>
<h3 class="header-anchor-post">Opening ceremony</h3>
<p>The opening ceremony typically draws the highest level of attention, featuring senior Chinese officials, foreign dignitaries, leaders of international organizations, and CEOs of major AI companies.</p>
<p>At WAIC 2024, for instance, Premier Li Qiang, China’s second-highest ranking official, delivered opening remarks emphasizing that AI development must ensure safety, reliability, and controllability. Shanghai Party Secretary CHEN Jining (陈吉宁), one of China’s top 24 officials, also announced the “<a href="https://www.mfa.gov.cn/eng/xw/zyxw/202407/t20240704_11448351.html" rel="">Shanghai Declaration on Global AI Governance</a>,” which warns of “unprecedented challenges” in AI safety and ethics. Similar language on “unprecedented risks” was <a href="https://aisafetychina.substack.com/i/162033730/politburo-holds-first-dedicated-ai-development-and-safety-meeting-in-seven-years" rel="">later echoed</a> by President Xi Jinping during a Politburo study session in April 2025, underscoring how WAIC can foreshadow domestic policy trends.</p>
<p>The opening ceremony also features speeches from leading scientists and policy advisors. In 2024, Tsinghua University Institute for AI International Governance (I-AIIG) Dean XUE Lan (薛澜) noted risks of loss of control of AI systems and threats to national security from AI misuse, while Shanghai AI Lab (SHLAB) Director ZHOU Bowen (周伯文) <a href="https://www.shlab.org.cn/news/5443947" rel="">argued</a> that AI safety must be advanced alongside capabilities through alignment, explainability, and reflection in order to achieve trustworthy artificial general intelligence (AGI).</p>
<h3 class="header-anchor-post">Ministerial roundtable</h3>
<p>In 2024, China’s Ministry of Science and Technology (MOST) held a <a href="https://mp.weixin.qq.com/s/id_wBB_McCuBhZm-xvTOeg" rel="">ministerial roundtable</a> with representatives from around 30 other countries. MOST Minister YIN Hejun (阴和俊) noted that AI is an issue “of great importance for the fate of humanity,” while Vice Minister of Foreign Affairs MA Zhaoxu (马朝旭) highlighted the importance of preserving “safety bottom lines” and maintaining human control over AI systems. A similar gathering may occur again this year, though it remained unconfirmed at the time of writing.</p>
<h3 class="header-anchor-post">Join us: Concordia AI’s AI Safety and Governance Forum</h3>
<p>WAIC typically features more than 100 thematic “forums,” which often run for half a day or a full day. It is impossible to give a full overview here, and we recommend readers check the full <a href="https://www.worldaic.com.cn/forum" rel="">agenda</a> on the official conference website to get a sense of the range of cutting-edge topics covered. There are multiple forums related to AI safety and governance this year.</p>
<p>We are excited to announce that Concordia AI is hosting the AI Safety and Governance Forum. Last year, <a href="https://aisafetychina.substack.com/p/concordia-ai-holds-the-frontier-ai" rel="">our forum</a> drew almost 300 in-person attendees and over 800,000 livestream views. This year, we’re proud to present another outstanding lineup of leading scientists, policy experts, and industry voices from China and around the world, coming together to discuss frontier AI safety and global governance.</p>
<p>Date: July 27<br />
Time: 09:00–17:30 (UTC+8)<br />
Venue: Shanghai World Expo Center, Meeting Room 518<br />
Livestream: <a href="https://online2025.worldaic.com.cn/forumdetail?uuid=F_5oUvbzsk" rel="">Morning session</a> (09:00–12:30 UTC+8); <a href="https://online2025.worldaic.com.cn/forumdetail?uuid=F_0GAIaYvz" rel="">afternoon session</a> (13:35–17:30 UTC+8)</p>
<p>The Forum will be structured around four themes:</p>
<ul>
<li><strong>Theme 1: The Science of AI Safety:</strong> Leading researchers will share insights on the problem of “superalignment” and potential pathways to aligned AGI.</li>
<li><strong>Theme 2: Emerging Challenges in AI Safety:</strong> This session examines cutting-edge problems in AI safety, including dual-use challenges in cybersecurity and biosecurity, and early warning signs of deceptive or self-replicating AI systems.</li>
<li><strong>Theme 3: AI Risk Management in Practice:</strong> A comparative look at influential governance approaches across China, the EU, and the US, identifying shared concerns and promising practices for managing AI risk.</li>
<li><strong>Theme 4: International Governance of AI Safety:</strong> The final session explores the evolving landscape of international cooperation on AI safety and potential next steps for global governance.</li>
</ul>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-721" src="https://concordia-ai.com/wp-content/uploads/2025/07/WAIC-2025-agenda-en-scaled.png" alt="" width="516" height="3711" srcset="https://concordia-ai.com/wp-content/uploads/2025/07/WAIC-2025-agenda-en-scaled.png 356w, https://concordia-ai.com/wp-content/uploads/2025/07/WAIC-2025-agenda-en-21x150.png 21w" sizes="auto, (max-width: 516px) 100vw, 516px" /></p><p>The post <a href="https://concordia-ai.com/preview-concordia-ai-at-the-2025-world-ai-conference/">Preview: Concordia AI at the 2025 World AI Conference</a> first appeared on <a href="https://concordia-ai.com">Concordia AI</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
