
AI Side Events at the Munich Security Conference: Focusing on Governance and Crisis Management of Non-State Misuse Risks

February 27, 2026

Author: Jiang Tianjiao, Researcher at CGAIG and Associate Director at the Center for BRICS Studies, FDDI

From February 10 to 13, 2026, Jiang Tianjiao, a researcher at the Center for Global AI Innovative Governance, was invited to participate in multiple AI governance–related side events during the 62nd Munich Security Conference. Throughout the discussions, a notable convergence emerged among European and American experts: heightened concern over the security risks posed by non-state actors’ misuse of advanced artificial intelligence technologies and the urgent need to strengthen related governance frameworks.

The opening ceremony site of the 62nd Munich Security Conference

Image source: Xinhua News Agency

In terms of specific risk scenarios, participants expressed shared concern that non-state actors could exploit AI to conduct cyberattacks, facilitate nuclear, biological, or chemical (NBC) threats, or enable terrorist operations. Experts called upon governments, regulatory authorities, and private-sector actors to establish effective technical safeguards and policy interventions across the entire AI development and deployment chain.

European and American scholars voiced particular apprehension regarding the potential risks associated with open-source models. In the preceding months, Anthropic had publicly released a report highlighting the possibility that advanced AI systems could amplify the scale and sophistication of cyberattacks. At the conference, foreign experts further cited related assessments, including international evaluations of certain open-source models' safety performance in nuclear and biochemical contexts, which raised additional challenges in the global discourse. More sensitive still, some domestic model releases had not been accompanied by publicly available safety evaluation documentation, placing stakeholders at a disadvantage in international exchanges and debates.

Regarding policy responses, European and American experts proposed advancing AI-specific crisis management mechanisms, including the establishment of dedicated intergovernmental communication channels and corporate-level information disclosure or sharing frameworks. Similar initiatives had already been proposed and preliminarily developed during the AI Safety Summit hosted by the United Kingdom. However, current developments suggest two emerging difficulties: first, the risk of AI misuse appears to be intensifying rather than diminishing; second, existing mechanisms may not be functioning as effectively as anticipated. In particular, evolving U.S. policy approaches toward AI safety regulation constitute a major source of uncertainty. Moreover, fundamental questions remain unresolved: Who should lead crisis response efforts? To whom should incidents be reported or disclosed? Within what timeframe? And what information should be shared? These issues involve complex domestic governance considerations as well as delicate diplomatic negotiations.

Finally, during an informal dinner discussion, German experts expressed anxiety regarding Europe’s AI trajectory and the broader geopolitical-economic landscape. Following remarks by Canadian Prime Minister Mark Carney at the World Economic Forum in Davos, debates over “strategic autonomy” have gained renewed seriousness across Europe. In light of rapid advances in AI development in both the United States and China, some European experts acknowledged that it may be time to engage more directly with China to better understand the scale and dynamism of its AI transformation. Coinciding with the recent visit of the German Chancellor to China, there is growing expectation that China and Germany could expand cooperation in this critical domain and explore broader avenues for strategic collaboration.


Original link: https://mp.weixin.qq.com/s/URm0p6d5r2r8twv8PHLJ2Q

