Abstract
Over the past year, global AI governance has undergone significant changes: first, collaboration among industry, academia, and governments has become increasingly close; second, the push for binding regulation of enterprises has slowed; third, the international community's focus has shifted from safety to action and impact, and enthusiasm for AI Safety Summits has diminished. Meanwhile, verification of AI has emerged as a key governance issue. Looking ahead, efforts should focus on proactively promoting collaboration between AI research institutions and national laboratories across countries, so as to develop a shared understanding of risks and achieve substantive progress on key issues.
Interviewee Profile

Saad Siddiqui
AI Governance Researcher at the Safe AI Forum (SAIF)
Interviewer
Yang Zhao
Research Assistant at the Center for Global AI Innovative Governance
Interview
The past year witnessed significant shifts in global AI governance. A major trend is closer collaboration between industry, academia, and governments, exemplified by the drafting of the EU's General-Purpose AI Code of Practice. The Code will apply to all companies selling AI services in the EU, including Chinese and American firms. A noticeable step forward is the move towards greater concreteness, particularly within the Code, which provides increased specificity on governing frontier models. Overall, significant regulatory hesitation persists, although specific jurisdictions such as the EU and certain US states continue their regulatory efforts; in China, this manifests through targeted measures such as watermarking requirements. On the international stage, the momentum of AI safety summits appears to have waned as the focus has shifted from safety to action and impact, although new focused, technical fora such as the Singapore Conference on AI show great promise.
Verification of AI has emerged as a critical topic. As AI capabilities advance, potential future agreements between major powers such as the US and China governing the use of, and access to, advanced systems will hinge critically on trustworthy verification mechanisms. The current low trust between these powers necessitates robust verification frameworks, analogous to those in domains like nuclear arms control or the Treaty on Open Skies. Verification stands out as an area where geopolitical rivals could cooperate effectively: the risks associated with such cooperation are relatively low, while the potential gains in managing catastrophic risks are significant. It represents one of the few areas ripe for technical cooperation on AI safety between competitors, and it has also been a key topic for the International Dialogues on AI Safety.
The main challenges facing the Global South in AI governance involve ensuring that these countries do not disproportionately bear the externalities of AI risks and striving for equitable access to advanced AI models. However, it is crucial to distinguish between challenges of frontier AI governance and challenges of capacity building. While frontier models are increasingly open-source, a significant limitation for many regions is the lack of digital infrastructure needed to serve these models effectively, which is a fundamental capacity-building issue rather than a governance one. In addition, governance challenges specific to these regions include ensuring model robustness against jailbreaks in local languages, sensitivity to cultural nuances, and empowering local entrepreneurs to build useful AI products for their communities. While channels like the ITU's AI for Good Summit offer existing mechanisms for participation, resolving harder geopolitical questions around AI likely demands smaller, more focused formats, whether bilateral or among a limited group of countries with a narrow scope.
Future global AI governance could prioritize establishing more joint testing exercises between AI safety institutes and national labs across different countries. This would extend beyond the current International Network of AI Safety Institutes to include more bilateral arrangements or groupings involving nations such as Singapore, China, and the UK. The goal is to build shared testing capacity and a common vocabulary for discussing AI risks, both of which are currently limited mainly to interactions within the Safety Institutes network. This foundation is vital for developing a unified understanding of system risks. Sustained, topic-focused dialogue channels between key governments are also crucial for making progress on critical AI issues.
* These are Saad’s personal opinions and do not represent institutional positions held by the Safe AI Forum or other affiliated organisations.

