Abstract
Artificial intelligence is not yet a safe or equal technology. While holding great promise, it is also accompanied by significant potential risks. AI governance is therefore particularly necessary, with science, safety, and access serving as its three cornerstones. Although AI governance has made milestone achievements such as the Global Digital Compact, it still faces challenges including geopolitical disruptions, disagreements on governance approaches, and inadequacies in capacity building. Going forward, we should adhere to a multilateral approach, enhance governance transparency, make full use of financial tools such as Official Development Assistance (ODA), and strengthen the role of multilateral institutions like the United Nations.
Interviewee Profile

Maxime Stauffer
Co-founder and Co-CEO of the Simon Institute for Longterm Governance
Interviewer
Xiao Zehui
Research Assistant at the Center for Global AI Innovative Governance
Interview
Over the past year, several key events have shaped the field of global artificial intelligence governance. We asked Maxime Stauffer to reflect on these developments and on the state of global AI governance.
Artificial intelligence is not a safe technology. Nor is it an equal one. It holds immense promise, but it is also a dual-use tool with significant destabilizing potential. For that reason, I believe AI governance is an absolute necessity. The governance architecture we are building needs three foundations: science, safety, and access.
First, a mechanism is needed to track and assess the state of AI development. The UN recently agreed to establish an Independent, International Scientific Panel on AI, which, similar to the IPCC, will regularly report on technological trajectories and societal impacts, providing a shared basis for decision-making among governments, companies, and civil society. While it remains in its early implementation stages, this is a promising first step towards establishing global scientific consensus on AI.
Second, a global safety and verification regime is required. AI presents not only dual-use concerns but also real risks of loss of control. There is no reliable system today to ensure that AI developers adhere to safety protocols, nor are there mechanisms for monitoring or verifying compliance.
Third, benefit-sharing and capacity-building mechanisms are needed to correct the distributional imbalances of AI. In the short term, mechanisms must be developed to redistribute the value created by AI, which is otherwise likely to concentrate in a few hands. In the longer term, developing countries need investment in digital infrastructure, energy systems, and education to ensure they can deploy AI systems and participate meaningfully in the emerging global AI economy.
We are beginning to see these elements emerge—albeit slowly and unevenly. A milestone achievement came last year, when UN member states adopted the Global Digital Compact, which contains a dedicated section on AI. This section proposes three important steps forward. First, the creation of a scientific panel on AI, which, as mentioned, is tasked with updating the world continuously on the trajectory of the technology and its consequences. Second, the establishment of a global dialogue on AI governance, providing a forum for negotiation of universal norms and coordination of capacity-building programs. Third, the exploration of a global AI fund, designed to help finance infrastructure projects—especially in under-resourced regions.
This direction is essential because most prior initiatives—such as the UK's AI Safety Summits, the Hiroshima Process under the G7, the OECD's AI work, or the Global Partnership on AI—are driven largely by the Global North and fail to create the kind of universal, equitable ecosystem that we need. The UN-led efforts offer a more legitimate and representative path.
However, the current state of global governance is nowhere near where it needs to be. Only about 2% of the UN system's outputs concern science and technology, and most of its member states have no tradition of working on advanced tech governance. So while the Global Digital Compact offers a promising framework, the actual machinery of implementation needs further strengthening.
There are several interlocking challenges we must address if we are to make meaningful progress on global AI governance. First, AI is becoming a destabilizing force in geopolitics. I'm not referring to its possible use in nuclear command and control systems, but rather to the fact that AI is changing economic structures and global power balances in unpredictable ways. No one has experience managing this kind of transformation. As uncertainty rises, so does the risk of misinterpretation and miscalculation. Countries may act not on facts, but on incorrect assumptions about others' intentions. That increases the risk of escalation—trade wars, sanctions, or worse.
Second, when opinions differ, dialogue is necessary. Increasingly, countries treat dialogue as something that only happens when there's already consensus. But that misses the point. Dialogue is most needed when trust is low. We need to re-learn how to talk even when we are not aligned. Thus far, discussions between major powers on AI have encountered difficulties. Last year's China-US talks in Geneva, while producing limited results, should not be seen as a failure; continued efforts remain essential, as without them there is no possibility of effective governance.
Third, capacity building presents another challenge. It features prominently in the Shanghai Declaration and the UN resolutions, and while the intention is right, the discussion lacks depth—particularly on how AI-related capacity building differs from traditional infrastructure projects. Building internet infrastructure is good, whether we have AI or not. What's specific to AI is how to ensure that developing countries are not permanently excluded from the benefits of this technology. We also lack serious cost assessments—without which it's hard to mobilize private sector support.
Another layer of complexity is open-source AI development. Open-source models can help democratize AI development and make the benefits more widely available. But open source is not an all-or-nothing proposition. Some models—particularly those with potential for misuse in bioweapons or terrorism—should not be open. We need to be much more precise about what should and shouldn't be made public.
The situation for the Global South is particularly concerning. I see three major risks here. First, international inequality: the gap between countries that can build advanced models and those that can't will only grow. Second, domestic inequality: in countries like Egypt or Kenya, even if local AI systems are built, the benefits are likely to go to the elites, not the broader population. Third, geopolitical dependence: most Global South countries will not develop their own frontier models and must rely on the technological systems of leading countries; this dependence also carries the risk of diffusing potentially unsafe AI models.
Looking forward, one concrete step is to promote transparency. If governments shared more about what they're doing—both technically and strategically—we would have fewer misunderstandings.
We also need financial tools to support governance and inclusion. One option is to redefine Official Development Assistance (ODA). At present, digital infrastructure and AI are excluded from most ODA frameworks, meaning that institutions like the World Bank or Chinese development banks cannot fund AI-related projects. Changing the definition of ODA to include digital technologies would unlock substantial investment.
Lastly, it will be important to maintain strong multilateral institutions, like the UN, and in the short term, promote the participation of multiple stakeholders in governance. Geneva can play an important supporting role, not necessarily as a government actor, but as a location. It is home to many trusted international organizations, and most countries are already represented there, making it a relatively neutral platform and an ideal place for dialogue.
The path forward remains uncertain, with shifting political winds and competing interests creating a series of obstacles. But the foundations for global AI governance are being laid, through the UN's scientific panel, new dialogue mechanisms, and emerging financial frameworks. Progress may be slow and uneven, and may come through minilateral and bilateral pathways as well, but the alternative to international cooperation is far worse. We cannot afford to abandon the multilateral approach.

