
Global Perspectives | Interview with Huw Roberts

March 25, 2026

Introduction

As the global governance of artificial intelligence continues to evolve, the issue of imbalanced AI development is becoming increasingly prominent. China and the United Nations play crucial roles in building sovereign AI capabilities. Currently, three major trends are shaping global AI governance: first, geopolitical factors will persistently influence technological development; second, the scientifically contested nature of AI is likely to endure and impact policy outcomes; and third, certain nations will serve as exemplars in promoting cooperation and benefit-sharing with Global South countries. All governance actors must adopt more inclusive and multilateral approaches to address interoperability challenges among diverse governance rules.

Interviewee Profile

Huw Roberts, Doctoral Researcher at the Oxford Internet Institute, University of Oxford

Interviewer

Zhang Shuyan

Research Assistant at the Center for Global AI Innovative Governance

Interview

Perhaps I could try to characterize three phases of AI governance. The first phase ran from 2018 to late 2022, when institutions like UNESCO and the OECD developed high-level frameworks, but those frameworks weren't very tangible. The second phase began with the release of ChatGPT in November 2022, which accelerated the conversation and attempts to address risks. The Bletchley Summit marked a high point in international cooperation, and academia and think tanks proposed ambitious ideas, such as new institutions similar to the IAEA. In autumn 2024, a third phase began, involving a more realistic assessment of what is needed and possible for global AI governance, given the current state of international relations.

Three factors led to this shift. First, geopolitics: the narrative of less governance and more innovation under the Trump administration has posed a barrier to cooperation. Second, institutional barriers: cooperation within legacy international institutions is difficult because they often face gridlock, and developing new, globally inclusive institutions is equally difficult because of political and ideological differences. Third, the scientifically contested nature of AI has produced little agreement on which policy issues to focus on. As seen at the Paris AI Summit, the discussion shifted from AI safety risks to a wider set of issues.

Since last year, I have witnessed three key milestones in global AI governance. First, the Shanghai Declaration and the UN Resolution on Enhancing International Cooperation on Capacity-Building of Artificial Intelligence, along with China's capacity-building strategy, indicate China's maturing position in global AI governance. Second, Ted Cruz's letter reflects the differences between the United States and the European Union over AI governance paths, and at the same time suggests that topics such as disinformation, fairness, and bias are currently particularly sensitive in the US context. Third, the Draghi Report, which emphasized that the EU should build its own capabilities, is influencing EU policy conversations, especially given the unpredictability of Trump's presidency. I believe these three developments reflect the differing attitudes and fundamental positions of China, the US, the UK, and the EU towards AI governance.

2024 World Artificial Intelligence Conference Releases Shanghai Declaration

Image credit: Xinhua News Agency

First, the Shanghai Declaration and the Draghi Report reflect the differing attitudes of the US and European countries towards AI capacity-building. The UK and the US differ in their responses. The US shows interest in China's AI capacity-building, possibly for misguided reasons. Previously, under the Biden administration, the US restricted Chinese access to technology. Recently, two views have emerged in the US: one calls for more technology restrictions; the other holds that the US should promote its technology globally. Private actors like OpenAI have shown such intent. US policy will likely shift towards promoting American technology and AI products. The UK and the EU, meanwhile, focus more on domestic issues: the UK emphasizes AI safety amid debate over the Frontier AI Bill, while the EU focuses on governance frameworks, debating innovation and digital sovereignty. I hope to see more cooperation on capacity-building, though I am not sure how to achieve it.

Secondly, Ted Cruz's letter indicates that the differences between Europe and the US on AI safety are widening. The US will further expand AI innovation, while the EU is engaged in a new round of debate on strengthening AI regulation. It is not yet clear whether cooperation can be reached between these two development models. I would add that there are also differences between the UK's approach to AI safety and the EU's. The EU is creating technical standards and a general-purpose code of practice for AI, aiming to have an overarching framework by next year. The UK, however, has set up an AI Security Institute and relies on existing regulators. It is also trying to introduce the Frontier AI Bill, though this has faced delays and uncertainty. In 2023, the EU initially did not consider AI safety important, but as discussions gained momentum it started taking the issue seriously, aligning with the UK. At the Paris AI Summit, however, Macron focused on almost everything except AI safety. It is unclear whether this represents a broader European shift, but the intensive focus on AI safety and security may be waning.

At present, regarding global AI governance, I believe the most significant trend is the rise of sovereign AI.

So far, the rise of AI has concentrated high-quality work and high-value segments of the AI value chain in AI-leading countries, causing discontent. For Global South countries, numerous questions remain about the underlying infrastructure and essential components required for developing AI sovereignty. For instance, during a recent conference, a Kenyan stakeholder highlighted that no datasets exist for one of the African languages spoken by over a million people in the region. Without this foundational data to train a local model, achieving cultural AI sovereignty becomes highly challenging. Interestingly, similar issues exist even in the Global North: the UK, for example, faces certain dependencies that hinder its pursuit of AI sovereignty.

I think China's capacity-building program and the United Nations should play a greater role in this regard. China's capacity-building is significant for AI sovereignty, especially given its open-source models. About a third of the top 12 models are Chinese, and 75% of those are open-source, offering opportunities for the Global South to develop suitable technologies. Capacity-building is also viewed as highly suitable for the UN, given its legitimacy in international affairs, and making these efforts as multilateral as possible would be beneficial as policies mature.

AI sovereignty also matters to the Global North. The technical standards under the EU AI Act, though not legally binding, are strongly recommended for compliance. These standards may create a Brussels effect, with even non-EU entities following them because of EU market regulations. Skepticism exists, but their influence in practice is hard to avoid. Around two-thirds of European standards align with international ones, and in key areas like AI management, European standards may have a broader impact, limiting third countries' ability to shape their own rules in a globally interconnected system.

Looking to the future, I see three trends in global AI governance. First, geopolitics constrains technical discussions. AI safety verification mechanisms, widely discussed a year ago, face difficulties in internationalization because of geopolitical conditions and ties to national security. Conversely, less controversial issues with general consensus might see progress despite institutional hurdles; the international scientific panel on AI in the Global Digital Compact is an example. Since we are in the third phase, in which everyone is moving beyond over-optimism to a more realistic stage, the focus should be on assessing existing initiatives, identifying areas for improvement, and enhancing cooperation between international institutions, even without direct country-to-country dialogue.

Global Digital Compact Policy Brief

Image credit: UN

Secondly, the scientifically contested nature of AI will likely persist, impacting policy outcomes. Opinions vary on when artificial general intelligence (AGI) might be achieved, leading to differing policy requirements. We might see a fourth stage triggered by major incidents or breakthroughs that could alter the current paradigm and overcome geopolitical barriers. However, until such events occur, there's disagreement on prioritizing AI policy issues due to scientific uncertainty.

Thirdly, certain countries will act as norm entrepreneurs in promoting international cooperation and benefit-sharing with the Global South. For advanced AI developers, prioritizing safety and security through common standards or aligned frameworks is key to maintaining regulatory standards. Despite the challenges to international cooperation, the UK shows promise, especially in its shifting policies toward China. Bilateral cooperation between the UK and China is steadily improving, including in AI, with recent dialogues and participation in events like the World AI Conference. The UK can bridge gaps, enabling AI safety governance dialogue between advanced and less advanced AI producers. China's capacity-building leadership is also important and should continue. Making these efforts more inclusive and multilateral can address interoperability issues.
