Abstract
Global AI governance is transitioning from the establishment of principles to regulatory implementation, yet fragmentation remains an issue. We are now entering a phase of regulatory diversification characterized by clearer rules and more divergent models. The United Nations should play a central role in norm-setting and coordination, ensuring that AI governance aligns with broader human development goals. Accountability, sovereignty, and interoperability are key issues, while asymmetries in capacity, discourse power, and access pose the greatest challenges. Rather than adhering to a one-size-fits-all global framework, countries should focus on common risks such as model misuse, security failures, and systemic biases, promote pragmatic coordination, and acknowledge the reality of regulatory diversity.
Interviewee Profile

Researcher, Institute for Foreign Policy and Strategic Studies, Diplomatic Academy of Vietnam
Interviewer
Jiang Junji, Research Assistant at the Center for Global AI Innovative Governance
Interview
Over the past year, global AI governance has moved decisively from principle-setting to regulatory implementation, amid rising geopolitical competition. In my view, several milestones stand out. First, the EU AI Act, adopted in July 2024 and in force since August, has become the world's most comprehensive AI regulation. Its risk-based model, bans on certain AI uses, and early compliance incentives (via the AI Pact) have shaped global regulatory trends. Closely aligned is the Council of Europe's Framework Convention on AI, the first binding international treaty on AI and human rights. Together, these efforts have made Europe the normative frontrunner.
Second, global forums revealed widening divergence. The Paris AI Action Summit exposed rifts: the EU emphasized human rights; the U.S. and UK prioritized innovation and national security; and China took a more open and constructive approach, launching its AI Safety Institute and continuing to refine domestic rules. Notably, the Shanghai Declaration on Global AI Governance, announced earlier, reaffirmed principles of sovereignty, inclusiveness, and development, offering a Global South–oriented vision that gained resonance in parts of Asia and Africa.

Paris AI Action Summit. Image source: The Associated Press
Third, while the UN and ITU advanced multilateral dialogue—such as through AI Governance Day and capacity-building initiatives—fragmentation persists. ISO and national frameworks (e.g., NIST, the EU's DORA) continue to diverge. Many developing countries remain on the periphery, lacking meaningful input into standard-setting.
In short, global AI governance is entering a phase of regulatory pluralism: clearer rules, but divergent models. Whether these can be bridged through inclusive multilateralism or will harden into competing blocs remains an open question. The Shanghai Declaration offers a valuable alternative perspective on global AI governance—one that stresses sovereignty, development, and inclusiveness. It is a timely reminder that AI norms must reflect diverse pathways, not just Western regulatory logic. For countries like Vietnam, it reinforces the legitimacy of development-first approaches. The UN resolution on capacity building is also a positive step, signaling consensus on the need to support Global South participation. Overall, I think both the Shanghai Declaration and the UN resolution on capacity building contribute constructively to the broader landscape of global AI governance efforts—but their influence will be measured by whether they lead to a meaningful redistribution of governance power and resources, not just statements of principle.

Shanghai Declaration Released. Image source: Baidu Baike
The UN plays a normative and convening role, rather than a regulatory one. Bodies like UNESCO and the ITU have been essential in setting shared principles and offering platforms—like AI for Good—for inclusive dialogue, especially for underrepresented countries. The recent UN resolution on capacity building and the Secretary-General's advisory bodies show growing ambition. But the UN's real value lies in ensuring that AI governance remains tied to broader human development goals—not just economic or security priorities. That said, without more funding and alignment among member states, the UN risks being a forum of consensus statements rather than a driver of concrete outcomes. Its future influence will depend on how well it bridges the gap between principle and practice.

AI for Good Global Summit. Image source: ITU
I think the most crucial keywords in current global AI governance are accountability, sovereignty, and interoperability. Accountability reflects growing demand for clear responsibility for AI outcomes—especially in high-risk domains like defense, finance, and healthcare. It is no longer enough to issue ethical principles; systems need traceability, auditability, and legal liability frameworks. Sovereignty has re-emerged as a central concern, particularly for middle- and low-income countries. The ability to shape national AI trajectories—without being locked into dependency on major powers or platforms—is now a governance priority. Interoperability may be the most practical and important. With divergent regulatory models emerging in the EU, the U.S., and China, building bridges—technical, legal, and normative—between systems is essential to avoid global fragmentation, a concern shared by many, if not all, small and mid-sized countries.
The biggest challenge of global AI governance is asymmetry—in capacity, voice, and access. Most Global South countries are rule-takers, not rule-makers. They lack not only the infrastructure to build foundational models but also the institutional capacity to regulate or audit them effectively. A related issue is the growing representation gap in standard-setting bodies and major governance forums. Without more inclusive participation, global AI norms risk reproducing existing inequalities under the banner of responsible AI, or even becoming an instrument of major-power dominance.
We expect a shift toward regional blocs with overlapping but distinct governance models—the EU emphasizing rights, the U.S. focusing on innovation and security, and China promoting state-led, development-centric frameworks. Full global harmonization remains unlikely. For Southeast Asia and countries like Vietnam, the key trend will be strategic navigation—adopting elements from different models while preserving flexibility. We'll also likely see growing demand for capacity partnerships, especially around compute access, safety testing, and regulatory expertise. Finally, adaptive governance—lightweight, iterative approaches suited to fast-moving technologies—will become more important than rigid legal frameworks. The challenge will be staying agile without sacrificing sovereignty or safety.
Instead of chasing a one-size-fits-all global framework, countries should focus on building pragmatic alignment around shared risks—like model misuse, safety failures, and systemic bias—while accepting that regulatory diversity is here to stay. Cooperation should be modular and interest-based, not ideological. That means forming small, minilateral initiatives on specific issues, rather than waiting for universal consensus. This is especially important for fast-moving domains where delay equals irrelevance. For many countries in Southeast Asia and beyond, the priority is not just inclusion but strategic positioning: securing access to critical AI infrastructure, investing in domestic capacity, and engaging all major governance platforms without being locked into any single power's regulatory orbit.

