Introduction
A few days ago, the Center for Global AI Innovative Governance released the bilingual version of the report New Trends in Global Artificial Intelligence Governance: Observations from the Shanghai Declaration. To gain a more comprehensive understanding of the complex landscape of global artificial intelligence governance, and to hear more diverse, cross-disciplinary, and cross-national voices, the project team interviewed authoritative experts and practitioners in the field of artificial intelligence governance from more than ten countries. The interviewees came not only from economically and technologically developed countries of the Global North, but also from emerging economies and developing countries of the Global South.
We will publish excerpts from the interviews in the report in installments, to show readers how experts from different regions and fields around the world view the current state and prospects of artificial intelligence governance.
Abstract
Since the release of ChatGPT in 2022, AI has developed rapidly. Documents such as the EU's AI Act, UNESCO's Readiness Assessment Methodology, and the OECD AI Principles jointly shape the global governance framework for AI. The key challenges currently facing global AI governance are talent shortages and digital divides, as well as competition among countries for AI dividends. Countries in the Global South are particularly constrained by insufficient infrastructure and weak talent retention. Going forward, governance must strike a balance between innovation, risk mitigation, and inclusiveness, relying on international collaboration to build an interoperable, evidence-based governance framework.
Interviewee Profile

Fahmi Islami
Lead Consultant at Mandala Consulting

Catherine Salim
Associate Consultant at Mandala Consulting

Alifian Arrazi
Associate Consultant at Mandala Consulting
Interviewer
Yang Zhao, Research Assistant at the Center for Global AI Innovative Governance
Interview
The rapid advancement of artificial intelligence (AI), exemplified by the release of ChatGPT in 2022, has accelerated global efforts to regulate AI. Over the past year, the landscape of global AI governance has evolved significantly, marked by the emergence of key policy frameworks, regional disparities, and pressing challenges, particularly for countries in the Global South.
Three key global policy documents have shaped the AI governance discourse.
The EU's Artificial Intelligence Act (AIA) employs a risk-based approach, imposing sanctions on developers and deployers for non-compliance. Its "Brussels effect" has influenced policymakers worldwide, with Indonesia referencing its regulatory logic to analyze sector-specific risks. UNESCO's Readiness Assessment Methodology provides a standardized framework for quantifying AI development needs, guiding governments in prioritizing areas for safety and societal preparedness; Indonesia's policy community has used this methodology to structure its national AI strategy. The OECD AI Principles have become foundational in defining responsible AI, serving as a lingua franca in policy circles and a reference for national regulatory frameworks.
The release of ChatGPT in 2022 triggered a regulatory rush, with countries scrambling to address the impacts of large language models and general-purpose AI. Indonesia and other Global South countries have adopted a benchmarking approach, drawing on both domestic and regional regulations, such as the EU AIA and Singapore's governance model, and international ethical principles. In addition, policies like China's Interim Measures for the Management of Generative Artificial Intelligence Services (《生成式人工智能服务管理暂行办法》) have had a more tangible impact in Asia, while the UN resolution on AI capacity building and the Shanghai Declaration could, as a next step, translate their commitments into actionable tools for policymakers.
Global AI governance currently faces two critical challenges, particularly in the Global South and developing countries. The first is talent shortages and the digital divide. Taking Indonesia as an example, we face a dual talent crisis. On the one hand, brain drain: skilled AI developers are lured away by Western tech companies, depleting local ecosystems. On the other hand, skills gaps: AI's disruption of traditional jobs demands workforce upskilling and reskilling, but Indonesia's stark digital divide, with unequal internet access and literacy between Java and rural areas, hinders widespread training. The second is the race among countries to capture AI's economic benefits. In Southeast Asia, Singapore and Malaysia are attracting data centers and AI investment, while Indonesia is aligning with the global trend of fostering an AI-enabled economy. This competition will drive policies to court AI-related industries, though Global South countries may struggle to compete without robust infrastructure or talent retention strategies. Lacking a clear competitive edge, such countries could instead seek to define their own niches.
For AI governance to function well, we need to address the defects of risk-based assessment systems and design innovation-friendly regulations. Current AI risk classification relies heavily on political processes rather than empirical evidence: the EU's high-risk categorization, while widely adopted, lacks comprehensive risk-benefit analysis. Indonesia argues for targeted regulation of specific AI applications, for example deepfakes, as in China, or medical AI, as in Singapore, rather than one-size-fits-all approaches, emphasizing the need for data-driven risk evaluation that balances innovation and safety. Since the validity of the EU's high-risk model is under scrutiny, future governance may pivot toward evidence-based, risk-benefit frameworks that integrate systematic analysis to ensure classifications are transparent and empirically grounded. Indonesia's approach exemplifies a trend among developing countries from soft law toward moderate law: the country has issued a national AI strategy, white papers, and sectoral guidelines, including for fintech, that rely on voluntary standards rather than strict mandates. Future domestic regulations are expected to continue this balanced approach, prioritizing innovation while mitigating targeted risks.
At the same time, we need to intensify cooperation in global AI governance. At the level of actors, international organizations and regional coordination play an important role. The UN, particularly UNESCO, has been pivotal in norm-setting: its Readiness Assessment Methodology is being integrated into Indonesia's regulatory framework, providing a roadmap for national AI development. While UN frameworks are non-binding, they establish shared standards that countries can adapt. Regional bodies like ASEAN are increasingly vital for harmonizing regulations. By promoting mutual recognition of AI testing standards and minimum norms, ASEAN aims to reduce cross-border barriers, enabling innovations such as Indonesian AI solutions to scale across Southeast Asia. This coordination is essential for maximizing AI's global impact, as fragmented regulations could stifle cross-border innovation.
At the level of action, there are two approaches to designing collaborative global AI governance. The first is resource sharing and knowledge exchange: countries should leverage platforms like the UN and ASEAN to pool financial resources for AI research and development, share non-sensitive datasets to support technical innovation, and create inventories of AI use cases, especially in healthcare and fintech, to inform policy decisions, as seen in Singapore's case studies. The second is interoperability and standardization: establishing global standards for AI testing and certification is crucial. Harmonized metrics for risk and safety would enable cross-border validation of AI systems, while regulatory coordination would prevent fragmented markets. This includes standardizing technical protocols, code frameworks, and compliance procedures.
Looking forward, global AI governance stands at a crossroads, requiring nations to balance innovation, risk mitigation, and inclusive development. For the Global South, challenges like talent shortages and digital divides demand tailored solutions, while international collaboration remains essential for creating interoperable, evidence-based frameworks. As Indonesia and others navigate this landscape, they should focus on soft law, regional coordination, and empirical risk assessment to shape a governance model that prioritizes both technological progress and societal well-being.

