
Global AI Governance: Diverse Paths to AI Safety

March 25, 2026

From the UN’s establishment of an International Independent Scientific Panel on Artificial Intelligence and a Global Dialogue mechanism, to the recent rollout of national regulations and safety advisories on AI agents, the international community is exploring multiple pathways toward AI governance. What are the emerging characteristics of global AI governance? As 2026 unfolds, what new challenges and trends are taking shape? To address these questions, People’s Daily Overseas Edition invited experts to provide insights.

Visitors attend The Light of Internet Expo during the 2025 World Internet Conference on November 6, 2025. (Xinhua News Agency / Huang Zongzhi)

New Developments in AI Governance

As artificial intelligence technologies continue to evolve rapidly, the international community is entering a new phase of exploration regarding what should be regulated and how. From multilateral mechanisms to comprehensive legislation and scenario-specific regulatory frameworks, several recent developments have drawn attention.

Not long ago, the United Nations General Assembly appointed 40 members to the “International Independent Scientific Panel on Artificial Intelligence,” including experts in machine learning, data governance, public health, and human rights. In a statement, UN Secretary-General António Guterres noted that in August 2025, the General Assembly adopted a resolution on global AI governance, formally establishing both the panel and the “Global Dialogue on AI Governance” mechanism. The panel is expected to assess the risks, opportunities, and broader impacts of AI.

Recently, the Council of the European Union reached agreement on a proposal aimed at simplifying certain AI regulatory rules. The proposal focuses on streamlining parts of the EU’s regulatory framework, reducing compliance burdens for businesses, and restricting the use of AI for generating explicit content. According to the Council, the proposal is part of a broader legislative package proposed by the European Commission on regulatory simplification and represents adjustments to aspects of the AI Act.

At the same time, several U.S. states have introduced legislation targeting specific use cases. According to media reports, the New York State Legislature is considering a bill that would prohibit AI chatbots from providing legal or medical advice and would allow users to file lawsuits against operators that violate the rules. In addition, Newsweek has reported that multiple states are expected to introduce legislation this year requiring AI companion systems to disclose safety-related information and implement protective measures.

In parallel, several countries have introduced AI-related laws and governance measures. On March 1, Vietnam’s law on artificial intelligence officially came into effect, making it one of the first relatively comprehensive AI regulatory frameworks in Southeast Asia. Earlier, on January 22, Singapore released a Model AI Governance Framework for Agentic AI during the World Economic Forum in Davos, Switzerland, addressing potential risks associated with deployment. On the same day, South Korea announced the implementation of its Basic Act on the Development of Artificial Intelligence (AI) and Creation of a Foundation for Trust, requiring stricter measures for applications in high-risk sectors.

Li Yan, Director and Research Fellow at the Institute of Sci-Tech and Cyber Security Studies under the China Institutes of Contemporary International Relations, noted that global AI governance remains at an early stage of exploration, which reflects the current pace of technological development. From a technical perspective, AI continues to evolve rapidly, and both technological pathways and application scenarios have yet to reach a stable or fully defined stage. This creates what is often described as the Collingridge dilemma: regulatory interventions that are too early or too strict may hinder innovation, while delayed or insufficient regulation may fail to effectively manage risks and could increase governance costs.

“At this stage, AI governance across countries shows clear signs of fragmentation and diversity,” Li said. “There is no single model. Different countries are exploring governance approaches aligned with their own national conditions and development priorities, broadly forming patterns such as regulation-oriented, development-oriented, and balanced approaches.”

Jiang Tianjiao, Associate Professor at the Fudan Development Institute and research fellow at the Center for Global AI Innovative Governance, also noted that countries currently exhibit differing policy orientations in AI governance. Despite these variations, balancing innovation with safety is increasingly becoming a shared objective.

Concerns Over the Gap Between Technology and Regulation

As 2026 progresses, the challenges facing AI governance are becoming more complex. From data poisoning to risks emerging in diverse application scenarios, existing governance frameworks are facing new pressures. The rapid development of AI agents such as OpenClaw has further intensified concerns that regulatory frameworks may struggle to keep pace with technological change.

Recently, researchers from Harvard University, the Massachusetts Institute of Technology, and several partner institutions released a report titled Agents of Chaos. In simulated enterprise environments, the study deployed AI agents and recorded 11 serious security incidents within just two weeks. The report suggests that more than 60% of companies lack effective mechanisms to shut down malfunctioning agents, underscoring the urgent need to strengthen AI safety governance.

Maha Hosain Aziz, a professor at New York University, has also warned that AI agents could be exploited at scale by malicious actors, potentially ushering in a new phase of cybersecurity risks.

Jiang noted that AI agents raise concerns about a range of interconnected risks, including data security, privacy protection, cyberattacks, and malicious misuse. These risks go beyond what traditional governance frameworks were designed to address. When users delegate authority to agents for the sake of efficiency, questions arise about how responsibility should be assigned in the event of failures. Similarly, the misuse of agents in cyberattacks, fraud, and illicit industries underscores the need for a comprehensive governance framework covering data, agents, accountability, and enforcement.

Yajin Zhou, Associate Professor in the Department of Information Engineering at the Chinese University of Hong Kong, pointed out that early AI legislation primarily focused on models themselves, particularly how they process data and are applied in specific contexts. At that stage, responsibility was relatively concentrated, often assigned to model providers. However, as AI evolves into a more complex ecosystem, including model providers, tool developers, application developers, and deployment platforms such as cloud service providers, the chain of responsibility has become increasingly diffuse.

“In such a complex system, determining who should bear responsibility is a challenge that early legislation did not fully anticipate,” Zhou said. While regulatory frameworks attempt to abstractly define roles such as model providers, application developers, and deployers, the rapid evolution of the AI ecosystem means that these categories may quickly become outdated. The gap between legislation and technological development remains a persistent and unavoidable challenge.

Li added that in 2026, the core issues of AI governance are shifting from purely technical concerns toward broader rule-making and institutional development, with three key challenges emerging.

First, balancing development and security has become a central difficulty. The rapid pace of technological iteration appears to be in tension with the relatively slower development of governance frameworks, and the notion that “regulation is struggling to keep up with code” is widely seen as a common challenge across countries. Drawing boundaries between the benefits of innovation and risks such as algorithmic opacity, data breaches, and deepfakes has become a major test for regulators.

Second, there are limitations in agile governance capacity. The cross-regional and cross-sector diffusion of AI technologies has outpaced the responsiveness of traditional regulatory systems. At the same time, cross-border data flows and the borderless nature of algorithmic models make it difficult for any single country to address these challenges independently.

Third, the allocation of power and responsibility remains unclear. Key issues—including algorithmic accountability, platform compliance obligations, and risk preparedness for advanced AI systems—still lack widely accepted global standards, suggesting that the development of effective governance frameworks remains an ongoing and complex task.

Dialogue and Cooperation as Essential Paths Forward

Despite growing risks, countries continue to explore governance solutions, and several innovative practices have emerged.

Among them, regulatory sandboxes are increasingly being adopted as flexible governance tools. The EU Artificial Intelligence Act explicitly calls for the establishment of such mechanisms. Spain launched the EU’s first AI regulatory sandbox pilot in November 2023, and several other European countries are advancing similar initiatives.

In China, the Cyberspace Administration of China released the Interim Measures for the Management of Anthropomorphic AI Interactive Services (Exposure Draft) in December 2025, proposing the introduction of regulatory sandbox mechanisms that would allow companies to conduct innovation trials and safety testing under regulatory supervision.

Jiang noted that in regions such as the EU, the United Kingdom, and Singapore, regulatory sandboxes are gradually evolving from pilot tools into more institutionalized arrangements. These mechanisms provide controlled environments for testing applications in areas such as autonomous driving and AI-powered healthcare, while also helping to define acceptable risk boundaries.

Beyond sandboxes, scenario-based legislation is also expanding. For example, the United States has introduced laws addressing issues such as liability in autonomous driving and deepfakes. In China, discussions during the annual “Two Sessions” have also focused on legislative proposals related to AI-generated content and algorithmic discrimination.

Experts emphasize that diverse governance approaches do not imply fragmentation. Dialogue and cooperation remain essential at the global level.

“As an emerging technology, AI is inherently cross-border and cross-sectoral,” Zhou noted. “Models developed in one country can serve users worldwide. If regulatory frameworks diverge significantly, compliance costs for companies may increase substantially.” He added that many challenges—such as cross-border deepfake dissemination, election interference, and AI-enabled cyberattacks—already required international cooperation before the rise of AI and have now become even more pressing.

Li also emphasized that countries are exploring the integration of voluntary international norms and technical standards. Historical experience suggests that sharing best practices is key to overcoming governance challenges. Multilateral dialogue mechanisms can help bridge regulatory gaps and support the development of governance frameworks that balance national conditions with global interests.

“While there is a growing consensus on general principles of global AI governance, a stable and mature system has yet to take shape,” Li said. In this context, China’s Global AI Governance Initiative proposes principles such as common, comprehensive, cooperative, and sustainable security, offering a potential reference for international discussions.

Jiang concluded that the urgency of international dialogue on AI governance is increasing. Risks such as deepfakes, cyberattacks, and the potential misuse of AI in more severe threats highlight the interconnected nature of global security. “A regulatory gap in one country may become a global risk source,” he noted. Strengthening dialogue, coordinating policies, and gradually establishing mutual recognition of standards and joint evaluation mechanisms will be key to building an inclusive and effective global AI governance system.

Original link: https://mp.weixin.qq.com/s/pWnqsdjqHsrzygu9GJhKBA

