International Perspectives

Global Perspectives | Interview with Sundeep Waslekar

March 25, 2026

Abstract

Over the past year, global AI governance has made positive progress, but the divergence in risk concerns between Western countries and Global South countries has hindered cooperation. To build a balanced, inclusive, and forward-looking governance system, attention should be paid to four aspects: first, enhance Global South countries' understanding of both the near-term and long-term development of AI and its risks; second, promote cooperation among Global South countries; third, encourage Global South countries to explore independent large-model technologies; and fourth, integrate risk mitigation measures with development demands, bridging the gap between Global South countries and countries with advanced AI capabilities within the framework of global AI governance.

Interviewee Profile

Sundeep Waslekar

President, Strategic Foresight Group

Interviewer

Jiang Junji

Research Assistant at the Center for Global AI Innovative Governance

Interview

We believe that the following key events have occurred in the field of global artificial intelligence governance over the past year:

1) The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was opened for signature on 5 September 2024.

2) Canada, South Korea, and India established AI Safety Institutes, joining roughly a dozen such institutes that now exist. On November 21, 2024, the International Network of AI Safety Institutes was launched.

3) In January 2025, the International AI Safety Report, led by Yoshua Bengio, was published. It assesses risks from general-purpose AI and provides mitigation strategies. In May 2025, the Singapore Consensus on Global AI Safety Research Priorities, a report on AI safety measures, was published at an AI safety conference in Singapore.

4) In September 2024, approximately 60 countries, including the United States, endorsed a non-binding blueprint for action at the Responsible AI in the Military Domain (REAIM) summit in Seoul.

5) On December 19, 2024, the UN Security Council hosted a ministerial-level briefing on AI and its implications for international peace and security. UN Secretary-General António Guterres emphasized the urgent need for international guardrails to ensure AI remains under human oversight, particularly in military applications.

6) In December 2024, the Strategic Foresight Group and the Geneva Centre for Security Policy concluded a three-year track-two dialogue process on AI-NC3 convergence among the P5 UN Security Council member countries. It resulted in the Framework for Responsible Use of AI in the Nuclear Domain, published by the Future of Life Institute in February 2025, which outlines the need for an international framework addressing the convergence of AI and nuclear command, control, and communications systems.

7) In September 2024, California Governor Gavin Newsom vetoed Senate Bill 1047, which aimed to hold large AI model developers accountable for potential catastrophic harm.

8) In January 2025, the Trump administration rescinded a previous executive order that required major AI developers to share safety assessments with the federal government. This move effectively left the U.S. without formal AI guidelines, raising concerns about unchecked AI development. President Trump announced his intention to issue a new AI policy in July 2025 and asked for policy suggestions.

9) On April 26, 2025, President Xi Jinping of China delivered an important speech on AI policy at a CPC Central Committee forum, placing emphasis on mitigating high-end risks.

10) In May 2025, shortly after his election to the supreme position in the Catholic Church, Pope Leo XIV issued a warning about the societal risks of AI, emphasizing threats to human dignity, justice, and labour. Drawing parallels to Pope Leo XIII's 1891 encyclical Rerum Novarum, which addressed the challenges of the Industrial Revolution, Pope Leo XIV likened the current AI revolution to a transformative era requiring moral clarity and guidance.

While some Western countries are concerned about the extreme, potentially catastrophic and existential, risks posed by advanced AI systems, Global South countries are concerned with access to technology for social good and economic betterment, and with risks at the level of the misuse of AI for fraud and crime. These two divergent priorities make integrated global AI governance impossible.

Looking ahead, advanced companies in the US, UK, China, and potentially South Korea are moving in several directions: building AI that can handle complex tasks (OpenAI released o3 in December 2024); AI that evolves without reliance on human-curated datasets (DeepMind's Boundless Socratic Learning, BSL, released in October 2024); AI that integrates language, vision, and physical action to enable robots to perform complex tasks (DeepMind's Gemini Robotics, released in March 2025); biocomputers composed of human brain organoids, offering a novel approach to AI computation with significantly lower energy consumption (launched by FinalSpark in August 2024; still early-stage and experimental, but with earth-shaking long-term potential); and AI technology to interpret animal vocalizations and body language into human language, aiming to bridge interspecies communication gaps (a patent filed by Baidu in May 2025; reported in the media but little is yet known, it would be considered revolutionary in expanding the boundaries of cognition research once the company releases more scientific details).

Thus, advanced companies in the US, UK, China, and potentially South Korea are moving toward using AI to redefine science itself: to create new molecules and to discover new mathematical theorems and laws of physics. The future of AI in these countries will involve a combination of advanced LLMs, cognitive engineering, neuro-symbolic AI, neuromorphic computing, and federated AI. Saudi Arabia and the UAE are integrating themselves into these developments by investing billions of dollars in US big tech, particularly since January 2025. The Global South countries are far removed from these developments, focusing on tweaking foundation models developed in the West to build local-language LLMs while ignoring any discussion of AGI and advancements such as cognitive engineering and neuro-symbolic AI. The Global South countries want to use the technologies developed by AI scientists, including the two Nobel Laureates of 2024 and the Turing Award Laureates, but they do not want to discuss the catastrophic risks of future AI advancements that the same scientists have warned about. As a result, Global South countries, except China and South Korea, may end up as rule takers, accepting the rules set by Western countries and using Western technologies for local applications, rather than rule shapers.

This gap is going to create a split wide open, in terms of both risks and opportunities. The advanced countries will use AI to challenge the frontiers of science and may pose catastrophic and existential risks to humanity if some of their efforts result in AI evolution beyond human control. Another risk is power competition among a few players, with some of them influencing the destiny of the planet in a way nobody has done in the history of human civilization. The Global South countries, with their focus on local language models and data sovereignty, face the possibility of playing the role of consumers, labour suppliers, and secondary actors.

It is necessary to develop AI governance that is balanced, inclusive, and forward-looking, and that addresses risks as well as opportunities. The UN is an important platform, but the question is whether it can play an effective role. There is a valuable opportunity to strengthen its role in global AI governance by exploring mechanisms that go beyond non-binding declarations.

It is necessary to:

1) Create awareness in the Global South about AI advancements and risks in the near and distant future;

2) Promote cooperation among Global South countries so that they can collectively harness some of the potential benefits of AI;

3) Encourage the Global South to consider options other than data-heavy and energy-heavy LLMs;

4) Bridge the gaps between the Global South and advanced AI countries by connecting risk mitigation measures with development aspirations in a new and balanced framework for global AI governance;

5) Discuss the global AI governance framework developed by SFG when it is ready (possibly in partnership with other institutions).
