Abstract
AI is currently moving from technological breakthroughs to broad application, but governance pathways and discourse are lagging behind. AI governance can be understood along two dimensions: one is capacity, where the central difficulty is the continuing rise of barriers to trade, policy coordination, and technology access; the other is the governance of how AI is used, where the difficulty lies in identifying common challenges across countries and institutional systems and establishing enforceable mechanisms. Countries expect AI systems to reflect their national values and are pressing their claims to data and digital sovereignty, but this may further exacerbate global fragmentation. Looking ahead, different countries and circumstances will require differentiated governance paths: some areas will call for binding regulation, while others will be better served by flexible standards.
Interviewee Profile

Karman Lucero
Associate Research Scholar and Senior Fellow,
Paul Tsai China Center at Yale Law School
Interviewer
Xiao Zehui
Research Assistant at the Center for Global AI Innovative Governance
Interview
Over the past year, global AI governance has entered a new and complicated phase. The Paris AI Summit stands out as a crucial shift: we are heading into a period of advancement and implementation of AI, but governance has not truly kept up with the speed of technological evolution, particularly at the global level. Many participants, including U.S. Vice President JD Vance, placed a heavy emphasis on innovation and framed it as being in tension with many forms of regulation. Even when discussions did focus on regulation or governance, they often failed to engage meaningfully with the perspectives of others. As a result, many dialogues lacked convergence, making it difficult to identify actionable common themes. This can be attributed to various factors, including recent technological advancements, changing national regulations, and shifting perceptions of the geopolitical environment.
Technological advancement is one factor. In the past year, open-source AI models developed in countries like China, such as DeepSeek, have performed impressively on global benchmarks, while American models have pushed the boundaries of what is considered possible with AI. This has challenged old narratives about who is leading in AI and what sort of impact AI will have. Questions about who has the power to shape global norms and regulations, and how they will do so, are more ambiguous than ever. Different benchmarks serve different purposes, each measuring a particular kind of performance for a model. Some models have changed the game by performing well across many benchmarks, a development that makes governance, especially at the international level, more urgent but also more difficult. The longstanding dynamic by which innovation outpaces regulation may be growing even more pronounced. International declarations, such as the Paris Declaration (from the eponymous summit) or the Shanghai Declaration (from the WAIC), tend to emphasize broad values that invite broad, subjective interpretations.
Capacity building is arguably the most critical challenge in global AI governance—not just for countries in the Global South, but for nearly every nation outside the U.S. and China. It encompasses not only hardware such as data centers and chips, but also software, talent, energy infrastructure, compute resources, and the institutional frameworks necessary for training professionals and supporting domestic AI ecosystems. Achieving this level of readiness requires substantial infrastructure and long-term investment, which most countries currently lack. In practice, capacity building is not primarily driven by national governments—though there are exceptions—but by private companies, which are actively engaging emerging markets with a view toward strategic commercial expansion. While capacity building is essential, it is neither advancing fast enough nor coordinated at the global level. Several American firms have already begun building partnerships with countries to address this gap, and with a more comprehensive AI governance policy expected from the Trump administration in July, there may soon be policies to support these efforts.
AI governance can be understood along two dimensions: first, the capacity to develop AI—who has access to chips, compute infrastructure, talent, and data—and second, the governance of how companies, countries, and private actors use AI. The key issues on the capacity side are the growing trade barriers and the growing political or access barriers being raised by leading powers. The key issue on the governance side is identifying common challenges and mechanisms of implementation that span borders and different institutional frameworks. Global risks associated with AI—from biosecurity to nuclear command systems—have prompted some shared declarations, such as the agreement between Presidents Biden and Xi last year that AI should not be used to launch nuclear weapons. However, such agreements quickly become complicated in practice. It is relatively easy to list shared concerns, but far more difficult to act on them in a coordinated and credible way. The primary challenge is not necessarily the technology or the risks themselves, but rather how different countries and different actors can communicate, think about, and engage on those risks together.
The concept of data and digital sovereignty makes sense on multiple levels. At the hardware level, countries want to ensure that their AI systems cannot be disabled or controlled by external actors. This is one reason why data sovereignty is appealing—it implies that data and infrastructure should be under the control of domestic political authorities, not accessible or subject to intervention by foreign governments. Beyond hardware and security, there are also cultural and political dimensions. Data security concerns long predate AI; if someone knows too much about you, they can use that information to manipulate or harm you. With AI, these concerns are amplified. When data is used to train models, those models inevitably acquire cultural biases and political assumptions. For this reason, it is understandable that governments and societies want AI systems to reflect their own values and operate in their own languages. On the other hand, demands for sovereignty around AI and data will exacerbate growing global fragmentation and the weakening of international institutions.
Looking forward, the trajectory of AI governance is unpredictable, at least in the short term. Technological progress will continue, perhaps even accelerate, but governance mechanisms will likely lag behind. As competition between major powers intensifies, many engagements in third countries may be viewed through a lens of national security rather than cooperation. The incentives for major countries to work together on a coherent governance framework are not yet strong enough to induce real action. For individual countries, different circumstances and contexts will require different approaches to governance in order to succeed. In some cases, hard law can provide the clarity and stability that encourages investment. In other cases, especially in early-stage or experimental domains, flexible standards may be more appropriate. The key is not to pursue a one-size-fits-all solution, but rather to develop local regulatory capacity and adaptability.
It is possible for the U.S. and China to restart dialogue on AI governance. For now, however, both countries are placing more importance on competition. We could reach a point where both countries see dialogue as being in their interest, even toward the ends of competition, but we are not there yet. In terms of capacity building, the U.S. and China could also work together to establish a framework that enables third countries to collaborate in developing AI capabilities. While such cooperation may be unlikely in practice, it remains a feasible and highly beneficial possibility for advancing global governance. There is no contradiction between capacity building and having a leadership role. When the U.S. or another country helps other countries develop their own capacity, it is building the foundation of its own leadership as well.

