This post discusses arguments for and against inviting countries from the Global South to international AI governance discussions hosted by developed countries. It suggests that the arguments for inclusion are underappreciated.
GovAI research blog posts represent the views of their authors, rather than the views of the organisation.
Introduction
In recent years, many countries have called for the international governance of AI. There is a growing sense that international coordination will be needed to manage risks from AI, which may range from copyright infringement to the proliferation of unconventional weapons.
However, many key early international discussions have taken place in forums that exclude the Global South. These exclusive forums include the G7, OECD, and Global Partnership on AI.1 The upcoming UK-hosted AI Safety Summit — whose invitees reportedly include not only China, a global AI power, but also some Global South countries further from the technological frontier — may partly break from this pattern. Still, there is no doubt that consequential AI governance discussions are heavily concentrated within developed countries.
There are a number of reasons why policymakers in developed countries may prefer, at least initially, to talk mostly amongst themselves. One argument is that smaller, more homogeneous groups can reach consensus more quickly. Another argument is that — since only a small group of countries produce most of the cutting-edge AI technology that is used globally — only a small group of countries need to coordinate to reduce most of the global risk.
However, policymakers in developed countries should not underestimate the value of including a broad range of Global South countries. As AI capabilities diffuse, the success of global governance regimes will ultimately hinge on the participation of countries from across the world. Including a broad set of countries in conversation now can help to avoid governance failures down the line. More globally inclusive conversations can also help to preempt the emergence of competing coalitions, increase the supply of expertise, and avoid the ethical problems inherent in excluding most of the world from decisions with global importance.
Why policymakers often prefer exclusive forums
Policymakers in developed countries seem, so far, to have a preference for discussing international AI governance in exclusive forums. There are three main arguments that may explain this preference:
- Limiting the number of parties in global AI governance discussions can make it easier to reach consensus. Involving a larger and more diverse set of stakeholders will tend to make discussions less efficient and introduce additional complexity. This additional complexity could prolong — or even derail — the already difficult process of consensus-building.
- Effective global AI governance relies especially strongly — at least for now — on coordinated action between the leading AI developer countries. The argument here is that risks from AI emanate mostly from the small set of countries that host leading AI companies. (With the exception of China, all of these countries are in the Global North.) Therefore, at least for the time being, having this small set of countries converge on responsible policies may be sufficient to mitigate much of the global risk from AI.
- Policy consensuses initially reached by leading AI developer countries can later spread to other countries. There is precedent for countries adopting or taking heavy inspiration from governance frameworks developed within smaller groups. Notable examples include the EU’s General Data Protection Regulation (GDPR), which produced a so-called “Brussels Effect” globally, and the OECD’s plan to implement a global minimum tax rate, which was initially discussed within the OECD and G20 before eventually expanding to include more than 130 countries.
The value of including the Global South
Although the above arguments have merit, there are also several strong — and seemingly under-appreciated — arguments for including Global South countries in key discussions. These arguments highlight the importance of early inclusion for securing future buy-in, preventing the emergence of competing coalitions, drawing on additional expertise, and avoiding the ethical problems inherent in exclusion.
- In the future — even if not immediately — the success of global AI governance regimes will probably depend on the participation of many Global South countries. AI capabilities tend to diffuse over time, as technological progress makes AI development cheaper and easier, technical knowledge spreads, and key digital resources become freely available online. This means that AI capabilities that only a small number of countries possess today will probably eventually be possessed by a much larger portion of the globe. For certain policy issues, international governance regimes can also be undermined by even a single relevant country failing to implement effective policies. For example, if a single country fails to prevent the publication of biological design tools that are useful for building biological weapons, then this single failure could have global implications.2 Ultimately, in most cases, exclusive AI governance regimes will probably fail if they do not eventually expand to include a broader range of states.
- Including Global South countries in governance discussions now can help to secure their committed participation in the future. Early inclusion functions as a form of diplomatic capital: countries that are actively engaged and feel valued in initial stages are more likely to stay involved and committed over the long term. Early inclusion also ensures that initial agreements do not unintentionally lock in features or framings that will make it much harder to achieve broader buy-in later on. A number of past global governance failures, such as the Multilateral Agreement on Investment (MAI) and the Anti-Counterfeiting Trade Agreement (ACTA), illustrate the risks of early exclusion. Both initiatives, led by major economies such as the UK and the US, seemingly faltered due to their exclusionary approach, which fostered mistrust and skepticism among sidelined countries. A similar sense of mistrust could ultimately hamper global AI governance efforts, particularly if initial frameworks emphasize issues — such as risks from technology proliferation — that are seen to be in tension with economic development.
- Including Global South countries in governance discussions now can preempt the emergence of competing coalitions. If Western countries exclude Global South countries from international governance dialogues, this would leave a vacuum that could be filled by other geopolitical actors who are making concerted efforts to extend their influence. For instance, China has been diligently fostering relationships with Global South countries through ambitious initiatives like the Belt and Road. Moreover, China has recently taken a significant step by announcing the formation of an AI study group within the BRICS alliance. By neglecting to involve these nations, countries such as the US and UK might inadvertently cede influence to competitors who are more attentive to these emerging voices. This risks missing an opportunity to build a more comprehensive and unified global coalition.3
- Including Global South countries can provide an additional source of valuable expertise. These countries — although they do not host leading AI companies — do collectively possess a great deal of expertise in policy and technology. Some of these countries have also developed unique AI expertise through the distinctive roles they play in the AI supply chain, for instance by providing services such as data gathering and labeling. Furthermore, many Global South countries have confronted a spectrum of AI-related challenges less frequently encountered in developed countries. Including experts from these countries can therefore provide valuable additional insights into the multifaceted risks posed by AI.
- Since governance regimes crafted by leading AI developer countries will impact Global South countries, it is ethically important to give them a say in the design of these regimes. The AI products that leading developer countries produce have global effects. If a leading developer country releases a system that can be used to spread misinformation, for instance, then it may be used to spread misinformation anywhere in the world. In general, the misuse potential, biases, employment effects, and safety issues of systems released by these countries can affect all countries. On the other hand, the opportunities they offer for productivity growth, education, and healthcare can — at least with sufficient support — be harnessed by all countries as well. Although pragmatic considerations cannot be ignored, policymakers also cannot ignore the ethical issues inherent in excluding most of the world’s countries from conversations that will affect them deeply.
Conclusion
Many policymakers in developed countries have a preference for discussing international AI governance in exclusive forums. Although there are arguments that support this preference, there are also powerful — and seemingly underappreciated — arguments for ensuring that a substantial portion of early conversations include countries from the Global South. Early inclusion is vital for securing future buy-in, preventing the emergence of competing coalitions, drawing on additional expertise, and avoiding the ethical problems inherent in exclusion.
There is, of course, a time and place for tight-knit discussions between developed countries or between great powers. Nonetheless, it would be an important mistake to exclude the Global South from the AI governance discussions that matter most. The benefits of efficiency must be balanced with the benefits of inclusivity.
The author of this piece would like to thank Ben Garfinkel and Cullen O’Keefe for their feedback.
She can be contacted at sn***@ca*.uk.