This post, authored by Elizabeth Seger, describes and compares four different meanings of “AI democratisation” as highlighted in the author’s recent paper “Democratising AI: Multiple Meanings, Goals, and Methods”. It is part of a larger project by the author on the democratisation of AI.
GovAI research blog posts represent the views of their authors, rather than the views of the organisation.
What is “AI Democratisation”?
In recent months, discussion of “AI democratisation” has surged. AI companies around the world – such as Stability AI, Meta, Microsoft, and Hugging Face – are talking about their commitment to democratising AI, but it’s not always clear what they mean. The term “AI democratisation” seems to be employed in a variety of ways, causing commentators to speak past one another when discussing the goals, methodologies, risks, and benefits of AI democratisation efforts.
This post describes four different notions of AI democratisation currently in use: democratisation of AI use, democratisation of AI development, democratisation of AI benefits, and democratisation of AI governance. Although these different concepts of democratisation often complement each other, they sometimes conflict. For example, if the public prefers that access to certain kinds of AI systems be restricted, then the “democratisation of AI governance” may call for access restrictions – but enacting these restrictions may hinder the “democratisation of AI development”.
The purpose of this post is to illustrate the multifaceted and sometimes conflicting nature of AI democratisation and provide a foundation for more productive conversations.
Democratisation of AI Use
When people speak about democratising some technology, they typically refer to democratising its use – making it easier for a wide range of people to use the technology. For example, the “democratisation of 3D printers” refers to how, over the last decade, 3D printers have become much easier for the general public to acquire, build, and operate.
The same meaning has been applied to the democratisation of AI. Stability AI, for instance, has been a vocal champion of AI democratisation. The company proudly describes its main product, the image generation model Stable Diffusion, as “a text-to-image model that will empower billions of people to create stunning art within seconds.” Microsoft similarly claims to be undertaking an ambitious effort “to democratize Artificial Intelligence (AI), to take it from the ivory towers and make it accessible to all.” A salient part of its plan is “to infuse every application that we interact with, on any device, at any point in time, with intelligence.”
Overall, efforts to democratise AI use involve reducing the costs of acquiring and running AI tools and providing intuitive interfaces to facilitate human-AI interaction without the need for extensive training or technical know-how.
Democratisation of AI Development
However, when the AI community talks about democratising AI, it rarely limits its focus to the democratisation of AI use. The excitement seems primarily to be about democratising AI development – that is, helping a wider range of people contribute to AI design and development processes.
Often, the idea is that tapping into a global community of AI developers will accelerate innovation and facilitate the development of AI applications that cater to diverse interests and needs. For example, Stability AI CEO Emad Mostaque advocates that “everyone needs to build [AI] technology for themselves…. It’s something that we want to enable because nobody knows what’s best [for example] for people in Vietnam besides the Vietnamese.”1 Toward this end, Stability AI has decided to open source Stable Diffusion: anyone who agrees to the terms of use can download the model and modify it on their own computer. It is a move, Mostaque explains, that “puts power back into the hands of developer communities and opens the door for ground-breaking applications” by enabling widespread contributions to the technology’s development. The company motto reads “AI by the people, for the people”.
Other efforts to democratise AI development aim to widen the community of AI developers by making it easier for people with minimal programming experience and little familiarity with machine learning to participate. For example, a second aspect of Microsoft’s AI democratisation effort focuses on sharing AI’s power with the masses by “putting tools ‘…in the hands of every developer, every organisation, every public sector organisation around the world’ to build the AI systems and capabilities they need.” Towards this end, Microsoft – and similarly Google, H2O, and Amazon – have developed “no-code” tools that allow people to build models that are personalised to their own needs without prior coding or machine learning experience.
Overall, various factors relevant to the democratisation of AI development include the accessibility of AI models and the computational resources used to run them, the size of AI models (since smaller models require less compute to run), opportunities for aspiring developers to upskill, and the provision of tools that enable those with less experience and expertise to create and implement their own machine learning applications.
Democratisation of AI Benefits
A third sense of “AI democratisation” refers to democratising AI benefits, which is about facilitating the broad and equitable distribution of benefits that accrue to communities that build, control, and adopt advanced AI capabilities.2 Discussion tends to focus on the redistribution of profits generated by AI products.
The notion is nicely articulated by Microsoft’s CTO Kevin Scott: “I think we should have objectives around real democratisation of the technology. If the bulk of the value that gets created from AI accrues to a handful of companies in the West Coast of the United States, that is a failure.”3 Though DeepMind does not employ “AI democratisation” terminology, CEO Demis Hassabis expresses a similar sentiment. As reported by TIME, Hassabis believes the wealth generated by advanced AI technologies should be redistributed. “I think we need to make sure that the benefits accrue to as many people as possible – to all of humanity, ideally.”
Profits might be redistributed, for instance, through the state, philanthropy, or the marketplace.
Democratisation of AI Governance
Finally, some discussions about AI democratisation refer to democratising AI governance. AI governance decisions often involve balancing AI-related risks and benefits to determine if, how, and by whom AI should be used, developed, and shared. The democratisation of AI governance is about distributing influence over these decisions to a wider community of stakeholders.
Motivation for democratising AI governance largely stems from concern that individual tech companies hold unchecked control over the future of a transformative technology and too much freedom to decide for themselves what constitutes safe and responsible AI development and distribution. For example, a single actor deciding to release an unsafe and powerful AI model could cause significant harm to individuals, to particular communities, or to society as a whole.
One of Stability AI’s stated reasons for open sourcing Stable Diffusion is to avoid just such an outcome. As CEO Emad Mostaque told the New York Times, “We trust people, and we trust the community, as opposed to having a centralised, unelected entity controlling the most powerful technology in the world.”
However, upon closer inspection, there is an irony here. In declaring that the company’s AI models will be made open source, Stability AI created a situation in which a single tech company made a major AI governance decision: the decision that a dual-use AI system should be made freely accessible to all. (Stable Diffusion is considered a “dual-use technology” because it has both beneficial and damaging applications. It can be used to create beautiful art or easily modified, for instance, to create fake and damaging images of real people.) It is not clear in the end that Stability AI’s decision to open source their models was actually a step forward for the democratisation of AI governance.
This case illustrates an important point: unlike other forms of AI democratisation, the democratisation of AI governance is not straightforwardly about accessibility.
Democratising AI Governance Is About Introducing Democratic Processes
For the first three forms of democratisation discussed in this piece, “democratisation” is almost always used synonymously with “improving accessibility”. The democratisation of AI use is about making AI systems accessible for everyone to use. The democratisation of AI development is about making opportunities to participate in AI development widely accessible. The democratisation of AI benefits is mostly about distributing access to the wealth accrued through AI development, use, and control.
However, importantly, the democratisation of AI governance involves the introduction of democratic processes. Democracy is not about giving every individual the power to do whatever they would like. Rather, it involves introducing processes to facilitate the representation of diverse and often conflicting beliefs, opinions, and values into decisions about how people and their actions are governed. Indeed, very often democratic decisions place restrictions on access and individual choice. Such is the case, for instance, with speed limits, firearm-ownership restrictions, and medication access controls.
Accordingly, the democratisation of AI governance is not necessarily achieved by improving AI model accessibility so that everyone has the opportunity to use or build on AI models as they like. The democratisation of AI governance needs to involve careful consideration about how diverse stakeholder interests and values can be effectively elicited and incorporated into well-reasoned AI governance decisions.
How exactly such democratic processes should take form is too large a topic to cover here, but it is one that warrants investigation as part of a comprehensive AI democratisation effort. Promising work in this space investigates, for instance, the use of citizen assemblies and deliberative tools and digital platforms to help institutions democratically navigate challenging and controversial issues in tech development and regulation.
Conclusion
This post has outlined four different notions of “AI democratisation”: the democratisation of AI use, the democratisation of AI development, the democratisation of AI benefits, and the democratisation of AI governance. Although these different forms of AI democratisation often overlap in practice, they sometimes come apart. For example, we have just seen how efforts to democratise the development of AI – for which improving AI model accessibility is key – may conflict with the democratisation of AI governance.
We should be wary, therefore, of using the term “democratisation” too loosely or, as is often the case, as a stand-in for “all things good”. We need to think carefully and specifically about what a given speaker means when they express a commitment to “democratising AI”. What goals do they have in mind? Does their proposed approach conflict with other AI democratisation goals – or with any other important ethical goals?
If we want to move beyond ambiguous commitments and towards productive discussions of concrete policies and trade-offs, then it’s time to tighten our language and clarify our intentions.
Acknowledgements
I would like to thank Ben Garfinkel, Toby Shevlane, Ben Harack, Emma Bluemke, Aviv Ovadya, Divya Siddarth, Markus Anderljung, Noemi Dreksler, Anton Korinek, Guive Assadi, and Allan Dafoe for their feedback and helpful discussion.