Summary
- There is an overwhelming consensus for careful management of AI in Europe and the United States. Across the two regions, 91% of respondents agree that “AI is a technology that requires careful management”.
- The share of people who agree is growing. Agreement in the United States that AI needs to be managed carefully has risen by eight percentage points since 2018.
- The strength of agreement is also increasing. Over the last five years, more individuals in both Europe and the United States say they “totally agree” that AI needs to be managed carefully.
A new cross-cultural survey of public opinion
Previous research by the Centre for the Governance of AI on public opinion of artificial intelligence (AI) explored the US public’s attitudes in 2018 [1]. A new cross-cultural public opinion survey was conducted with collaborators at Cornell University, the Council on Foreign Relations, Syracuse University, and the University of Pennsylvania. We surveyed a representative sample of 13,350 individuals across ten European countries (N = 9,350) and the United States (N = 4,000) in January and February 2023. To give more timely updates on our research before the final report’s release, we will publish a series of preliminary results for select questions. In this post, we show that there is a resounding consensus across the European and US publics that AI is a technology that requires careful management.
There is an overwhelming consensus that AI requires careful management
We asked respondents to what extent they agreed with the statement that “Artificial Intelligence (AI) is a technology that requires careful management.” They could respond that they totally agree, tend to agree, tend to disagree, totally disagree, or don’t know; these answer options were kept aligned with previous versions of this question in other surveys [1, 2].
Across the entire sample, weighted by the sample sizes of the United States and Europe, 91% agreed with the statement that “Artificial Intelligence (AI) is a technology that requires careful management.” The survey defined AI as “computer systems that perform tasks or make decisions that usually require human intelligence.” In Europe, 92% agreed that AI is a technology that requires careful management. In the United States, 91% agreed.
The strength of this consensus is growing
Although the above finding mirrors previous findings in the United States and Europe, the level of agreement appears to have increased over the last five years.
There is an increase in total agreement in the United States. In 2018, 83% of the US public agreed with the statement, with no significant difference in responses whether the question asked about AI, robots, or AI and robots [1]. This is eight percentage points below the 91% who agree in the United States five years later. The strength of agreement has also increased in the United States since 2018: in 2018, 52% of individuals “totally” agreed with the statement, a figure that rose to 71% in 2023, a 19 percentage point increase. We also see a six percentage point decrease in the share of individuals who responded “I don’t know” to this question in 2023 compared to 2018, suggesting that individuals have become less uncertain about their views on this topic.
In Europe, the overall increase in agreement is smaller. In 2017, a Eurobarometer study of almost 28,000 members of the European public found that 88% agreed that AI and robots need to be carefully managed (European Commission, 2017). In our 2023 data, 92% agreed that this was the case for AI, an increase of roughly four percentage points over six years. The strength of agreement in Europe has increased more starkly than total agreement: the Eurobarometer found that 53% totally agreed in 2017, compared with 68% in our 2023 survey. “I don’t know” responses remained broadly consistent between 2017 and 2023.
Overall, the results demonstrate an increasingly strong consensus within and across the United States and Europe that AI should be carefully managed.
Conclusion
The results show that the European and US publics are increasingly aware of how important it is to carefully manage AI technology. We demonstrate the growing agreement with this statement over time, as we have tracked responses longitudinally across multiple versions of the survey [3]. At the very least, the public does not seem to take a “move fast and break things” [4] approach to how AI should be managed. This finding suggests that AI developers, policymakers, academic researchers, and civil society groups need to figure out what kind of management or regulation of AI technology would meet the demands of this public consensus.
In our upcoming report, we will examine how familiarity with and knowledge of AI technology, along with other demographic predictors, relate to our findings. We will also evaluate how a US panel who were asked this question in 2018 responds five years later. Finally, we will take a closer look at questions such as which AI governance issues the public is most aware of, which worry them the most, and what they predict the effects of AI will be over the coming decades.
Citation
Dreksler, N., McCaffary, D., Kahn, L., Mays, K., Anderljung, M., Dafoe, A., Horowitz, M. C., & Zhang, B. (2023, April 17). Preliminary survey results: US and European publics overwhelmingly and increasingly agree that AI needs to be managed carefully. Centre for the Governance of AI. www.governance.ai/post/increasing-consensus-ai-requires-careful-management.
For more information or questions, contact no************@go********.ai
Acknowledgements
We thank many colleagues at the Centre for the Governance of AI for their input into the design of the survey questions. Thank you in particular to Carolyn Ashurst, Joslyn Barnhart-Trager, Jeffrey Ding, and Robert Trager for the in-depth workshopping and help in designing the questions. Thank you to the people at YouGov and DeltaPoll who were incredibly patient and helpful during the process of completing such a large-scale survey: thank you Caitlin Collins, Joe Twyman, and Marissa Shih, and those working behind the scenes. For comments and edits on this blog post, we would like to thank Ben Garfinkel. For help with translations, we thank the professional translation firms employed by DeltaPoll and a group of helpful people who checked these for us: Jesus Lopez, Ysaline Bourgine, Jacky Dreksler, Olga Makoglu, Marc Everin, Patryck Jarmakowicz, Sergio R., Michelle Viotti, and Andrea Polls.
Appendix
Tables of top-line results
Methodology
Sample
US. A representative adult sample (N = 4,000) of the US public was surveyed online by YouGov between 17th January 2023 and 8th February 2023. This included 775 respondents who were re-contacted from a previous survey in 2018 (Zhang & Dafoe, 2019). The final sample was matched down from 4,095 respondents to a sampling frame on gender, age, race, and education constructed by stratified sampling from the full 2019 American Community Survey (ACS) 1-year sample, with selection within strata by weighted sampling with replacement (using the person weights on the public use file). Of these, 4,000 respondents did not fail both attention checks and were retained. YouGov conducted additional verification and attention checks of its users.
Europe. A representative adult sample of 10,500 respondents across ten European countries was surveyed between 17th January 2023 and 28th February 2023 by DeltaPoll. After excluding individuals who failed both attention checks, the total sample size was 9,350 (France = 872, Germany = 870, Greece = 455, Italy = 879, the Netherlands = 452, Poland = 896, Romania = 456, Spain = 888, Sweden = 438, the United Kingdom = 3,144). DeltaPoll conducted additional metadata verification of the survey respondents.
Analysis
Respondents who failed both attention checks were removed from the analysis. In the final report, we will present analysis for all questions and more detailed breakdowns by demographics and statistical tests of the relationships between variables and differences in group responses. For the United States, we will also be able to compare whether a panel sample of 775 respondents has changed their views since 2018. Upcoming analyses can be taken from the pre-analysis plan uploaded to OSF: https://osf.io/rck9p/.
US. The results for the United States were weighted using the weights supplied by YouGov. YouGov reports that “the matched cases were weighted to the sampling frame using propensity scores. The matched cases and the frame were combined and a logistic regression was estimated for inclusion in the frame. The propensity score function included age, gender, race/ethnicity, years of education, and region. The propensity scores were grouped into deciles of the estimated propensity score in the frame and post-stratified according to these deciles. The weights were then post-stratified on 2016 and 2020 Presidential vote Choice [in the case of the 2023 data], and a four-way stratification of gender, age (4-categories), race (4-categories), and education (4-categories), to produce the final weight.”
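The decile post-stratification step described in the quote above can be sketched as follows. This is an illustrative outline only, not YouGov’s actual code: it assumes propensity scores have already been estimated (e.g., by the logistic regression described), and the function name and toy data are hypothetical. Each sample case is weighted by the ratio of the frame’s to the sample’s share of cases in its propensity-score decile.

```python
# Illustrative sketch (not YouGov's implementation) of weighting sample cases
# by frame-vs-sample shares within propensity-score deciles.

def decile_poststrat_weights(sample_scores, frame_scores, n_bins=10):
    """Return one weight per sample case, given propensity scores for the
    sample and for the sampling frame (scores assumed already estimated)."""
    # Decile cut points taken from the frame's score distribution.
    ranked = sorted(frame_scores)
    cuts = [ranked[len(ranked) * k // n_bins] for k in range(1, n_bins)]

    def bin_of(score):
        # A score's decile = number of cut points at or below it.
        return sum(1 for c in cuts if c <= score)

    frame_counts = [0] * n_bins
    for s in frame_scores:
        frame_counts[bin_of(s)] += 1
    sample_counts = [0] * n_bins
    for s in sample_scores:
        sample_counts[bin_of(s)] += 1

    weights = []
    for s in sample_scores:
        b = bin_of(s)
        frame_share = frame_counts[b] / len(frame_scores)
        sample_share = sample_counts[b] / len(sample_scores)
        weights.append(frame_share / sample_share)
    return weights

# Toy example: if sample and frame scores are distributed identically,
# every weight comes out 1.0.
sample_scores = [i / 20 for i in range(20)]
frame_scores = [i / 40 for i in range(40)]
weights = decile_poststrat_weights(sample_scores, frame_scores)
```

In practice the resulting weights are further post-stratified (on vote choice and demographics, as described above) before use.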
Europe. We used the R package autumn (harvest function) to generate the weights based on country-level age and gender targets supplied by DeltaPoll. harvest generates weights by iterative proportional fitting (raking), as described in DeBell and Krosnick (2009). Where needed, we supplemented these targets with data from an Ipsos (2021) survey to determine targets for the non-binary third gender option and the “prefer not to say” response. These weights were combined with a population size weight generated from the Eurostat European Union Labour Force Survey, following the European Social Survey’s procedure for calculating population size weights.
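Iterative proportional fitting (raking) can be sketched in a few lines. The sketch below illustrates the general technique, not autumn’s harvest implementation or API; the variable names, categories, and targets are hypothetical. Each pass rescales the weights so that the weighted share of each category matches its population target, cycling over the target variables until the weights stabilise.

```python
# Minimal sketch of raking (iterative proportional fitting); illustrative
# only, not the autumn package's actual algorithm or interface.

def rake(rows, targets, n_iter=100):
    """rows: list of dicts of categorical responses per respondent.
    targets: {variable: {category: population proportion}}.
    Returns one weight per row, normalised to mean 1."""
    n = len(rows)
    w = [1.0] * n
    for _ in range(n_iter):
        for var, props in targets.items():
            # Current weighted total for each category of this variable.
            totals = {}
            for row, wi in zip(rows, w):
                totals[row[var]] = totals.get(row[var], 0.0) + wi
            grand = sum(totals.values())
            # Rescale so each category's weighted share hits its target.
            for i, row in enumerate(rows):
                w[i] *= props[row[var]] * grand / totals[row[var]]
    mean = sum(w) / n
    return [wi / mean for wi in w]

# Hypothetical sample that under-represents younger respondents and men.
rows = [
    {"age": "18-34", "gender": "f"},
    {"age": "18-34", "gender": "m"},
    {"age": "35+", "gender": "f"},
    {"age": "35+", "gender": "m"},
    {"age": "35+", "gender": "m"},
]
targets = {
    "age": {"18-34": 0.4, "35+": 0.6},
    "gender": {"f": 0.5, "m": 0.5},
}
weights = rake(rows, targets)
```

After raking, the weighted marginal distributions of age and gender match the supplied targets even though the raw sample does not.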
Randomisation
Generally, the order of survey questions and items was randomised in the survey. The full survey flow will be released along with the full report on OSF.
Additional information
Ethics approval
This study was ethically approved by the Institutional Review Boards of the University of Oxford (# 508-21), Cornell University (# 2102010107), and Syracuse University (# 22-045). This study received an exemption from the University of Pennsylvania Institutional Review Board (Protocol # 828933). Informed consent was required from each respondent before completing the survey.
Materials and code
The full materials and code will be made available when the full report is published and will be uploaded on OSF: https://osf.io/rck9p/. Upcoming analyses can be taken from the pre-analysis plan uploaded to OSF. The full survey draft and translations, conducted by professional translation firms, will also be made available.
Conflicts of interest and transparency
The authors declare no conflicts of interest. For full transparency, we would like to report the following professional associations:
- Markus Anderljung, Centre for the Governance of AI, Center for a New American Security
- Allan Dafoe, DeepMind, Centre for the Governance of AI, Cooperative AI Foundation
- Noemi Dreksler, Centre for the Governance of AI
- Michael C. Horowitz, University of Pennsylvania
- Lauren Kahn, Council on Foreign Relations
- Kate Mays, Syracuse University
- David McCaffary, Centre for the Governance of AI
- Baobao Zhang, Syracuse University, Cornell University, Centre for the Governance of AI
Allan Dafoe conducted this research in an academic capacity at the Centre for the Governance of AI; he joined DeepMind partway through the project. Michael C. Horowitz went on a leave of absence to the United States Department of Defense during the course of the research but remained a Professor at the University of Pennsylvania. Neither Allan Dafoe nor Michael C. Horowitz had veto power over the contents or sample of the survey, nor over how the research was reported. The project was fully scoped, and funding was received, before these changes in professional association took place. Neither had, nor has, access to the data before its public release. Noemi Dreksler, employed by the Centre for the Governance of AI, led the running of the survey, determined whether there were any conflict-of-interest issues, and had final say on the content of the survey.
Survey questions
The blog post reports preliminary results from the following survey questions:
Definition of AI. The definition of AI used in the survey was as follows:
Artificial Intelligence (AI) refers to computer systems that perform tasks or make decisions that usually require human intelligence. AI can perform these tasks or make these decisions without explicit human instructions.
Careful AI management. This question was originally adapted from a 2017 Eurobarometer survey (European Commission, 2017) and also appeared on the 2018 survey by Zhang & Dafoe (2019).
Please tell me to what extent you agree or disagree with the following statement.
“Artificial Intelligence (AI) is a technology that requires careful management.”
- Totally agree (2)
- Tend to agree (1)
- Tend to disagree (-1)
- Totally disagree (-2)
- I don’t know (-88)
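Given the response codes listed above and the survey weights described in the Methodology section, the top-line shares reported in this post can be computed as a weighted tally. The sketch below is illustrative only (not the authors’ analysis code); the function name and toy data are hypothetical.

```python
# Illustrative sketch: turning coded responses (2/1 = agree, -1/-2 = disagree,
# -88 = don't know) and survey weights into top-line shares.

def topline(codes, weights):
    """Return the weighted share of agree / disagree / don't-know responses."""
    total = sum(weights)
    agree = sum(w for c, w in zip(codes, weights) if c in (2, 1))
    disagree = sum(w for c, w in zip(codes, weights) if c in (-1, -2))
    dont_know = sum(w for c, w in zip(codes, weights) if c == -88)
    return {
        "agree": agree / total,
        "disagree": disagree / total,
        "dont_know": dont_know / total,
    }

# Toy example with equal weights: three agree, one disagree, one don't know.
shares = topline([2, 2, 1, -1, -88], [1.0] * 5)
```

With real data, the weights would be the survey weights described in the Methodology section rather than equal weights.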
References
DeBell, M., & Krosnick, J. A. (2009). Computing Weights for American National Election Study Survey Data. ANES Technical Report Series, no. nes012427. Ann Arbor, MI, and Palo Alto, CA: American National Election Studies. Available at http://www.electionstudies.org
European Commission. (2017). Attitudes towards the impact of digitisation and automation on daily life: Report. https://data.europa.eu/doi/10.2759/835661
European Union (n.d.). About Eurobarometer. Eurobarometer. https://europa.eu/eurobarometer/about/eurobarometer
European Union (n.d.). Employment and unemployment (LFS) database. Eurostat. https://ec.europa.eu/eurostat/web/lfs/database
Ipsos (2021). LGBT+ Pride 2021 Global Survey. Ipsos report. https://www.ipsos.com/sites/default/files/LGBT%20Pride%202021%20Global%20Survey%20Report%20-%20US%20Version.pdf
Rudkin, A. (n.d.). autumn. R-package. Available at https://github.com/aaronrudkin/autumn
Zhang, B., & Dafoe, A. (2019). Artificial Intelligence: American Attitudes and Trends. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3312874