

The governance of AI is, in my view, the most important global issue of the coming decades, and it remains highly neglected. It is heartening to see how rapidly this field is growing, and exciting to be part of that growth. This report provides a short summary of our work in 2018, with brief notes on our plans for 2019.

2018 has been an important year for GovAI. We are now a core research team of 5 full-time researchers and a network of research affiliates. Most importantly, we’ve had a productive year, producing over 10 research outputs, including reports (such as the AI Governance Research Agenda and The Malicious Use of Artificial Intelligence), academic papers (e.g. When Will AI Exceed Human Performance? and Policy Desiderata for Superintelligent AI), and manuscripts (including How Does the Offense-Defense Balance Scale? and Nick Bostrom’s Vulnerable World Hypothesis).

We have ambitious aspirations for growth going forward. Our recently added 1.5 FTE of project management capacity, shared between Jade Leung and Markus Anderljung, will hopefully enable this growth. We are always looking to help new talent get into the field of AI governance; if you’re interested, visit www.governance.ai for updates on our latest opportunities.

Thank you to the many people and institutions that have supported us: our institutional home, the Future of Humanity Institute at the University of Oxford; our funders, including the Open Philanthropy Project, the Leverhulme Trust, and the Future of Life Institute; and the many excellent researchers who contribute to our conversation and work. We look forward to seeing what we can all achieve in 2019.

Allan Dafoe
Director, Centre for the Governance of AI
Future of Humanity Institute
University of Oxford

Below is a summary of our research, public engagement, and team growth.

Research

On the research front, we have been pushing forward a number of individual and collaborative research projects. Below is a summary of some of the most significant pieces of research published over the past year.

AI Governance: A Research Agenda
GovAI/FHI Report.
Allan Dafoe

The field of AI governance is in its infancy and rapidly developing. Our research agenda is the most comprehensive attempt to date to introduce and orient researchers to the space of plausibly important problems in the field. The agenda offers a framing of the overall problem, an attempt to comprehensively pose the questions that could prove pivotal, and references to published articles relevant to these questions.

The Malicious Use of Artificial Intelligence
GovAI/FHI Report.
Miles Brundage et al. [incubated and largely prepared by GovAI/FHI]

Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. The report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats.

The report was featured in over 50 outlets, including the BBC, The New York Times, The Telegraph, The Financial Times, Wired, and Quartz.

Deciphering China’s AI Dream
GovAI/FHI Report.
Jeffrey Ding

The Chinese government has made the development of AI a top-level strategic priority, and Chinese firms are investing heavily in AI research and development. This report contextualizes China’s AI strategy with respect to past science and technology plans, and it links features of China’s technology policy with the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, it benchmarks China’s current AI capabilities by developing a novel index to measure any country’s AI potential, and it highlights the potential implications of China’s AI dream for issues of AI safety, national security, economic development, and social governance.

Cited by dozens of outlets, including The Washington Post, Bloomberg, MIT Tech Review, and South China Morning Post, the report will form the basis for further research on China’s AI development.

The Vulnerable World Hypothesis
Manuscript.
Nick Bostrom

The paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology.

Discussed in the Financial Times.

How Does the Offense-Defense Balance Scale?
Manuscript.
Ben Garfinkel and Allan Dafoe

The offense-defense balance is a central concept for understanding the international security implications of new technologies. The paper asks how this balance scales, meaning how it changes as investments in a conflict increase. To do so, it offers a novel formalization of the offense-defense balance and explores models of conflict in various domains. The paper also examines the security implications of several specific military applications of AI.
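To illustrate the notion of scaling (a sketch with illustrative notation, not the paper’s own formalization, which should be consulted directly): suppose we summarize the balance at a given scale of conflict as a cost ratio,

\[
B(A) = \frac{D^{*}(A)}{A},
\]

where \(A\) is the attacker’s level of investment and \(D^{*}(A)\) is the minimum defensive investment needed to hold against it. The scaling question then asks how \(B(A)\) behaves as \(A\) grows: if \(B(A)\) increases with \(A\), conflict becomes relatively more offense-dominant at larger scales of investment; if it decreases, relatively more defense-dominant.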

Policy Desiderata for Superintelligent AI: A Vector Field Approach
In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. Oxford University Press.
Nick Bostrom, Allan Dafoe, and Carrick Flynn

The paper considers the speculative prospect of superintelligent AI and its normative implications for governance and global policy. Machine superintelligence would be a transformative development that would present a host of political challenges and opportunities. The paper identifies a set of distinctive features of this hypothetical policy context, from which it derives a correlative set of policy desiderata: considerations that should be given extra weight in long-term AI policy than in other policy contexts.

When Will AI Exceed Human Performance? Evidence from AI Experts
Published in Journal of Artificial Intelligence Research.
Katja Grace (AI Impacts), John Salvatier (AI Impacts), Allan Dafoe, Baobao Zhang, Owain Evans (Future of Humanity Institute)

In this piece, we report the results of a large survey of machine learning researchers on their beliefs about progress in AI. It was the 16th most discussed article of 2017 according to Altmetric, and was reported on by outlets including the BBC, Newsweek, New Scientist, Tech Review, ZDNet, Slate Star Codex, and The Economist.

Governing Boring Apocalypses: A New Typology of Existential Vulnerabilities and Exposures for Existential Risk Research
Published in Futures.
Hin-Yan Liu (University of Copenhagen), Kristian Cedervall Lauta (University of Copenhagen), and Matthijs Maas

This article argues that an emphasis on mitigating the hazards (discrete causes) of existential risks is an unnecessarily narrow framing of the challenge facing humanity, one which risks prematurely curtailing the spectrum of policy responses considered. By focusing on vulnerability and exposure rather than simply on existential hazards, the paper proposes a new taxonomy that captures the factors contributing to these existential risks. It argues that such “boring apocalypses” may well prove more endemic and problematic than the scenarios commonly focused on.

Syllabus on AI and International Security
GovAI Syllabus.
Remco Zwetsloot

This syllabus covers material located at the intersection of artificial intelligence and international security. It is designed to be useful to (a) people new to both AI and international relations; (b) people coming from AI who are interested in an international relations angle on these problems; and (c) people coming from international relations who are interested in working on AI.

Scaling Up Humanity: The Case for Conditional Optimism about Artificial Intelligence
In Should we fear artificial intelligence?, an in-depth analysis for the European Parliament by the Scientific Foresight Unit.
Miles Brundage

This paper makes the case for conditional optimism about AI, fleshing out the reasons one might anticipate AI being a transformative technology for humanity, possibly a transformatively beneficial one. If humanity successfully navigates the technical, ethical, and political challenges of developing and diffusing powerful AI technologies, AI may have an enormous and potentially very positive impact on humanity’s wellbeing.

Public engagement

We have been active in various public fora; you can see a sample of the presentations, keynotes, panels, and interviews that our team has engaged in here.

Allan Dafoe has been giving several talks each month, including at a hearing of the European Parliament’s Subcommittee on Security and Defence and at Oxford’s Department of Politics and International Relations, and he was featured in the documentary “Man in the Machine” by VPRO Backlight (video, at 33:00). He has also done outreach via the Future of Life Institute, Futuremakers, and 80,000 Hours podcasts.

Nick Bostrom participated in several government, private, and academic events, including a DeepMind Ethics and Society Fellows event, the Tech for Good Summit convened by French President Macron, Sam Altman’s AGI Weekend, Jeff Bezos’s MARS, the World Government Summit in Dubai, and an Emerging Leaders in Biosecurity event in Oxford, among others. In 2018 his outreach included around 50 media engagements, including BBC radio and television, podcasts for SYSK and WaitButWhy, print interviews, and multiple documentary filmings.

Our other researchers have also participated in many public fora. On the back of his report on China’s AI ambitions, Jeffrey Ding has given interviews to outlets such as the BBC and was recently invited to lecture to DC policy-making circles at Georgetown University. Additionally, he runs the ChinAI newsletter, a weekly set of translations of writings on AI policy and strategy by Chinese thinkers, and has contributed to MacroPolo’s ChinAI, which presents interactive data on China’s AI development. Matthijs Maas presented work on “normal accidents” in AI at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, and at a Cass Sunstein masterclass on human error and AI (video here). Sophie-Charlotte Fischer was recently invited to China as part of a German-Chinese young professionals’ program on AI, and Jade Leung has presented her research at conferences in San Francisco and London, notably on AI regulation at the latest Deep Learning Summit.

Moreover, we have participated in the Partnership on AI’s working groups on Safety-Critical AI; Fair, Transparent, and Accountable AI; and AI, Labor, and the Economy. The team has also interacted considerably with the effective altruism community, including a total of six talks at this year’s EA Global conferences.

Members of our team have also published in select media outlets. Remco Zwetsloot, Helen Toner, and Jeffrey Ding published “Beyond the AI Arms Race: America, China, and the Dangers of Zero-Sum Thinking”, a review of Kai-Fu Lee’s “AI Superpowers: China, Silicon Valley, and the New World Order”, in Foreign Affairs. In addition, Jade Leung and Sophie-Charlotte Fischer published a piece in the Bulletin of the Atomic Scientists on the US Defense Department’s Joint Artificial Intelligence Center.

Team and Growth

We have large ambitions for growth, and increasing resources to support them. The Future of Humanity Institute has recently been awarded £13.3 million from the Open Philanthropy Project, we have received $276,000 from the Future of Life Institute, and we have collaborated with Baobao Zhang on a $250,000 grant from the Ethics and Governance of Artificial Intelligence Fund.

The team has grown substantially. We are now a core research team of 5 full-time researchers, with a network of research affiliates who are often in residence, coming to us from institutions across the U.S. and Europe, such as ETH Zurich and Yale University. To signal our growth to date, as well as our planned growth trajectory, we are now the “Centre for the Governance of AI”, housed at the Future of Humanity Institute.

We continue to receive many applications and expressions of interest from researchers across the world who are eager to join our team. We are working hard with the operations team at FHI to expand our hiring pipeline capacity so that we can meet this demand.

On the operations front, we now have 1.5 FTE of project management capacity, shared between two recent hires, Jade Leung and Markus Anderljung, which has been an excellent boost to our bandwidth. FHI’s recently announced DPhil scholarship program and the Research Scholars Program are both initiatives we look forward to growing in the coming years to bring in more research talent.


