21 May

Noah Feldman, Sophie-Charlotte Fischer, and Gillian Hadfield on the Design of Facebook’s Oversight Board


Noah Feldman is an American author, columnist, public intellectual, and host of the podcast Deep Background. He is the Felix Frankfurter Professor of Law at Harvard Law School and Chairman of the Society of Fellows at Harvard University. His work is devoted to constitutional law, with an emphasis on free speech, law & religion, and the history of constitutional ideas.

Sophie-Charlotte Fischer is a PhD candidate at the Center for Security Studies (CSS), ETH Zurich and a Research Affiliate at the AI Governance Research Group. She holds a Master’s degree in International Security Studies from Sciences Po Paris and a Bachelor’s degree in Liberal Arts and Sciences from University College Maastricht. Sophie is an alumna of the German National Academic Foundation.

Gillian Hadfield is the director of the Schwartz Reisman Institute for Technology and Society. She is the Schwartz Reisman Chair in Technology and Society, professor of law and of strategic management at the University of Toronto, a faculty affiliate at the Vector Institute for Artificial Intelligence, and a senior policy advisor at OpenAI. Her current research is focused on innovative design for legal and regulatory systems for AI and other complex global technologies; computational models of human normative systems; and working with machine learning researchers to build ML systems that understand and respond to human norms.

You can watch a recording of the event here or read the transcript below:

Allan Dafoe  00:09

Okay, welcome. Hopefully you can all hear and see us. So I am Allan Dafoe, Director of the Center for the Governance of AI, which we often abbreviate GovAI, which is at the University of Oxford’s Future of Humanity Institute. Before we start today, I wanted to mention a few things. One, we are currently hiring a project manager for the center, as well as researchers at all levels of seniority, for GovAI and the rest of the Future of Humanity Institute, including those interested in further work on this topic. So, for those of you in the audience, take a look. A reminder that you can ask questions in this interface at the bottom, and you can vote on which questions you find most interesting. We can’t promise that we will answer them, but we will try to see them and integrate them into the conversation.

Okay, so we have a very exciting event scheduled. We will hear from Professor Noah Feldman about the Facebook oversight board and his views about what a meaningful review board for the AI industry would look like. Noah is a professor of law at Harvard Law School, an expert on constitutional law, and a prominent author and public intellectual. We’re also fortunate to have two excellent discussants with us today. Gillian Hadfield, who’s in my bottom right, maybe it’s the same for you, is a longtime friend of GovAI. She is the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, where she’s also professor of law and of strategic management. She also has affiliations with the Vector Institute for Artificial Intelligence and OpenAI. Gillian has produced a lot of fascinating work, including some on AI governance, and specifically, I’ll call out her work on regulatory markets for AI safety. She’s also doing some interesting work on how machine learning can learn and adapt to human norms. Our second discussant is Sophie-Charlotte Fischer. I’ve actually known Sophie from before GovAI was even established. She was a founding member of GovAI’s predecessor, which was called the Global Politics of AI Research Group; that was in 2016 at Yale, if you can believe those old days. She is currently a PhD candidate at the Center for Security Studies at ETH Zurich. And she continues to work with us as a GovAI affiliate. She has done great work on a range of topics, including the ethics and law of autonomous weapons, US export controls for AI technology, and the idea of a CERN for AI, which specifically, perhaps, one might put in Switzerland. In 2018, she was listed as one of the 100 Brilliant Women in AI Ethics. And in 2019 she was the Mercator Technology Fellow at the German Foreign Ministry. So thank you, both of you, for joining us.

Now I’ll share a little bit of background on how I came to this topic and learned about Noah Feldman. So I first learned about him, actually, from reading one of his recent books, a fairly recent book on the US founding father James Madison. At the time I was, and I still am, struck by how much of the work in AI governance has the character of a constitutional moment: we have an opportunity, it seems, to set not just norms for the future, but also to establish deeply rooted institutions, which could shape decisions for decades to come. So at the time, I wanted to learn more about James Madison, as he seemed to be one of the best examples of how someone who is deeply committed to scholarship can have a centuries-long impact through the formation of long-lasting institutions. Noah Feldman, as I understand it, came to this conversation the other way around. So in 2017, he had just finished publishing this biography of James Madison, and he was visiting the Bay Area talking to some people in the tech community, staying with his friend Sheryl Sandberg, when this insight, this constitutional insight, came to him that what Facebook needed was a Supreme Court. He sketched what that would look like; Sheryl Sandberg and Mark Zuckerberg were interested. And now, two-plus years later, the oversight board is on the cusp of starting its work and represents, in my view, a radical new experiment in corporate governance and technology governance. I find this origin story so fascinating because it shows, as with James Madison’s life, how a life of committed scholarship can suddenly, and potentially profoundly, offer useful insights that can shape history. So with that, we will now hear from Professor Noah Feldman about his thoughts on the Facebook oversight board and the governance of AI. Noah.

Noah Feldman  04:30

Thank you so much for that generous introduction, I’m really grateful for it. I have a special feeling for Oxford from when I was a doctoral student. And you say that 2017 was a long time ago, but I am of the generation for whom, when we were students at Oxford, the computer center was one room by the Old Parsonage Hotel with a bunch of mainframe computers that you could use for email. And the idea that the university, which treated my work on medieval Islamic political philosophy [garbled] talk about Aristotle and Plato than it was to talk about the Middle Ages, would eventually become a leader in spaces like the governance of AI was literally unimaginable. So I’m thrilled by that, and very excited to be here with all of you today, under the auspices of the GovAI webinar.

As you mentioned, I came to the issues here from the standpoint of governance, and specifically from the governance standpoint of constitutional design. If you think about it, constitutional design as a field is a field about the management of complex social conflicts through the creation of governance institutions. That’s not a terrible summing up of what the whole field of constitutional design is about. And as you mentioned too, I was thinking about constitutional design in a specifically American context, because of this book I wrote about James Madison, who was, after all, the chief designer of the US Constitution. But I’d also been lucky enough to work most recently in Tunisia on constitutional design there after the Arab Spring, and earlier in Iraq, although under much different and worse circumstances of US occupation. So the design issues were of recurring interest to me. But those were always in the context of states. It was always in the context of the state as the locus for the creation of a governance institution to manage some form of social conflict. And I was in fact at Stanford giving a talk at Michael McConnell’s seminar. And we’ll come back to Michael McConnell in a few moments, because some of you may know he’s one of the new chairs of this Facebook oversight board. He is a constitutional law professor and former judge, and I was speaking entirely about Madison; that was what was on my mind. And then, as you say, I was also having some conversations with people at Facebook about content moderation, because like so many other people in the field of free expression, which is one of my fields, I was trying to figure out what free expression was gonna look like in a world where more and more expression took place on platforms.
It was that juxtaposition of thoughts about the development of new challenges for content moderation and simultaneously, the idea of institutional design to manage social conflicts through constitutional mechanisms, that I think led me to think on a long bike ride up in the hills, behind Palo Alto, that actually, Facebook and other platforms could benefit by the introduction of a governance mechanism that has traditionally been used, and intensively for the last, you know, 50 or 60 years in liberal democracies, to manage the social conflict around what speech should be allowed and what speech should not be allowed, namely, the Constitutional Court or Supreme Court model. That is, in its essence, a model where there is an independent body that is not directly answerable to the primary first order decision maker that has a set of principles that are clearly articulated, on which it relies to make decisions, that transparently describes to the world why it has made the decisions that it has made via an explicit balancing of competing social values, such as the value of equality, the value of dignity, the value of safety, and the value of free expression.

And so I thought, perhaps an institution like that could be tried in the context of the private sector, of a corporation, even though essentially it had never been done before. And the reason that it hadn’t been done, I think, is largely that we imagine the institutional governance solutions associated with states as solely appropriate for the public sector, and not as appropriate for private actors or private entities. And of course, the difficulty with that restrictive view is that it then deprives us of a whole realm where serious attempts to solve institutional governance problems have been made, on the ground that, well, this is the private sector and not the public sector. If you imagine the kind of cognitive divide that people often make, they think: well, if the government is going to regulate us, then it would be appropriate for the government to use its institutional governance mechanisms. But if we’re going to be regulated by a private-sector entity, a whole other set of mechanisms is appropriate and kicks in. And that is really an artificial divide. I wouldn’t say it’s arbitrary, but it’s an artificial divide in the negative sense of the word artificial. I know with this audience, the word artificial is itself subject to deep analysis. But it’s a divide that is not necessarily valuable pragmatically; it’s simply something to treat as an opening reality that one can then explore and potentially explode. And so that’s essentially what Facebook subsequently did. And I was lucky enough to be advising them throughout these last, you know, two and three quarters years, to the point now where the oversight board is in existence; it has four co-chairs, 20 total members. One of them, as I mentioned, is Michael McConnell. Others are people of different backgrounds.
There’s a former prime minister of Denmark; there’s a prominent constitutional law professor at Columbia University Law School; there’s the dean of a law school in Colombia, who is also a special rapporteur for the United Nations on free expression. And they’re a diverse group of people from all over. The core design remains the one that basically struck me on the bike ride. And again, just to reiterate: it’s independent, its members are appointed to three-year terms that are automatically renewable, and therefore they are not hired and fired by Facebook. They are paid, but they’re not paid by Facebook; they’re paid by an independent trust that was funded by Facebook and then spun out of Facebook to become independent. Their decisions will be rendered transparently and publicly; they will give reasons, and reason-giving is hugely important in this context. And their decisions will balance competing values. And not least, in addition to its so-called community standards, which are the content moderation rules that Facebook has, Facebook has also articulated a set of high-level principles, what they call values, that function effectively as a set of constitutional values here and that are also relevant to the decisions that will be made. International legal principles will inform but not dictate results.

So that’s the basic structure of what’s going on here. I’m thrilled to answer questions about the technical sides of this, the difficulty of it, the design of it. I want to say one or two words about its purpose overall, and about two ways to look at it, a more optimistic way and a more cynical way. And then from there, I’m going to tack to talking about ways that similar or related governance models could potentially be used in other contexts, including in the context of the governance of AI. So that’s my thought roadmap. So let me start with two ways of thinking about the purpose of the oversight board; let’s call them a publicly interested way and a more cynical, corporate-interest way. So let’s start with the more publicly interested approach. Because I do constitutional law as my day job, we always have to look at everything through these two lenses, right. Every constitution in the world expresses high-flown values, and is institutionally implemented by people who often really believe in those values. And yet every constitution is also a distribution of power by real people in the context of real governments and real states, where politics and self-interest dominate decision making, as we all understand it in the real world. So for those of you who don’t move in a world where these two frames are constantly going back and forth, I just want to flag that these are the two frames that I use all the time, and I’m going to use them here.

From the publicly interested frame, it’s just really clear that crucial decisions on issues that affect billions of people should not be made, ultimately, by unelected tech founders, CEOs and COOs. And I think that’s rather obviously true in the case of free expression, as Mark Zuckerberg is the first to acknowledge: he should not be deciding himself whether the President of the United States has or has not breached some ethical principle of safety when he criticizes Black Lives Matter protesters, and therefore should have his content taken down, or whether the President of the United States running for office is participating in a political process that needs to be facilitated, and therefore what he says should be left up. That’s just much too important a decision to be left to Mark, or to Mark and Sheryl, or to the excellent teams that they nevertheless put together. I would argue that it goes even beyond those kinds of hot-button issues and extends to the more, you know, in-the-weeds but hugely important questions. What counts as hate speech? What hate speech should be prohibited? What hate speech should be permitted, because it’s necessary to have some forms of free expression? What forms of human dignity are respected by displays of the human body? What forms of human dignity might be violated by certain displays of the human body or certain human behavior or conduct? These are questions on which reasonable people can and do disagree. They’re questions that implicate major forms of social conflict. I’m not a relativist; I don’t think there are no right answers on these questions. But I do think there’s a lot of variation in what different societies might come up with as the right answers. And especially when you consider that the platforms cross social and political and legal boundaries, it just makes almost no sense for the power to make those ultimate decisions to be concentrated in just a few people.
Now, that doesn’t mean that the decisions don’t have to be made; there has to be responsibility taken. And so the objective of a devolutionary strategy, which is what the oversight board uses, is to ensure that there are people who are making these decisions who are accountable in the sense that they give reasons for their decisions, accountable in the sense that they explain transparently what they’re doing, accountable in the sense that they can be criticized, but are nevertheless not easily removable by the for-profit actors who are involved. The result of this, again speaking in terms of the public interest, should be – it may not be, but experimentally, it ought to be – some legitimacy for the decision-making process. And now what I’m talking about is legitimacy in what philosophers call the normative sense of legitimacy. That is, something is legitimate because it should be legitimate; it ought to be considered legitimate. And from a publicly interested perspective, we should all want important decisions to be made in ways and with values that ultimately serve the goal of public legitimacy. Now, let me turn briefly to the cynical perspective, the cynical, self-interested perspective. Facebook is a for-profit company; it is governed under the corporate law of the United States. And by virtue of being governed in that way, its management and its board of directors have certain fiduciary duties to its shareholders, which include the duty to make it an effective and profitable company.

If Facebook’s senior management hadn’t believed that it was in the interests of the company to devolve decision-making power on these issues away from senior management, they would actually have been in breach of their fiduciary duties to advocate and then adopt the strategy. So in that sense, when somebody says to me, and people do say to me all the time, well, Facebook just did this because it’s in Facebook’s self-interest, my answer to that is twofold. First of all, yes, that’s absolutely right; they would have been in breach of their own fiduciary obligations if they had thought they were acting against the company’s interests. And my second is: please show me any example anywhere in the world of any person or entity with power giving up power for any reason other than that they believed, in that given circumstance, they had more to gain by giving up that power than by keeping that power. I mean, this is an insight from constitutional studies. Any would-be dictator would like to just be the dictator all the time. It’s really nice to be the dictator. But we recognize that governments based on dictatorial principles are frequently, not always, but frequently unstable, and in effect lead to bad outcomes, not just for the general public, but also for the dictators, who have a bad habit of ending up dead rather than, you know, beloved and in retirement. And so systems of power that involve power sharing are always shot through with structures of self-interest.

So then that raises the question of why anybody should trust an oversight board or any other devolutionary governance experiment that is adopted by for-profit actors. We might imagine that if the state imposes something, then it would reflect the public interest, but if it’s adopted by private actors, we might say it should never be trusted. Well, part of the answer to that is that even state bodies aren’t purely publicly interested. You know, political science as a field has spent much of the last half century showing the ways that state actors, including governmental actors, are privately interested, notwithstanding that they have jobs where, in principle, they’re supposed to be answerable to the public. So there is no perfect world where everybody is perfectly publicly interested. But more importantly, the reason that the public should be able, under some circumstances, to trust a system put in place through the self-interest of corporate actors is that it is in the self-interest of those corporate actors to be trusted. And to do so in this day and age, they must impose transparency, independence and reason-giving, not because it’s their first-order preference. After all, this is not how content moderation was designed in any major platform initially, but because they realize that they have so much to lose by continuing the model that they’ve been following, and they need to try something new. So the cynical view is that in this day and age, companies can’t get away with merely appearing to devolve power or appearing to be legitimate; they have to actually go ahead and do it. You might say, well, their most effective game-theoretic strategy is to appear to be not getting away with it, but actually to be getting away with it. That might be true.
And it’s an empirical proposition to say that in this day and age, with as much scrutiny and skepticism as exists, it’s very difficult for a corporate actor to get away with that, in a way that might not have been true as recently as a quarter century ago.

Okay, let me say a word now about other contexts and other possible governance solutions. You know, having spent the better part of the last three years incredibly focused on the problem of content moderation and the solution of a governance institution modeled loosely on a Constitutional Court, I have now shifted my own attention to trying to think about other kinds of governance institutions, which could also be borrowed from different places, shapes and contexts, that might be appropriate to the governance of other kinds of social conflicts that arise in technology spaces. And here, I come close to the topic of your ongoing seminar and to your program, namely the question of the governance of AI. Now, we do have, and I actually had it right in front of my face as I was originally – not right when I was designing this thing in the very beginning, but very quickly in the process – the Google AI committee that came into existence and went out of existence in an incredibly short period of time, a story that you all know better than I do. So I had in front of me exactly what not to do from an early moment.

So we can stipulate that the model of the corporate-appointed group of potential advisors, on its own and without more, is a high-risk and unstable model to adopt, at least in circumstances where the corporate actor would react very negatively to criticisms of the membership of the board. But that doesn’t mean that there aren’t other mechanisms that are worthy of being explored. And these are other models of governance. So let me just name a few of them, and then we can talk more in our conversation about which of these might be adaptable in different circumstances to different aspects of AI governance. So one interesting model that comes not purely from the public sector, but from the educational and medical sector, is the model of the institutional review board, or IRB. Those of you who are social scientists are used to dealing with IRBs; the same will be true of those of you in the harder sciences whose work interfaces with important ethical considerations. IRBs are quasi-independent bodies, typically constituted and composed by institutions, most typically universities and hospitals, that have full authority to approve or disapprove proposed research plans or programs. Their power is enormous, as anybody who’s ever dealt with an IRB knows. It’s subject to abuse, like all great powers, and the question of how to govern IRBs is itself a rich and important question. But the IRB model is a model that, remarkably, hasn’t really been tried in the private corporate sector. Sometimes there’s overlap, because if you are a researcher at Harvard Medical School and you have a great idea, you form a company, but you also continue to do research in the university, and so you need to both go through an IRB and then discuss it with your investors. So there are some points of overlap. But we don’t really have an institutionalized IRB model in place in the private corporate sector.
Now, IRBs have something in common with the oversight board that Facebook has created, because they’re meant to be institutionally independent, but they still belong to the institution. So the Supreme Court of the United States is the Supreme Court of the United States; it’s part of the US government, but it’s also independent, and its independence is assured by certain institutional features, life tenure, etc. It’s not without government influence. We see that right now in the United States; we’re in the middle of a huge fight over our next Supreme Court appointees. So you see, there’s a politicization of one aspect of the process. But part of the reason for the intensity of that fight is that once appointed, the Justice will have complete independence.

IRBs are typically, technically, part of the university or hospital with which they’re affiliated. So in that sense, they’re part of that entity and therefore internalize some sense of responsibility, but their members typically come from the outside, and they cannot have their judgment overruled by the institutional actor that convenes them. So could corporations create IRBs on their own? One option is that corporations could create independent IRBs of their own, if they offloaded management and devolved it through a [garbled] foundation in the way that Facebook has done. That’s very expensive, and it requires long-term commitments, but it can be done. Another alternative is to have IRB-like independent entities created by third parties. Those could be nonprofit foundations that produce their own IRBs, which are then selectively employed by companies that are looking for independent judgment. Or one can also imagine, and I’m toying with trying to create one of these right now, a private entity being created, either for profit or not for profit, but a private entity not growing out of an existing foundation, that maintains an IRB, or multiple IRBs with subject-matter expertise, that can be, as it were, rented by the corporation, which says: gee, we’re going to be making the following difficult decisions about deploying our AI over the next two years or five years. We publicly commit ourselves to submit those decisions at a given juncture point to this independent IRB, which has AI subject-matter expertise alongside ethicists, stakeholder representatives and other sorts of interests. Now, there are all kinds of technical issues that need to be worked out here, which I’m happy to talk about. But I think they’re all in the realm of tractable problems.
The overall model, though, would be to actually devolve some meaningful power to these IRBs, and for their decisions to be not merely advisory, but to function as actual choke points for the corporate actor. You may ask why any corporate actor would ever agree to do that. And the answer is self-interest: the corporate actor might be aware that in order to get credibility for its decisions, it needs to have those decisions blessed by a body that can only give a meaningful blessing if it can also prohibit or block certain lines of conduct or behavior. And I think there is a game-theoretic situation where that becomes desirable and even necessary from the standpoint of the company. Transparency is a really interesting issue here. And I don’t need to tell all of you that transparency, challenging as it is in any corporate domain, is doubly or triply hard in the context of AI, where you have to deal first with proprietary technologies, but also with the, fascinating to me as an outsider to AI, conceptual problem of what counts as transparency in the case of certain machine learning functions, which may not be fully interpretable. I mean, there’s a fascinating conceptual question there. I’m sure you’ve all spent time on this. When I taught a seminar on some of these issues a couple of years ago, we spent a couple of sessions on this fascinating issue of what counts as transparency in a situation where you have a genuinely uninterpretable algorithm, where, again, [garbled] I understand is also a debatable term, but an algorithm that we are not able to interpret under given circumstances. There are very rich and fascinating questions there that deserve close scrutiny and attention.

That said, it is possible using an IRB structure to maintain selective confidentiality. So you could imagine a FinTech company that is using a proprietary machine learning algorithm to sort the creditworthiness of applicants. Profound social conflict is inevitably going to arise there, and I can say a few more words about that if people are interested. But profound social conflict is inevitably going to arise, and there are many subtle questions to be worked through. For example, does the algorithm pick out discriminatory patterns that already exist in society? Does it reinforce those? If the algorithm is, quote unquote, “formally instructed” to ignore those, will it then replicate them nevertheless, by virtue of picking out a proxy that the algorithm is capable of picking out? These are incredibly rich, fascinating issues. I know you’ve spoken about them before here, and I’m actually happy to discuss them as well. But one could imagine a private company with a proprietary algorithm just saying to the IRB: listen, we will show you what’s under the hood. You will agree not to share that with anybody else, but in your public account, what you will say is: we have been under the hood, and we say that what we consider to be the cutting-edge techniques that can be used to manage and limit discriminatory effects have been employed here. And those are techniques such as such and such or such and such. Right.
So imagine you agree with a very brilliant new professor at Columbia Law School, Talia Gillis, a recent graduate of the PhD and SJD programs at Harvard, who worked with me. One of Talia’s arguments is that the only really reliable mechanism for evaluating discriminatory effects in a range of algorithmic contexts is running empirical tests of those algorithms and measuring outcomes, much in the way that, historically, private actors who were trying to use existing law to constrain private discriminatory conduct in, say, the housing context or the employment context ran empirical tests to see whether a given company or institution was discriminating. So imagine one holds Talia’s view; it’s not the only possible view, but imagine one holds that view. Well, then what the IRB would do is say: we self-certify that we’ve run those tests, we’ve done the cutting-edge, you know, approach, and we’ve created a protocol, a supervisory protocol, where those tests will be run regularly on the data as it develops. And so we’re not showing you what’s under the hood, but we’re telling you what our approach is transparently, we’re telling you what the research is transparently, and we’ll probably be able to show you the results transparently, or compel the private actor, the corporate actor that has the proprietary algorithm, to do so. That’s just to sketch out an example of how this kind of institutional governance mechanism might potentially work.
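To make the outcome-testing idea concrete, here is a minimal sketch of the kind of empirical test Feldman alludes to. Everything in it is hypothetical and illustrative, not Gillis’s actual methodology or any real IRB protocol: a toy lending model (`toy_model`), synthetic income data, and a simple disparate-impact ratio of the sort long used as a screening heuristic in US discrimination practice. The point is only that an auditor can measure outcomes across groups without disclosing how the model works internally.

```python
import random

def approval_rate(model, applicants):
    """Fraction of applicants the model approves."""
    decisions = [model(a) for a in applicants]
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(model, group_a, group_b):
    """Approval-rate ratio between two groups of applicants.

    A ratio far below 1.0 suggests the model disadvantages group_a;
    the "four-fifths rule" from US employment-discrimination practice
    flags ratios below 0.8 for further scrutiny.
    """
    return approval_rate(model, group_a) / approval_rate(model, group_b)

# A deliberately naive toy model: it looks only at income. But in the
# synthetic data below, income is correlated with group membership, so
# income acts as a proxy for the protected attribute even though the
# attribute itself is never consulted by the model.
def toy_model(applicant):
    return applicant["income"] >= 50_000

random.seed(0)
group_a = [{"income": random.gauss(45_000, 10_000)} for _ in range(1000)]
group_b = [{"income": random.gauss(60_000, 10_000)} for _ in range(1000)]

print(f"group A approval rate: {approval_rate(toy_model, group_a):.2f}")
print(f"group B approval rate: {approval_rate(toy_model, group_b):.2f}")
print(f"disparate impact ratio: {disparate_impact_ratio(toy_model, group_a, group_b):.2f}")
```

An IRB with access “under the hood” could run exactly this kind of test on the real model and real applicant data, then publicly report only the resulting ratios, preserving the confidentiality of the proprietary algorithm itself.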

So that’s the sort of, you know, private IRB or independent IRB type of approach. Then there are some other potential governance mechanisms that are also worth thinking about, that go outside of the IRB context and that could also be borrowed from various institutional structures. There are industry-level regulatory bodies that could be created, which are always subject to the skepticism that they’re just like the Motion Picture Association, you know, the MPAA, controlled largely by their members. But it’s possible to create more robust, industry-wide regulatory actors, which, again, by use of transparency and independent funding, and real independence from the corporations that constitute them, could engage in regulation of a kind that is analogous to what a governmental regulatory agency might do, but could do it more efficiently than a government regulatory agency. And they could potentially also maintain certain kinds of confidentiality to a greater degree than a government institution might be able to do. So there, you have a full range of different regulatory mechanisms; you know, European governance is different from Chinese governance is different from US governance, and one can pick and choose in the institutional design process to obtain the best features or the most appropriate features here. And so there’s a full-scale set of options for what I would call private, collective regulatory governance, which, again, looks familiar in the context of state regulation, but avoids some of the problems that scientists and corporate actors alike inevitably fear when they start thinking about government regulation: among them, external political influence on that regulation; among them, a tendency to always be as conservative as possible to avoid criticism, to, you know, cover yourself in the worst-case scenario of danger or risk. So that’s yet another set of techniques that can be borrowed from the public sector, suitably adapted and tweaked.

I could go on about, you know, other possible directions. I won’t, because I want to leave as much time as possible for conversation. So I’m going to pause there and just say, in conclusion, that I’m eager to talk both about the particularities of how the Facebook model is working, but I’m also really eager to speak about other potential directions and options that might be more suitable in some of these AI contexts than the full-on Constitutional Court-like review board. Those may be IRB-style, they may be regulatory-style, and there may be other techniques, too. I have some other ideas for other things – they’re not as well developed, but maybe you can get me to throw them out there in conversation. What these potential directions all have in common is to say that we can learn from institutional governance solutions from different contexts and try to adapt and adopt them, and we should never say “that will never work here because it comes from this context.” Rather, we should say: in these other contexts, these things have these benefits and these costs; how might we try to adapt them to our needs, such that we will capture some of the benefits while reducing some of the costs? So thank you all for listening, and I’m looking forward to our conversation.

Allan Dafoe  35:17

Thank you, Noah. That was fantastic. And I’m sure if we were live, there would be a very enthusiastic round of applause from the 90-plus people in this room. I have lots of thoughts and questions, and I was very stimulated by this. But it’s now Gillian’s turn, and our honor to hear her share her thoughts.

Gillian Hadfield  35:36

Great. Thanks, Allan. And thanks, Noah, that was terrific, really. It’s such an important thing to be discussing, and I really like the way you wound up there with: we need to be looking for alternative regulatory models, we should look at and draw on other models out there, and then think creatively about what the demands of regulating in the AI context are, and how do we meet those demands? Lots and lots for us to discuss; I want to try to keep focused on a couple of points. First, I think it’s fantastic that somebody with your background in thinking about the origins of democracy and the development of the constitutional system is bringing that context here, because I do think we are at – maybe Allan is the one who used the phrase – a constitutional moment. I do think we are at a point in time where we are seeing kind of the same question as, you know, Magna Carta 1215, where we have entities – they’re not monarchs, but they’re private corporations – that have become the dominant aggregations of wealth and power, and they’re defining so much about the way our lives work. So exploring, okay, how do we democratize that process, is absolutely critical. I think it does raise the question: is that the right way to do this? Is it both feasible and desirable to democratize private technology companies? You’re exactly right to frame it up as – and I’m pretty sure I agree with you – the people who are running these corporations hate this: look, I don’t want to be the person deciding, you know, which of Trump’s tweets can be left up or what postings can be there. But as you pointed out, this is a global platform with two and a half billion people on the Facebook platform. It’s just something we’ve never seen before. 
And so I think both questions about democratizing that – is it feasible, and is it desirable – [garbled], so I think that’s exactly the way we have to be thinking about this moment in time. And then I want to say a little bit about, you know, the appeal to existing institutions. We’re talking about the Facebook Supreme Court particularly, but your comments sit in a broader context of thinking more generally about that – IRBs, and so on. But one of the things I think we’re also at a point of recognizing is that the set of institutions we created over the last couple of hundred years – in the context originally of the commercial revolution, then the Industrial Revolution, the mass-manufacturing economy, the nation-state-based economy and society – those institutions, you know, in many ways worked fabulously well for significant periods there. But there are lots of ways in which they are no longer working very well. So we’re talking here, you’re sort of using a model of the high-level Constitutional Court, but a lot of the issues we’re facing are like: you know, I’m user 5262 – if I got in there early, I guess – and I’ve got a picture from my, you know, party that I want to post, or I have a political statement I want to make. And those numbers are in the millions – I was trying to get the figure: something like 8 billion pieces of content were removed from Facebook in 2019. These are just massive, massive numbers. And one of the things we know about our existing institutions, which are heavily process-based and phenomenally expensive, is that the vast majority of people have zero access to them. Now, certainly that’s true if we go up to the level of the Supreme Court, and you’re not proposing that we create something that is going to be responsive to every individual who has a complaint. 
So I totally get why your focus, you know, jumps up to the Supreme Court rather than saying, you know what, let’s start with our trial courts. But I think that’s actually a really critical thing for us to be thinking about: these processes are incredibly expensive. They end up being like little pinholes of light into this big, big area. What can we be doing to, in fact, bring many, many more people into this process of expressing and communicating and constituting the norms of what we consider to be okay and not okay on our platforms – just to focus on that framing in the context of, you know, free speech and the other values that are lined up against free speech. What can we be doing to incorporate that? Now, I think that’s where we just have to come to grips with this massive mismatch between a huge volume and a costly process and – and I’ll go back to that language – the democratization of that process. I think we will not develop methods that are responsive that don’t ultimately involve AI. You’ve mentioned some of those. And actually, your concept of privately recruiting an IRB to review, under confidentiality provisions, you know, what’s under the hood in a model, and so on, I think is great.

I’ve been thinking about comparable models – Allan mentioned at the outset some of this work on regulatory markets – and I think we do need to be figuring out how we are going to simultaneously get investment in methods of, in this instance, content moderation that are still responsive and legitimate. But I think we’re gonna have to figure out ways to incorporate a lot more people. So I’m particularly interested in thinking through the technical as well as legitimacy challenge of how you can get many more people involved in that process. And I actually think that’s really important, not just from the point of view of thinking about equality, or thinking about equal participation, but also because it’s fundamentally critical for the constituting of social order that our norms are deeply rooted in ordinary people, their ordinary lives, and communities. And of course, once you start talking global, that’s where it becomes tremendously difficult. I worry a little bit that, you know, the model of the Supreme Court, with an elite group and so on, is going to make it actually pretty difficult to try to make that progress. One of the things I also want us to think about – to go back to this point about democratizing the private technology company – is that one of the things you might say is: can we let the market do some of the work here? In the sense that part of the global issue is, you know, that Facebook is such a massive platform and dominates the space so incredibly. Is there a role for communities to be working here? You know, can we develop groups within our platforms, multiple platforms, where people can basically, in that [garbled] from political scientists, vote with their feet – you know, voting with your browser for which platform, and which values and which norms, you want to follow? 
I think there are a lot of challenges to think about. I think this is the great challenge: how do we figure out how to respond to this massive scale, and respond to the global nature of these platforms, without taking decision-making further and further away from ordinary people and their experiences – their experience of being a member of the community who is seen and recognized in these environments. I’ll stop there. Thanks.

Allan Dafoe  43:14

Thanks, Gillian, that was great. I’m thinking, actually, perhaps Noah, we can give you some time to reflect and respond. And I’ll use the fact that I have the mic right now to add on to Gillian’s points the specific question of the choice between having a global court, and sort of a global moderation policy, versus culturally specific policies. And you discussed at one point having regions or countries, but yeah, that’s sort of just an interesting question.

Noah Feldman  43:43

Thank you. Thank you, Gillian. Those are really very rich and hugely important issues that you’re raising. So let me just say a few words about them. If I could summarize, and maybe slightly oversimplify, the argument that you’re making, it’s that we need greater democratization, greater public access, more people involved, I think you said, in order to aim at constituting social order. And you expressed a concern, which is completely correct, that the Constitutional Court model – and this I think would also be true of an IRB model – tends to rely on a smaller, elite group of people to make the relevant decisions. And I think that’s a correct analysis of what’s going on in both of those contexts. So I want to start by just acknowledging how incredibly challenging this problem is in democracies – not in, you know, platforms, not in AI, not on social media, but just in democracy, right? How does one get genuine public participation in decision making?

It remains the central problem in most developed democracies: some have great turnout, where lots of people show up to vote, but many have relatively weak turnout, where not that many people show up to vote. Voting, as political science has repeatedly demonstrated, is subject to all kinds of strange problems of principal-agent control, and doesn’t always give ordinary people all the options that they would like to see represented. And there are tweaks for that – proportional representation tweaks – which have their own consequences, like the production of a parliament with so many parties in it that it becomes very difficult for anything to get done. Even though a greater set of points of view are represented, there’s a set of complex trade-offs that arise there as well. You know, the strongest critics of contemporary liberal democracy would probably say that one of the worst things about contemporary liberal democracy is that it purports to give the public opportunities to participate, and doesn’t actually give that to them, or gives them some simulacrum of participation. So that’s just to deepen the problem that you’re describing. Even if we could borrow some of the features that come from democracy, that might not solve our problems, because democracy itself is struggling. Making it much harder is the problem that I like to sum up with the example that I’m sure most or all of you know about – the example that arose a few years ago, when Great Britain decided to use an online voting process to name a new research ship. As you will recall, the eventual winner was not Intrepid, or Valor, or Harry and Kate, but Boaty McBoatface. And sometimes in our conversations at Facebook, around the difficulties of democratization, we just summed up this problem, which I’ll say a word about, in that phrase: Boaty McBoatface. 
And, you know, to formalize it, the Boaty McBoatface problem is that, so far, it seems that when voting techniques are used online, the fact that the end user is very far from internalizing the costs of his or her vote makes it appealing – or not unappealing, maybe more importantly – to cast votes that are silly or frivolous or humorous. And so, you know, Facebook had actually experimented, more than a decade before I got involved with them, with a regulatory democratization approach, which is famous only in the small circle of people who care about online regulatory democratization. It was an utter disaster. They said, basically, that they wouldn’t make certain kinds of major changes on the platform without getting a certain number of votes from a certain percentage of users. They couldn’t get participation comparable to what they needed to get anything done. And it was also very subject to capture – again, a political science concept that’s very familiar here – by small, concentrated groups of people who had an interest and could generate votes. And actually, I mean, sometimes I wonder: how did it happen that, you know, a random constitutional law person made a suggestion and Facebook decided to do it? Of course, the reason was that when I came to Sheryl with this idea, and then she brought it to Mark, Mark had actually been thinking for years, for at least a decade, about potential ways to devolve power. But the problem that he and the very, very smart people around Facebook kept bumping into was that if you devolve power, you want to democratize it. And if you democratize it, you run into cycling problems, and capture problems, and Boaty McBoatface problems. And just to finish the thought: when I look at it from the outside, I think, oh, no wonder they liked this solution. 
Because it was about devolution without democratization. It was devolution into an institutional structure like a court, that is not technically a, you know, small-d democratizing structure. So this is all by way of acknowledgement. And then I’ll say a word – you should speak, Gillian, and then I’ll say a word about what I think might be scary.

Gillian Hadfield  48:58

Yeah, and I just want to jump in there and say: I think the challenge of developing a regulatory model here that is democratically responsive is as big a challenge as building AI. And it’s also why I focus on regulatory markets models, because I think we need to attract investment into this problem in the same way that we attract investment into building AI. So when I think about voting: I think that’s inevitably going to be poor – that was a technology that worked at various times, but I don’t think it’s going to work here. You’ve said that’s been tried. But being able to read the normative environment is something for which we now have tremendous tools at our disposal – like, what is the reaction to different kinds of content? I think we could be building machine learning models that are reading the rich, dense, massive volume of responses. And I think we should be figuring out how to do that and how to make it more legible. But I don’t think it’s only voting – we just sort of say we want the idea of voting. But, as you say, that’s kind of broken in our current offline worlds, and I’m not surprised it doesn’t carry over. Anyway, I just wanted to jump in there with that thought as well.

Noah Feldman  50:15

A couple of thoughts on that. First, I actually think it’s harder than AI, because we’re still in an early stage of AI. And yet the problem that you and I are talking about now – giving the public legitimate access to democratic participation – was posed explicitly by Plato and Aristotle. And in about 2,500 years, smart people have been thinking about it, and nobody’s really solved it. You could say that the most intense process – trying to mobilize a mass democratic public to make decisions effectively – probably goes back to the French Revolution. So let’s just say it’s been the last 200-plus years that people have been trying to do it, and a lot of really smart people have focused on it and haven’t really solved it. So I think it’s even harder. I also think it’s interesting when you say maybe we could use AI in order to solve it. And there will be hundreds of people out there – if there are hundreds of people listening; I’ve got a 222-people mark at the bottom of mine, but I don’t know if that means that’s the number of people listening. But if there are, then there are 221 people better than I at answering the technical question of whether current techniques of aggregation are promising for doing what I would call normative political theory: you know, substantive analysis of what people are saying out there, so as to glean a direction, maybe, but also so as to glean a set of arguments about legitimacy. That’s a hard problem. I don’t claim to say that it’s an insoluble problem, just that it’s a genuinely hard problem. And if we were over in the seminar room where we talk about administrative and regulatory law, and Gillian were to say, you know, we should improve our legitimacy by using machine learning tools to get a sense of what all the comments are out there, I would say: interesting, doubtful, tell me more, I guess, is what I would have said. And maybe it would work.

Just a last thought on this. I have a kind of approach to the problem that Gillian is talking about. And the approach is to say that we actually have a series of legitimating techniques that we use when mass voting doesn’t work very well. And those include transparent reason-giving and subjection to intense public criticism. When a regulatory body is silent, operates behind closed doors, and is not easily transparent for analysis, it tends to lose legitimacy. And, you know, those of you who are in the UK and lived through the Brexit process probably know – wherever you were on that issue – that the perception, and I’m not speaking of realities now, but the perception that European regulation was insufficiently transparent, and therefore could not be subject to detailed criticism, played a crucial role, I would argue, in the delegitimation within the UK of the project of European regulation. I mean, it’s not a coincidence that one of the most powerful pro-Brexit arguments – one of the most powerful Leave arguments – was, rhetorically, a claim of illegitimate regulation: illegitimate because non-democratic, and non-democratic because non-transparent. So transparency can play an important role, because then we have other institutions – institutions like advocacy groups, institutions like the press – that can engage in criticism of what are perceived as bad regulatory outcomes. So to me, in the absence of a magic-bullet solution, I am interested in finding ways that it’s possible to use existing mechanisms of legitimation – what I would call democratic legitimation in the absence of mass voting – to improve participation and to improve access. Not that these are perfect solutions at all – they’re very far from perfect – but they’re definitely starts in that direction. And they’re identifiable and they’re concrete, and you can point to them and say: this regulatory process is good because people know what’s happening. 
They know the reasons, and they can be criticized and discussed. This is bad because they don’t.

Allan Dafoe  54:26

Thanks. Sophie, over to you.

Sophie-Charlotte Fischer  54:37

Okay, can you hear me now? Perfect. Okay, so we’re already in the middle of this really interesting discussion. I just want to take a couple of steps back and talk about how we actually got to the point that we’re now talking about the Facebook Oversight Board, before offering some reflections on the limitations, but then also the strengths, of this approach, and maybe some of the lessons that we can learn for other regulatory models – maybe for the case of AI. Now, we all know that Facebook has for a long time made important decisions about what kind of content it removes or leaves up on its platform, decisions that affect its 2.7 billion users around the world. And we’ve just heard from Noah that within Facebook, too, there has been a lot of thinking before about how to make this process of content moderation more participatory. But I think what really has changed outside of Facebook over the last couple of years is that we have seen new challenges brought about by, for example, the interference in the 2016 US elections, in which Facebook played a prominent role, or even the persecution of targeted populations, most notably the Rohingya minority in Myanmar. And I think these cases have really shown that the stakes inherent in handling the kind of content that we see on a platform like Facebook have changed. And these incidents have not only emphasized the difficulty of balancing freedom of expression against removing harmful content from the platform in different national and cultural contexts. But – and I think this is important to stress again – they also created tangible economic costs for Facebook, due to a notable loss of consumer trust, which threatened Facebook’s business model and future growth.

So I think these developments have really emphasized, again, the need for new measures, and participatory measures, to evaluate content in a fair and transparent manner in order to maintain the trust of Facebook users in the long term. And what we’re looking at now is the Facebook Oversight Board, which is certainly one of the most ambitious private governance experiments to date: a transnational platform’s mechanism to govern something which is very vital to the public and an essential human right – speech. Now, the board hasn’t even started operations yet, and we’re still at a very early stage. But different facets of its design have already been criticized widely, for example by journalists, but also by nonprofit organizations, and I very briefly want to get into one of the most fundamental criticisms, and that is the at present very limited mandate of the board. The limited mandate of this board implies that, most probably, the board won’t be in a position where it is able to really solve some of the most critical issues related to the content that we see on these platforms and that does the most harm. So, for example, it probably won’t tackle the selection and amplification of certain content made visible to users by Facebook’s algorithm, including disinformation; it won’t necessarily minimize coordinated attacks on democracies around the world. And although there is an expedited procedure to bring issues more quickly to the attention of the board, the board won’t be able to offer a quick reaction to, and prevent, the spread of harmful content, such as the live streaming of the Christchurch shooting a while ago. Now, some of these limitations are probably inherent in the function of a court-like body such as the board, which exerts influence by making clear how the law applies to cases. 
But the problem is that many of the most contentious incidents that [garbled] the past few years – and I’ve named a couple before – which have shown that the stakes in handling this kind of content have changed, won’t be tackled by an organization that, at least partially, was established in response to them, to begin to regain user trust and also to safeguard Facebook’s future growth. So I would argue that there’s a risk that the board could distract regulators from addressing some of the fundamental and most harmful activities on the platform and by the company, activities that will remain. Having said that, I would also argue that when we judge the board based on its mandate and its court-like function, from what we know about it today, its design is very thoughtful and also very promising. Not only is it a clear improvement on the existing system that we currently have in place, but I would also argue that we can learn different lessons from the way the board was set up, especially with regard to one of the key challenges of industry self-governance: how to structure a private governance mechanism and establish legitimacy already in the institution-building process, given that it originates with the organization that it is supposed to check. And by legitimacy, I mean here how to ensure meaningful transparency, impartiality, and accountability.

And I briefly want to reflect on five of these lessons that I think we have learned from the process of how the board was established. The first is probably a very banal one, and that is power sharing. So, first of all, we need to reach a situation, when we look at these tech firms, where they’re actually willing to share power. And I think Facebook is a really extreme case here, because due to its dual-class stock structure, the exclusive power over contentious decisions lay for a very long time with the CEO, Mark Zuckerberg, and now the board has the power to actually overrule Zuckerberg on contentious decisions and also on previous decisions made by content moderators. The second aspect is public outreach. What I found very fascinating about the way in which this board was set up is that there was actually a months-long consultation process all around the world, with users and stakeholders in different countries, and also that this feedback was actually published afterwards, and you can see that it has flowed into the design of the board. So I think developing a public process that incorporates listening to outside users and stakeholders, and showing as a company that you take this feedback seriously, is a really important issue to keep in mind. The third aspect is diversity. Facebook’s community standards have looked very American for a long time, and I think they’ve shifted towards more of a European approach, but input from the global south has always been absent. And while the composition of the board as it looks now is definitely not perfect, I think it reflects much better the diversity of its user base in the very broadest sense, representing different cultural backgrounds, professional experiences, languages, etc. The fourth aspect is independent judgment, a really fundamental one. 
And I think if a private governance initiative is to be perceived as legitimate, it is of course important that the people working in these kinds of boards or outside organizations should not be working for the company. And of course there’s a chicken-and-egg problem that Facebook has also faced: how to select the first members of this kind of institution, who will then select other members. But I think the solution of using a non-charitable purpose trust to pay the members, and setting up a limited liability company to run the operations of the board, is actually quite an elegant solution that we can learn from. And the last aspect is transparency. I think here, too, Facebook did quite a good job of making all the steps and key decisions taken on the design of the board transparent. And [garbled] plans to make the decisions of the board transparent, including how they’re being implemented, and also to have policy recommendations issued by the board explained to the public – how they’re being implemented or, if they’re not, why not. And I think being transparent all along the way also really increases the cost for Facebook of just dropping the board or threatening its independence.

So those were basically the five lessons where I think we can really learn from this process. And to conclude: the Oversight Board, as it stands, is certainly no silver bullet to reform Facebook, and it shouldn’t distract regulators from tackling some of the remaining, probably most harmful, activities that are happening on the platform and that are, to a certain extent, also promoted by the platform. However, within the scope of what an outside body with a limited mandate like the board can do, it is certainly a really important step towards more transparency, and also towards empowering users by providing them with a potential lever for accountability and a mechanism for due process. I also want to stress, at the end, that I think it is way too early to really say how meaningful and effective the board will eventually be, and whether its operations will be independent, before it has even started operations. And there are many other important unknowns outside the realm of the board and Facebook, including how exactly foreign governments or national governments will react to the board, how national courts will react to it, and how other platforms will perceive it. So I think for now, to close, we can just impatiently wait for the board to finally start its work and see how things unfold. Thank you very much.

Allan Dafoe  1:03:11

Thanks, Sophie. And Noah’s muted. There we go.

Noah Feldman  1:03:14

Let me make just a few responses, and in the process I think I’ll try to answer Allan’s question, which I didn’t answer before, about the global versus the regional. I agree with, you know, 95% of what Sophie said. And it’s important to note that experiments need to evolve in the real world, and that evolutionary experimentalism and incrementalism are sometimes the right thing. When you’re trying something radical – you’re trying a radical experiment – you don’t necessarily want to roll it out giving it all of the power to do everything that it could possibly do, because it might not work well. Instead, a little incrementalism is appropriate. And in fact, every Constitutional Court in the world has only gradually and incrementally increased its power. You also have to realize that in the process of institutional design, the Oversight Board faced two opposite criticisms from within Facebook. One was: it will be much too powerful, it’s going to take over the core decision making that goes to our business function and shut us down, we can’t have this. The other was: this will be a big waste of time and money, it will be purely symbolic, it will have no impact, it won’t help us at all – it’s a waste of time and money to do it. And my response to both was to say: you’re both completely correct that these are risks, but they can’t both be correct. You know, either it will turn out to be so powerful that it threatens Facebook’s business model, or it will turn out to be purely symbolic. The history of constitutional courts is a history of gradually expanding powers, sometimes having to pull back after they’ve gotten too much power. But you also couldn’t possibly have convinced the board of directors of a major company, or the management of the company, or, you know, the leading shareholder in the case of Mark, to do something you thought was going to destroy the company. And in fact, that wouldn’t be responsible on his part. 
So I think we will see whether the limited mandate holds. First of all, that mandate is described already in the documents as intended to expand. Second of all, there are many things that the board can do to expand its mandate right out of the box. They can say to Facebook: we don’t like your rules, write new ones in light of these values – and they have the capacity to do that written into their mandate, which is a very, very great power. In the first instance, they’re supposed to decide if Facebook is following its own rules, and if those rules accord with its values; in the second instance, they can say: your rules don’t fit your values, write new rules. So I’m agreeing with Sophie that we’re at the beginning of the experiment, and we’ll see how it goes. And I hope that we remain patient rather than impatient, because it will take time for this experiment to play out. It’s not going to solve all of the problems at Facebook, and it’s not going to solve them all right away.

With respect to the global versus the local: that was a really interesting and important design question, Allan, from the beginning, and it may be relevant in the AI context as well. It was very relevant with respect to content moderation, because reasonable cultures, let's say, could have different solutions to the question, and there are real cultural value differences on the platform. What is culturally appropriate to wear to the beach in San Jose is different from what is culturally appropriate to wear on Main Street in Jeddah at prayer time. I like being in both of those places, but they have very different cultural norms for what dress is appropriate. And I mention that because nudity policy is one of the most basic policies that a social media platform has to cope with. In all of the consultation that Facebook did, I didn't encounter anybody who said Facebook should have such radical free expression that it's open to pornography. But there is such a view out there. And there has been a real fight on Instagram about the extent to which sex workers' accounts should be constrained or limited, with organized sex workers in some places in Northern Europe arguing for a greater range of expression in order to facilitate their businesses. So this is a kind of everyday, day-in day-out difficult thing to deal with. I think the difficulty of going down the every-culture-on-its-own road is basically a line-drawing one. Where do you draw the line? What do you say is the definitive view within a given culture? Some women in Saudi Arabia really don't want to wear the hijab, and some consider the hijab to be liberating and say so. Who's right?
That's a very difficult social question, which couldn't be answered without some independent base of [garbled]. As Sophie says, community standards have traditionally been very American in their orientation, and opening that up is risky. I was often asked in Facebook internal deliberations: well, what are the things that you imagine could happen in terms of interest group politics? And I said, well, if you're going to break groups down by interests, the single largest group of Facebook users is Muslims.

Noah Feldman  1:07:59

Right, and so, you know, not all Muslims agree on all things. Many Muslims disagree on a wide range of things. But imagine that there were agreement among Muslims on some set of issues. Would one then want the views held by Muslims to govern the platform? What about the views of Christians? What about the views of others? These are hard and genuine questions. And I think Facebook in the end decided that, hard as it is to have standards that fit the whole platform, it would be harder to divide the world up in a kind of quasi map-making way, to create different Facebooks for different contexts and places. And I think that's where they landed, coupled with Facebook's ongoing vision of wanting to be a global community. And we had a fascinating conversation about what a global community is. Can there be a global community with two and a half billion people? What does the word community even mean in that context? But that is also part of the aspirational picture. So, you know, much more to be said about all these topics. But I think our time is coming to its end, if I'm not mistaken. So I just want to thank all of you for great questions and comments. And if we have more time, I'm happy to keep talking; I'm leaving that up to you.

Allan Dafoe  1:09:06

Great, well, I'm torn, because formally we said it would end in one minute, but of course I would love to keep talking. Why don't we see if there are any burning last thoughts from our discussants? And maybe I'll say something, and then, Noah, you can reflect again, and then we'll close. Gillian, Sophie, do you have anything last you want to share?

Gillian Hadfield  1:09:24

So I think this question about the global and the local is really quite critical. And I think the challenge is: how do you have a global platform that yet allows smaller subgroups to have different values? Somebody in the chat has picked up this idea of, you know, competition between those different subgroups. The challenge of harmonizing standards globally is one we've been struggling with in many, many domains for decades, and I don't think it's reasonable to think we'll get there, Allan; I've had a lot of conversations along these lines over time. So I think the real challenge is: how can you have a global community where people nonetheless feel that there are smaller communities to which they belong and in which they feel reflected and respected?

Sophie Fischer  1:10:17

I agree. And I think it's also going to be very interesting to see what the support staff of the board will be able to contribute in terms of acquiring local knowledge that may be necessary to really get into the culture of these individual cases. It's not only about the diversity of the board members as such, but really also about the support staff and what they can contribute.

Allan Dafoe  1:10:40

Maybe I'll just add to this. I find this decision fascinating politically, and I can completely believe that global is just the most viable solution. Because, as you say, are you going to make them national? Are you going to start defining the cultural, social networks? I'm imagining maybe there's some clever social network clustering algorithm that could allow subgroups to self-identify, self-select. And maybe this actually gets to a broader governance question about Facebook, which is the ability of users to define the mechanisms of their interaction. Maybe different users would like different weightings of what kind of media they're provided with: news versus family updates versus political inputs. Maybe I'll say one last thing, which is, I think your argument is right; it makes sense that in many ways you want to start with the lowest-hanging fruit. If we think this kind of governance initiative is promising, you want to start with something that ideally will succeed: that ideally is good for Facebook, and good for Facebook shareholders, and good for users, and good for the public. And then you can grow from there. I can imagine that speech moderation is in many ways the easiest of the governance issues facing a company like Facebook, because there aren't as many trade-offs between Facebook's profit and the decisions that are being made, versus other decisions, like how to personalize advertising, or really anything around advertising, or perhaps the addictiveness of the device: to what extent you use various notification techniques or other techniques to keep people engaged. So maybe a worry is that it's going to be much more difficult to have these sorts of solutions for domains where there is more of a trade-off between the profit motive and the legitimate decision. I'll conclude there.
So, over to Noah, if you have last thoughts.

Noah Feldman  1:12:40

Just briefly, again thanking everybody for great comments: I think it's worth noting that the problems we're talking about are the problems of human societies. They're problems that we face at the local level, at the sub-state level, and they're problems we face at the global level. One interesting thing about the social media platforms is that they're both not state problems, because this is a private corporation, not a state. Facebook doesn't have an army, it can be shut down by states; Facebook is weaker in many ways than most states. But at the same time, they're also super-state problems, because they're about crossing borders and users globally. And these are problems that in international affairs, international relations, and international law we also haven't solved. The United Nations has the Universal Declaration of Human Rights, which is defined at such a high level of generality that lots of countries can adopt it, but many of those countries don't follow those principles, because that generality was the only way you could get the consensus. So you have both sub-state-level problems and super-state problems. And I think that carries through to the AI context as well, insofar as AI is deployed by platforms that have this kind of reach, and insofar as it's to a certain degree shaped and controlled at the highest end by corporations that are multinational and that are present in many different contexts. And I guess I would end just with a plea to people who are listening in to remember that, in order for us to make good decisions about governance, whether in AI or other tech contexts, we need to be deeply aware of the body of social conflict, and the body of thought and debate, that exists around the deepest governance problems that we face as human beings. In the end, you know, when Aristotle said that humans were political animals,
he didn't just mean that we do politics; he meant that we live in a polis, and that we make a politeia, which is a constitution. Humans have the capacity, uniquely, not just to live socially (lots of animals are social), but to have a consciously thought-through set of publicly articulated values and norms by which we try to live together. And that, to me, is the challenge of governance. And I'm all for doing that across the disciplines; the less we hive ourselves off, the better we'll do. We also have to have modesty in knowing that, unlike some problems in science, and unlike some problems in AI, which may actually be soluble by better work and faster processors and more sophisticated algorithmic design, some of the problems that we're talking about here do not admit of definitive solutions. If they did, we would have converged on one system of government sometime in the last 3,000 or, say, 10,000 years since we started making constitutions. But we haven't converged, because there are a range of different possibilities, a range of different viewpoints, again, about which reasonable people can disagree. So some degree of epistemological modesty is called for. That's always good in life to have, and I'm not the one to tell anybody who works in the scientific domain to be epistemologically modest, but what I can say is that in the domain of governance, that kind of modesty is very much called for. For people like me, and like you, who want to contribute to doing better governance, it behooves us to be modest, and incremental, and cautious, and experimental. So thanks to all of you for a great conversation, and thanks to those who listened in for listening in.

Allan Dafoe  1:16:29

Fantastic, what a great conclusion. So yes, thank you again to our wonderful discussants, and to Noah for this great conversation.

Gillian Hadfield  1:16:41

All right. Thanks, everybody. Bye bye.




21May

Ben Jones & Chad Jones on Economic Growth in the Long Run: Artificial Intelligence Explosion or an Empty Planet?


Benjamin Jones is the Gordon and Llura Gund Family Professor of Entrepreneurship at the Kellogg School of Management at Northwestern. He studies the sources of economic growth in advanced economies, with an emphasis on innovation, entrepreneurship, and scientific progress. He also studies global economic development, including the roles of education, climate, and national leadership in explaining the wealth and poverty of nations. His research has appeared in journals such as Science, the Quarterly Journal of Economics and the American Economic Review, and has been profiled in media outlets such as the Wall Street Journal, the Economist, and The New Yorker.

Chad Jones is the STANCO 25 Professor of Economics at the Stanford Graduate School of Business. He is noted for his research on long-run economic growth. In particular, he has examined theoretically and empirically the fundamental sources of growth in incomes over time and the reasons underlying the enormous differences in standards of living across countries. In recent years, he has used his expertise in macroeconomic methods to study the economic causes behind the rise in health spending and top income inequality. He is the author of one of the most popular textbooks of Macroeconomics, and his research has been published in the top journals of economics.

The session was moderated by Anton Korinek (UVA) and featured Rachael Ngai (LSE) and Phil Trammell (Oxford) as discussants.

You can watch a recording of the event here or read the transcript below:

Anton Korinek  00:06

Welcome to our webinar on the governance and economics of AI. I’m glad that so many of you from all corners of the earth are joining us today. I’m Anton Korinek. I’m an economist at the University of Virginia. And the topic of our webinar today is economic growth in the long run, whether it be an artificial intelligence explosion, or an empty planet. And we have two eminent speakers, Ben Jones and Chad Jones, as well as two distinguished discussants, Rachael Ngai and Phil Trammell. I will introduce each one of them when they are taking the stage.

We’re excited to have this discussion today, because the field of economic growth theory has gone through a really interesting resurgence in recent years. At the risk of oversimplifying, a lot of growth theory in the past has focused on describing or explaining the steady state growth experience that much of the advanced world has experienced in the post-war period, that was captured in what economists call the “Kaldor facts.” But in recent years, a chorus of technologists, especially in the field of AI, have emphasized that there is no natural law that growth in the future has to continue on the same trajectory as it has in the past, and they have spoken of the possibility of an artificial intelligence explosion, or even a singularity in economic growth. Our two speakers, Ben Jones and Chad Jones, have been at the forefront of this literature in a paper that is published in an NBER volume on the economics of AI. And Ben will tell us a bit about this today. And since an explosion in economic growth is by no means guaranteed, Chad will then remind us that the range of possible outcomes for economic growth is indeed vast. And we cannot rule out that growth may, in fact, go the other direction.

Our webinar today is co-organized by the Center for the Governance of AI at Oxford’s Future of Humanity Institute and by the University of Virginia’s Human and Machine Intelligence group, both of which I’m glad to be a member of. It is also sponsored by the UVA Darden School of Business. And before I yield to our speakers, let me thank everyone who has worked hard to put this event together: Anne le Roux, Markus Anderljung at the Center for the Governance of AI and Paul Humphreys at the UVA Human Machine Intelligence Group, as well as Azmi Yousef at Darden.

So let me now introduce Ben Jones more formally. Ben is the Gordon and Llura Gund Family Professor of Entrepreneurship at the Kellogg School of Management at Northwestern. He studies the sources of economic growth in advanced economies, with an emphasis on innovation, entrepreneurship, and scientific progress. He also studies global economic development, including the roles of education, climate, and national leadership in explaining the wealth and poverty of nations. His research has appeared in journals such as Science, the Quarterly Journal of Economics and the American Economic Review, and has been profiled in media outlets such as the Wall Street Journal, the Economist, and The New Yorker. Ben, the virtual floor is yours.

Ben Jones  03:49

Okay, thank you very much, Anton, for that introduction. And let me share my screen here. It’s great to be with you to talk about these issues. And thanks, again, to Anton and the organizers for putting this together and for inviting me to participate. So the first paper that I’m going to talk about is actually joint with Chad, your second speaker, he’s gonna appear in both, and this is also with Philippe Aghion. The idea in this paper was rather than sort of a typical economics paper, where you go super deep into one model and do all the details, this was really to kind of step back and look at the kind of breadth of growth models that we have. And then say, well, how would you insert artificial intelligence into these more standard understandings of growth? And where would that lead us? So we actually have a series of sort of toy models here. We’re exploring the variety of directions this can lead us and seeing what you have to believe in order for those various outcomes to occur. So that’s kind of the idea behind this paper. I’m going to do this in an almost non-mathematical way, not a completely math-free way, but I know that this is a seminar with a group of people with diverse disciplinary backgrounds. I don’t want to presume people are steeped in endogenous growth models. So I’m going to try to really emphasize the intuition as I go through the best that I can. I will have to show a little bit of math a couple of times, but not too much.

The idea in this paper is: how would we think about AI? You might think that AI helps us make goods and services, things that go into GDP and that we consume. It also might help us be more creative. So we're going to distinguish between AI entering the ordinary production function for goods and services in the economy, and AI entering the so-called knowledge production function, into R&D, where it might help us succeed better in revealing new insights and breakthroughs about the world and the economy. And then the implications we want to look at are two very high-level ones. What will happen to long-run growth under various assumptions; what do you have to believe to get different outcomes in terms of the rate at which standards of living are improving? But also inequality: GDP per capita might go up, but what share of that is going to go to labor, to particular workers? There's obviously a lot of fear that AI would displace workers, and that maybe more and more of the fruits of income will go to the owners of the capital, or the owners of the AI. And then of course, there's this other idea, almost more from science fiction it seems, but taken seriously by some in the computer science community, that we might actually experience radical accelerations in growth, even to the point of some singularity. Anton referenced how growth has been very steady since the Industrial Revolution, but maybe we're going to see an actual structural break, where things will really take off. And of course, as Chad's paper will show later, it may be going the other way. We'll explore that as well.

So how are we going to think about AI? You might think AI is this radically new thing, and in some ways it is. But one way to think about it is that we are furthering automation. What are we doing? We're taking a task that is performed by labor, maybe reading a radiology result in a medical setting, and then we're going to have a machine or algorithm do that for us. Every image search on Google used to depend on people categorizing which image is a cat; now Google can just have an AI that tells us which image is a cat. If you think about it in terms of automation, that can be very useful, because then we can think about AI in more standard terms that we're used to, to some extent, in economics. In the past, the Industrial Revolution was largely about replacing labor with certain kinds of capital equipment, maybe textile looms and steam engines for power. AI is sort of a continuation of that process in this view, in things like driverless cars and pathology and other applications. So that's one main theme in the work: how we want to introduce AI into our thinking of growth and see where it takes us.

The second main theme that really came out as we developed and wrote this paper is that we want to be very careful to think about not just what we get good at, but what we're not so good at: the idea that growth might be determined more by bottlenecks, that growth may be constrained not by what we get really good at, but by what is actually really important, what is essential, and yet is hard to improve. And I'll make a lot of sense of that as we go, intuitively.

I have a picture here. These guys are sugar beet farmers, pulling sugar beets out of the ground by hand, harvesting them. That's how it was done. Then this is a combine-harvester-type machine that's automating that and pulling sugar beets out of the ground with a machine. So that's kind of like 20th-century automation. And then in the lower picture, I'm trying to think about AI as automation. On the left, if you've seen the movie Hidden Figures, these are the computers. I always think it's very interesting: computer was actually the job description. These women were computers at NASA, involved in spaceflight, and they were actually doing computational calculations by hand. And then on the right, I have one of the massive supercomputers that have basically replaced that job description entirely. So we see a lot of laborers being replaced by capital, raising productivity, but also displacing workers. And so how do we think about those forces?

Okay, so one way to think about this is to start with a Zeira model, which is the following. Imagine there are just n different things we do in the economy, n different tasks, and each task represents the same constant share of GDP, of total output. To an economist, that would sound like Cobb-Douglas. If you're not an economist, ignore that; we just imagine that every task has an equivalent share of GDP, for simplicity. And when we think about automation, what we're saying is that a task was done by labor, but now it might be done by capital equipment instead. For AI that would be a computer and an algorithm; a combine harvester would be a piece of farming equipment. And so if you think that a fraction beta of the tasks are automated, then the capital share of total GDP is beta. That means labor gets one minus beta, and the expenditure on the capital equipment is a beta share of GDP. That's a very simple model, very elegant in a way. And it would say that if we keep automating, if you increase beta, if we keep taking tasks that were done by labor and replacing them with machines or AI, what will happen? Well, the capital share of income will increase, and the labor share of income will decrease. So that sounds like inequality, in the sense that labor will get less income. That might sound very natural; maybe that's what's happening today. We have seen the capital share going up in a lot of advanced economies, like the US, and it seems like there's a lot of automation going on, from robots and these new AI-type things. Of course, those two trends may just happen to be correlated, but if we think that AI is causing that rise in the capital share, well, this would be a model in which that could be true.
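To make that accounting concrete, here is a minimal numerical sketch of the task-based logic just described. The function name and parameter values are my own illustrations, not from the talk's slides; the only substance is the identity that with equal task shares, the capital share equals the automated fraction beta.

```python
# Toy version of the Zeira-style task model: each of the n tasks carries
# an equal share of GDP, so if a fraction beta of tasks is done by
# capital, the capital share of income is beta and labor gets 1 - beta.

def factor_shares(beta: float) -> tuple[float, float]:
    """Return (capital_share, labor_share) when a fraction beta of tasks is automated."""
    if not 0.0 <= beta <= 1.0:
        raise ValueError("beta must lie in [0, 1]")
    return beta, 1.0 - beta

# Automating more tasks mechanically shifts income from labor to capital.
for beta in (0.2, 0.5, 0.8):
    k_share, l_share = factor_shares(beta)
    print(f"beta={beta:.1f}: capital share={k_share:.1f}, labor share={l_share:.1f}")
```

The point of the sketch is the mechanical prediction the talk then questions: in this model, more automation always means a rising capital share.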

The problem with that model, though, is if you look backwards, we've seen tons of automation, like sugar beets or so many other things, robots in automobile manufacturing, and yet we didn't see the capital share go up; it was very, very steady in the 20th century. So that suggests this model just wouldn't really fit our normal understanding of automation, and it's not clear that it's quite the right model. So how can we repair it? A simple way, one idea that we develop in the paper, is to introduce the so-called Baumol's cost disease, which is that the better you get at a task, the less you spend on it. As you automate more tasks, maybe the capital share wants to go up, but something else also happens. If I automate a task, like collecting sugar beets, I can start throwing a lot more capital at that task: I can keep getting more and more machines for doing sugar beets. Moreover, the capital I put at the task might get better and better; I first use a pretty rudimentary type of capital, and eventually these very fancy machines are introduced, then computers, and then computers get faster. If you throw more capital or better capital at it, what's going to happen? Well, you're going to get more productive at getting sugar beets, or doing computation at NASA, and so the cost of doing the task is going to drop. But if the cost drops, and things are kind of competitive in the market, the price should drop too. So what's going on? You can do the task at greater quantity, but the price of the task you're performing will fall. So what's the share in GDP? Well, the quantity is going up, but the price is falling. If the price is falling fast enough, the share in GDP will actually go down, even though you do more of it. You get more sugar beets, but the price of sugar beets plummets. And so sugar beets as a share of GDP is actually declining. And then what happens is that the non-automated, bottleneck tasks, the ones you're not very good at, actually come to dominate more and more.
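A small sketch of this cost-disease mechanism. The setup here is my illustrative assumption, not from the talk: two tasks combined with a CES aggregator whose elasticity of substitution sigma is below one (tasks are complements), priced competitively, so a task's expenditure share is proportional to its productivity raised to the power sigma minus one. Because that exponent is negative, the price of the automated task falls faster than its quantity rises, and its share of GDP shrinks.

```python
# Baumol's cost disease in a two-task economy (illustrative parameters).
# With CES elasticity sigma < 1 and competitive pricing, a task's
# expenditure share is proportional to A**(sigma - 1), where A is its
# productivity; for sigma < 1 the exponent is negative, so getting
# better at a task shrinks its share of GDP.

def automated_task_share(A_auto: float, A_bottleneck: float = 1.0,
                         sigma: float = 0.5) -> float:
    """GDP expenditure share of the automated task under a two-task CES economy."""
    w_auto = A_auto ** (sigma - 1.0)
    w_bott = A_bottleneck ** (sigma - 1.0)
    return w_auto / (w_auto + w_bott)

# The better we get at the automated task, the smaller its GDP share;
# the hard-to-improve bottleneck task comes to dominate the economy.
for A in (1, 10, 100, 1000):
    print(f"A_automated={A:>4}: share of GDP = {automated_task_share(A):.3f}")
```

With sigma = 0.5, the automated task's share falls from one half toward zero as its productivity grows, which is the sugar-beets story in miniature.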

If you look backwards at 20th-century history, or back to the Industrial Revolution, we see that agriculture and manufacturing have had rapid productivity growth and lots of apparent automation, like sugar beets, and yet agriculture is a steadily dwindling share of GDP, and manufacturing's GDP share also seems to be going down. So what you get good at, what you automate, interestingly becomes less important with time, as it starts to disappear in the overall economy. We're left with things like health services, education services, government services. This is Baumol's cost disease point: the things that we find hard to improve, the hard stuff, actually come to take on a larger and larger share of GDP. And if we can't improve that, then our chances for growth dwindle, because it matters so much to GDP.

So in this view, the capital share is a balance between automating more tasks, which tends to make the capital share go up, and the expenditure share of each automated task declining, which tends to make it go down. So one model we offer in this paper is what we can call "more of the same": maybe that's what AI is, maybe AI is just more balanced growth. We keep automating more and more tasks, but they keep becoming a dwindling share of the economy, and we never automate everything. And you can actually show, as we do in the paper, a model where, even though a greater and greater share of the set of tasks is being done by capital equipment and artificial intelligence, and a tinier and tinier share by labor, labor still actually gets, say, two thirds of GDP; it still gets the same historical number. And again, why is that? It's because all the capital stuff is doing more tasks, but its price is plummeting, because we're so good at it. And you're left with just a small set of tasks being done by labor, which the economy pays enormously for. And that may be what's going on in the economy. It's certainly consistent with what's been going on in the 20th century, to a first order, without overstating the case; it's broadly consistent with the stylized facts of growth. But that would suggest AI is, again, just more of the same. We just keep automating.

Here's a simulation from our paper. This is steady-state growth. If you look on the x-axis, we're looking over five centuries. You get steady-state growth even as automation proceeds. Here's the green line: you're ultimately automating almost everything, just slowly, and you never quite get to the end. And you just get constant growth, and you can get a constant capital share, not a rising capital share. This is actually an idea that I've been developing in a new paper, which is almost done, seeing how far we can go along this line.

Okay, but let's take a different tack, because a lot of people who observe artificial intelligence are excited by the possibility that maybe it will accelerate growth. And many futurists make claims that we could even get some massive acceleration, something like a singularity. So we explore that in this paper as well: what would you have to believe for this to happen? We consider two different typologies of a growth explosion. In what we call a type one growth explosion, the growth rate departs from this steady-state early-21st-century experience, and we see a slow acceleration in growth, maybe to very, very high levels. The other would be a type two, where we mean a literal, mathematical singularity: productivity and income go to infinity at some finite point in time in the future. Surprisingly, using standard growth reasoning and automation, you can get either of those outcomes. One example of the first (there are more) is when you do achieve complete automation. So not just automating at a rate and never quite finishing; now we're going to fully automate. Here's my first equation: Y is GDP, K is capital. That's the automation capital: all the combine harvesters, and the supercomputers, and the AI. And A is the quality of the capital, the productivity of one unit of capital. This is fully automated; in other words, there's no labor there, there's no L. Labor is now irrelevant to production of GDP; we can do the whole thing just with machines. That's what that's saying: output just depends on K and the quality of the K, which we call A.
If you look at that, the growth rate in Y is going to be the growth rate in A, the technology level, plus the growth rate in capital. Now, the thing about capital, which is really interesting and different from labor, which Chad's going to be going over in his paper, is that with capital you can keep making more and more of it, because of how you make capital: you invest in it, you build it, and that comes out of GDP. So think about this equation: if I push up capital, I get more output, and then with more output, I can invest more. And more importantly, if I push up the level of technology, I get more and more output for every unit of capital; that increases GDP, so I can invest more and keep building more capital. So the growth rate actually turns out to be what's below (I'm ignoring depreciation). Basically, you can see that as long as you can keep pushing up the level of technology, so you keep improving the AI, you keep improving computers, the growth rate is going to track with A; it's going to keep going up and up and up. And this is a type one growth explosion. It's a so-called AK model, an early standard model in endogenous growth theory. If we can automate everything, this suggests that we can in fact have a very sharp effect on the growth rate. That's one very strong view of what AI might do.
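Spelled out, the argument runs as follows. The saving-rate symbol s is my notation, and depreciation is ignored as in the talk:

```latex
% Full automation: output is produced by capital alone
Y = A K
% Capital is accumulated out of saved output (no depreciation):
\dot{K} = s Y = s A K
  \quad\Longrightarrow\quad
  g_K \equiv \frac{\dot{K}}{K} = s A
% Hence the growth rate of output is
g_Y = \frac{\dot{Y}}{Y} = \frac{\dot{A}}{A} + \frac{\dot{K}}{K} = g_A + s A
% As long as the technology level A keeps rising, the growth rate g_Y
% itself keeps rising with A: a "type one" growth explosion.
```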

Interestingly, another place to put AI, as I alluded to at the very beginning, is in creativity and innovation itself. And if you do that, things can really take off. So this is a knowledge production function: A-dot is the rate of change of the level of technology, the quality of the capital. And if I fully automate how we produce that (again, there’s no labor in this equation), it just depends on capital and on the state of technology itself, A. And that’s going to act a lot like the second equation, which says that the growth in A depends on the level of A raised to some parameter phi. That’s positive feedback: I push up A, growth in A goes up, which causes growth in Y to go up; I push up growth, and then A goes up, and it keeps going like this. And if you solve the differential equation, it actually does produce a true mathematical singularity: there will be some point in time t-star, which is definable, at which we achieve infinite productivity. All right. Now, maybe that sounds like a fantasy. And it would be a fantasy if there are certain obstacles, which I’ll just go through very quickly.
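To see where t-star comes from, here is a minimal check (constants and parameter values are mine, set to one for illustration). With fully automated idea production, the talk’s equation amounts to A-dot proportional to A to the power 1 + phi; for phi greater than zero this has a closed-form solution that reaches infinity in finite time.

```python
def blowup_time(a0=1.0, phi=0.5):
    """Finite time t* at which A(t) diverges, for phi > 0."""
    return a0 ** (-phi) / phi

def a_of_t(t, a0=1.0, phi=0.5):
    """Closed-form solution of Adot = A**(1 + phi), valid for t < t*.

    Separating variables gives A(t) = (A0**(-phi) - phi*t)**(-1/phi),
    which blows up as t approaches t* = A0**(-phi) / phi.
    """
    return (a0 ** (-phi) - phi * t) ** (-1.0 / phi)

t_star = blowup_time()        # equals 2.0 with these toy parameters
a_near = a_of_t(0.999 * t_star)  # productivity is already enormous
```

This is the type two singularity in miniature: the positive feedback from phi greater than zero is what produces the finite-time divergence.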

One obstacle is that you simply can’t automate everything. Both of those models assume you can get to a lot of automation, but maybe automation is actually very hard. Maybe it was easy to automate sugar beets, but there are certain cognitive tasks, for example with regard to AI, that are going to be very, very hard to automate. If we never get to full automation, we can still get growth to go up, but we’re never going to get these kinds of singularities in these models, at least in their simplest form. So if you think there are some bottleneck tasks that we can’t automate, then we’re not going to get these labor-free, full-automation singularities. You have to believe that we can truly automate all these things, and of course that’s an open question with AI: how far it can go in goods and services production and in creative, innovative activity.

The second constraint (the latter two constraints in some sense come from the universe itself) concerns the differential equation at the top. If that parameter phi is greater than zero, it will give you a singularity: fully automate idea production and you will get one in finite time. But the question then is really whether we believe that parameter phi is actually larger than zero. What does that say? If phi is greater than zero, then when I increase A, the level of technology in the economy, I make future growth faster. But if phi is less than zero, then when I raise the level of existing technology, I make future growth slower; it takes away that positive feedback loop, and then you don’t get a singularity. And there are good reasons to think that phi might be less than zero. We don’t know, but there are reasons to think it is, because there are only so many good ideas in the universe. We came up with calculus, we came up with the good ones early, and the remaining ones are hard to discover, or there just aren’t that many good ones left. So if you think we’re kind of fishing out the pond: think of AI as changing the fishermen. We get better fishermen on the edge of the pond. But if the pond itself is running out of fish, where the big fish for us are new ideas, it doesn’t matter how good your fishermen are; there’s nothing left in the pond to catch. There are other versions too; I have another AI version called the burden of knowledge. But regardless, there are ideas in the existing economic growth literature about science and innovation that suggest phi may be less than zero, and that’s just going to turn off that singularity.
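The fishing-out case can be checked with the same closed form (again, my toy constants, not the paper’s). With phi less than zero, the solution of A-dot = A to the power 1 + phi grows only polynomially, and the growth rate of knowledge falls as knowledge accumulates, so there is no singularity.

```python
def a_of_t(t, a0=1.0, phi=-0.5):
    """Closed-form solution of Adot = A**(1 + phi), here with phi < 0.

    With phi = -0.5 this reduces to A(t) = (1 + 0.5*t)**2: polynomial
    growth rather than a finite-time explosion.
    """
    return (a0 ** (-phi) - phi * t) ** (-1.0 / phi)

def growth_rate(t, a0=1.0, phi=-0.5):
    """Adot / A = A**phi: the growth rate falls as A rises (fishing out)."""
    return a_of_t(t, a0, phi) ** phi

g_start = growth_rate(0.0)
g_later = growth_rate(10.0)   # lower: ideas have gotten harder to find
```

The sign of phi is exactly the switch the talk describes: the same equation gives an explosion for phi above zero and a steady slowdown for phi below zero.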

And then the third one, which is somewhat related, is that there just might be bottleneck tasks. This comes back to Baumol’s cost disease style reasoning, but at a task level. So, for example, let’s say that GDP is actually a combination of our output across all these tasks, and in the simplest form, let’s say it’s the minimum. So this is a real bottleneck: you’re only as good as your weakest link. It’s a simple version of Baumol’s cost disease. If it’s the min function, it doesn’t matter how good you get at every task; the only thing that matters is how good you are at your worst task. In other words, we might be really, really good at agriculture, but at the end of the day, we’re really bad at something else, and that’s what’s holding us back.
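A two-line toy makes the weakest-link point concrete (task names and numbers are mine, purely illustrative): if GDP is the minimum over task outputs, improving a task you are already good at does nothing, while improving the bottleneck task moves everything.

```python
# Bottleneck ("min") version of Baumol's cost disease, as in the talk.
task_output = {"agriculture": 1000.0, "computing": 10_000.0, "care_work": 12.0}

def gdp(tasks):
    """GDP under the weakest-link aggregator: the worst task binds."""
    return min(tasks.values())

baseline = gdp(task_output)            # set by the worst task, care_work

task_output["computing"] *= 10         # a tenfold gain in computing...
after_computing = gdp(task_output)     # ...leaves GDP unchanged

task_output["care_work"] *= 2          # a modest gain in the bottleneck...
after_bottleneck = gdp(task_output)    # ...doubles GDP
```

This is the sense in which growth is determined by what is essential yet hard to improve, not by what we are best at.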

I think this is actually quite instructive. Think about Moore’s Law: people get so excited about Moore’s Law in computing, and a lot of people who believe in singularities are staring at the Moore’s Law curve. It’s incredibly dramatic, an exponential, rapid, rapid increase in productivity, which is mind-boggling in a way. At the same time, Moore’s Law has been going on for a long time, and if you look at economic growth, we don’t see an acceleration. If anything, we probably see it slowing down. And that suggests that no matter how good you get at computers, there are other things holding us back; for example, it still takes as long to get from one point on a map to another based on available transportation technologies, and that’s not really changing. To go back to the Baumol theme: if things really depend on what is essential but hard to improve, we can take our computing productivity to infinity, literally, and it just doesn’t matter. It’ll help, it’ll make us richer, it’s good. But it won’t fundamentally change our growth prospects unless we can go after the hard problems, the ones that are hard to solve.

To conclude, these are a whole series of models; obviously, we do this at much greater length in the paper, if you’d like to read it. You can put AI in the production of goods and services. If you can’t fully automate, if you just slowly automate, it looks like more of the same; it’s a natural way to go. But if you can get to full automation, where you don’t need labor anymore, you can get a rapid acceleration in growth through what we call a type one singularity. When you put AI in the ideas production function, in the creation of new knowledge, you can get even stronger growth effects. That, in fact, could even lead to one of these true mathematical singularities, as in science fiction. But there are a bunch of reasons in both cases to think that we might be limited: because of automation limits; because of search limits in that creative process, particularly with regard to the knowledge production function; or, more generally, in either setting, because of natural laws. I didn’t say much about it, but the second law of thermodynamics seems like a big constraint on energy efficiency, one that we’re actually pretty close to with current technology. And if energy matters, then that’s going to be a bottleneck, even if we can get other things to skyrocket in terms of productivity. So a theme that Chad and I certainly came to in writing this paper is the interesting idea that ultimately growth seems determined not by what you are good at, but by what is essential yet hard to improve. That’s important for us to keep in mind when we get excited about where we are advancing quickly, and then go back to the aggregate numbers and don’t see much progress.
This is potentially a pretty useful way to frame that and begin to think about it. Maybe we should be doing a lot of thinking about what we’re bad at improving, and why that is, if we really want to understand future growth. Okay, so I went pretty quickly, but hopefully I didn’t spill over too much beyond my time. I look forward to the discussions; thanks to Rachael and Phil in advance. And I look forward to Chad’s comments as well. Thank you.

Anton Korinek  24:27

Thank you, Ben. The timing was perfect. And to all our participants, let me invite you to submit questions through the Q&A field at the bottom of the screen. After all the presentations, we’re going to continue the event with discussions of the points that you are raising. And incidentally, to the speakers: if there are some questions, clarification questions for example, where you can type a quick response, feel free to respond in the Q&A box directly.

Let me now turn it over to Chad. Chad is the STANCO 25 Professor of Economics at the Stanford Graduate School of Business. He is noted for his research on long-run economic growth. In particular, he has examined theoretically and empirically the fundamental sources of growth in incomes over time and the reasons underlying the enormous differences in standards of living across countries. In recent years, he has used his expertise in macroeconomic methods to study the economic causes behind the rise in health spending and top income inequality. He is the author of one of the most popular macroeconomics textbooks, and his research has been published in the top journals of economics. Chad, the floor is yours.

Chad Jones  25:50

Wonderful, thanks very much Anton. It’s really a pleasure to be here. I think Anton did a great job of introducing this session and pairing these two papers together. As he said, a lot of growth theory historically looked back and tried to understand how constant exponential growth could be possible for 100 years. The first paper that Ben presented looked at automation, artificial intelligence, and possibilities for growth rates to rise and even explode. This paper is going to look at the opposite possibility and ask: could there be an end of economic growth? I think all these ideas are worth exploring. And I guess my general perspective is that part of the role of economic theory is to zoom in on particular forces and study them closely. Then, at the end of the day, we can come back and ask: well, how do these different forces play against each other? That’s the spirit of this paper.

So a large number of growth models work this way: basically, people produce ideas, and those ideas are the engine of economic growth. The original papers by Paul Romer, by Aghion and Howitt, and by Grossman and Helpman work this way, as do the semi-endogenous growth models that I’ve worked on, along with Sam Kortum and Paul Segerstrom. Basically, all idea-driven growth models work this way: people produce ideas and ideas drive growth. Now, these models typically assume that population is either constant or growing exponentially, and for historical purposes, that seems like a good assumption. An interesting question to think about, though, is: what does the future hold? From this perspective, I would say that before I started this paper, my view of the future of global population, which I think is the conventional view, was that it was likely to stabilize at 8 or 10 billion people a hundred years from now, or something like that. Interestingly, there was a book published last year by Bricker and Ibbitson called Empty Planet. And this book made a point that, once you see it, is very compelling and interesting. They claim that maybe the future is actually not one where world population stabilizes; maybe the future is one where world population declines, maybe the future is negative population growth. And the evidence for that is remarkably strong, I would say, in that high-income countries already have fertility rates that are below replacement. The total fertility rate is a cross-sectional measure of how many kids women are having on average. And obviously two is a special number here: if women are having more than two kids on average, then populations tend to rise; if women are having fewer than two kids on average, then the population will decline. Maybe it’s 2.1 to take into account mortality, but you get the idea.
The interesting fact highlighted by Bricker and Ibbitson, and well known to demographers, is that fertility rates in many, many countries, especially advanced countries, are already below replacement. The fertility rate in the US is about 1.8; in high-income countries as a whole, 1.7; China, 1.7; Germany, 1.6; Japan, Italy, and Spain even lower, at 1.3 or 1.4. So in many advanced countries, fertility rates are already well below replacement. And if we look historically, we all know this graph qualitatively: fertility rates have been declining. Take India, for example. In the 1950s and 60s, the total fertility rate in India was something like six: women had six kids on average. Then it fell to five, and then to four, and then to three, and the latest numbers in India, I think, are 2.5 or 2.4. The perspective you get from this kind of graph is: well, if we wait another decade or two, even India may have fertility below replacement. Fertility rates have been falling all over the world, and maybe they’re going to end up below two.

So, the question in this paper is: what happens to economic growth if the future of population growth is that it’s negative, rather than zero or positive? The way the paper is structured, it considers this possibility from two perspectives. First, let’s just feed in exogenous negative population growth: assume population growth is negative half a percent per year forever, feed that into the standard models, and then see what happens. And the really surprising thing that happens is you get a result that I call, in honour of the book, the empty planet result. That is, not only does the population vanish with negative population growth, but while the global population is disappearing, living standards stagnate. So this is quite a negative result: living standards stagnate for a vanishing number of people. And it contrasts with the standard result that all the growth models I mentioned earlier have, which I’m now going to call an expanding cosmos result: you get exponential growth in living standards at the same time as the population grows exponentially. So on the one hand, you have this traditional expanding cosmos view of the world. And what this paper identifies is: hey, if these patterns in fertility continue, we may have a completely different kind of result, where instead of living standards growing for a population that itself is growing, maybe living standards stagnate for a population that disappears.

Then the second half of the paper, and I only have a chance to allude to how this works, asks: well, what if you endogenize fertility? What if you endogenize population growth? Do you learn anything else? You can get an equilibrium that features negative population growth; that’s good, we can get something that looks like the world. And the surprising result that comes out of that model concerns the social planner. Ask what’s the best you can do in this world, choosing the allocation that maximizes the utility of everyone in the economy (and with population growth, the question of who “everyone” is, is itself in question). The result is that a planner who prefers this expanding cosmos outcome can actually get trapped in the empty planet outcome. That’s a surprising kind of result; it might seem like it doesn’t make any sense at all, but I’ll try to highlight how it can happen.

I’m going to skip the literature review in the interest of time, I’ve already kind of told you how I’m going to proceed. Basically, what I want to do is look at this negative population growth in the sort of classic Romer framework, and then in a semi endogenous growth framework, and then go to the fertility results.

Let me start off by illustrating this empty planet result in a set of traditional models. So, make one change in traditional models: instead of having positive population growth or zero population growth, have negative population growth and see what happens. That’s the name of the game for the first half of the paper. To do that, let me just remind you what the traditional results are in a really simplified version of the Romer model. I’m sure you all know the model this is based on; Romer won the Nobel Prize in Economics a couple of years ago for this work, so this is a very well-respected, important model in the growth literature. The insight that got Romer the Nobel Prize was the notion that ideas are nonrival. Ideas don’t suffer the same kind of inherent scarcity as a good. If there’s an apple on the table, you can eat it or I can eat it. Apples are scarce, bottles of olive oil are scarce, coal is scarce, a surgeon’s time is scarce. Everything we traditionally study in economics is a scarce factor of production, and economics is the study of how you allocate those scarce factors. But ideas are different. Once we’ve got the fundamental theorem of calculus, one person can use it, a million people can use it, a billion people can use it, and you don’t run out of the fundamental theorem of calculus the same way you’d run out of apples or computers.

And so that means that production is characterized by increasing returns to scale: there are constant returns to objects (here, just people) and increasing returns to objects and ideas taken together. This parameter sigma being positive measures the degree of increasing returns to scale. Then, where do ideas come from? In the Romer model, there’s a basic assumption that each person can produce a constant proportional improvement in productivity, so the growth rate of knowledge is proportional to the number of people. And then the Romer model just assumes that population is constant; this is the assumption I’m going to come back and relax in just a second. If you solve this model, income per person, lowercase y, which is just GDP divided by the number of people, is proportional to the stock of ideas. Each improvement in knowledge raises everyone’s income because of nonrivalry; that’s the deep Romer point. And the growth rate of income per person depends on the growth rate of knowledge, which is proportional to population. So this is a model where you can get constant exponential growth in living standards with a constant population. And if you look at this equation, you realize: well, if there’s population growth in this model, that gives us exploding growth in living standards. We don’t see exploding growth in living standards historically, and we do see population growth, so there’s some tension there. That’s what the semi-endogenous growth models, which I’ll come back to in a second, are designed to fix.
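The slide’s equations aren’t reproduced in the transcript, but in the usual notation the simplified Romer setup being described would read as follows (my reconstruction; theta is a research-productivity constant not named in the talk):

```latex
% Simplified Romer model (reconstruction of the slide's algebra)
Y = A^{\sigma} L \qquad \text{(goods production, increasing returns in } (A, L) \text{ together)}
\frac{\dot{A}}{A} = \theta L \qquad \text{(idea production: people make ideas)}
y \equiv \frac{Y}{L} = A^{\sigma}
\quad\Longrightarrow\quad
\frac{\dot{y}}{y} = \sigma \, \frac{\dot{A}}{A} = \sigma \theta L
```

With constant L this delivers constant exponential growth in living standards; with growing L, the growth rate of y itself rises, which is exactly the exploding-growth tension Chad mentions.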

In the meantime, what I want to do is change the assumption that population is constant, and replace it with an assumption that the population itself is declining at a constant exponential rate. So let eta denote this rate of population decline; think of eta as 1% per year, or half a percent per year, the population falling half a percent per year. Then what happens in this model? Well, if you combine the second and third equations, you get this law of motion for knowledge. And this differential equation is easy to integrate: it says the growth rate of knowledge is itself falling at a constant exponential rate. Not surprisingly, if the growth rate is falling exponentially, then the level is bounded. That’s what happens when you integrate this differential equation: you get the result that the stock of knowledge converges to some finite upper bound A*. And since knowledge converges to a finite upper bound, income per person does as well. You can calculate these as functions of the parameter values, and it’s interesting to do that; I do a little bit of it in the paper. But let me leave it for now by just saying: just by changing the assumption that population was constant, by making population growth negative, you get this empty planet result. Living standards asymptote; they stagnate at some value y* as the population vanishes. That’s the empty planet.
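The integration Chad describes can be written out explicitly (my reconstruction, with illustrative parameter values): with A-dot over A equal to theta times L(t), and L(t) declining exponentially at rate eta, the log of knowledge converges to a finite limit, giving the bound A*.

```python
import math

def knowledge(t, a0=1.0, theta=0.02, l0=1.0, eta=0.005):
    """A(t) when Adot/A = theta * L(t) and L(t) = l0 * exp(-eta * t).

    Integrating: log A(t) = log a0 + (theta*l0/eta) * (1 - exp(-eta*t)),
    so the exponent approaches theta*l0/eta as t grows.
    """
    return a0 * math.exp((theta * l0 / eta) * (1.0 - math.exp(-eta * t)))

def knowledge_bound(a0=1.0, theta=0.02, l0=1.0, eta=0.005):
    """The finite upper bound A* = a0 * exp(theta*l0/eta)."""
    return a0 * math.exp(theta * l0 / eta)

a_star = knowledge_bound()       # exp(4) with these toy parameters
a_after_1000 = knowledge(1000.0) # already within about 3% of A*
```

However long you wait, knowledge never passes A*, and since income per person is a power of A, living standards stagnate too: the empty planet.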

Now let me look at the other class of models, the semi-endogenous growth class. What’s interesting is that, in the presence of positive population growth, the Romer-style models and the semi-endogenous growth models lead to very different results, while with negative population growth, they yield very similar outcomes. So again, let me go through it in the same order as before: let me present the traditional result with positive population growth, and then change that assumption and show you what happens when population growth is negative. Same goods production function; we’re taking advantage of Romer’s nonrivalry here. And I’m making basically one change (if you want, set lambda equal to one; it doesn’t really matter): I’m introducing what Ben described in the earlier paper as the “ideas are getting harder to find” force, the fishing-out force. Beta measures the rate at which ideas are getting harder to find. It says the growth rate of knowledge is proportional to the population, but the more ideas you discover, the harder it is to find the next one; beta measures the degree to which it’s getting harder. So think of beta as some positive number, and then let’s put in population growth at some positive, exogenous rate. Same equation: income per person is proportional to the stock of ideas raised to some power, and the stock of ideas is itself proportional to the number of people. And there’s an interesting finding here, which is: the more people you have, the more ideas you produce, the larger the total stock of knowledge, and therefore the richer the economy is. People correspond to the economy being rich in the long run, by having lots of ideas, not to the economy growing rapidly. That’s the contrast with the earlier models.
And then, if you take this equation and take logs and derivatives of it, it says that the growth rate of income per person depends on the growth rate of knowledge, which in turn depends on the growth rate of people. The growth rate of income per person is proportional to the rate of population growth, where the factor of proportionality is essentially the degree of increasing returns to scale in the economy. So in this model, positive population growth is consistent with constant exponential growth in living standards. This is the expanding cosmos result: we get exponential growth in living standards for a population that itself grows exponentially. Maybe it fills the earth, maybe it fills the solar system, maybe it fills the cosmos; that’s the result of this model taken to its perhaps implausible extreme.
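Again the slide’s algebra isn’t in the transcript; a reconstruction of the balanced-growth-path logic being described (my notation, with theta a research-productivity constant and n the exogenous population growth rate) would be:

```latex
% Semi-endogenous idea production ("ideas are getting harder to find")
\frac{\dot{A}}{A} = \theta \, \frac{L}{A^{\beta}}, \qquad \beta > 0,
\qquad L(t) = L_0 e^{n t}, \quad n > 0
% Balanced growth requires \dot{A}/A constant, i.e. L and A^{\beta}
% growing at the same rate, n = \beta g_A, hence
g_A = \frac{n}{\beta},
\qquad
g_y = \sigma \, g_A = \frac{\sigma}{\beta} \, n
```

The ratio sigma over beta plays the role of the overall degree of increasing returns, which is why income growth here is proportional to population growth rather than to its level.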

Let’s do the same thing. Suppose we change the assumption that population growth is positive to one where population growth is negative, which again, rather remarkably, I would say, looks like the future of the world we live in, based on the evidence I presented earlier. So once again we’ve got this differential equation. You substitute in the negative population growth equation again, and you see that the growth rate of knowledge not only declines exponentially because of this term, it falls even faster than exponentially. So of course the stock of knowledge is still going to be bounded. This is another differential equation that’s really easy to integrate, and you get that, once again, the stock of knowledge is bounded. You can play around with the parameter values and do some calculations; in the interest of time, let me not do that.
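A quick numerical sketch (my own toy parameters and Euler integration, not the paper’s calibration) shows the same boundedness in the semi-endogenous setup: with A-dot over A equal to theta times L over A to the beta, and L shrinking exponentially, knowledge stalls at a finite level.

```python
import math

def knowledge_path(theta=0.05, beta=0.5, eta=0.01, a0=1.0, l0=1.0,
                   dt=0.1, years=5000):
    """Euler integration of Adot = theta * L * A**(1 - beta),
    i.e. Adot/A = theta * L / A**beta, with L(t) = l0 * exp(-eta * t).

    Both the shrinking population and the rising A**beta term push the
    growth rate of knowledge down, so A converges to a finite bound.
    """
    a, l = a0, l0
    for _ in range(int(years / dt)):
        a += theta * l * a ** (1.0 - beta) * dt
        l *= math.exp(-eta * dt)
    return a

a_5k = knowledge_path(years=5000)
a_50k = knowledge_path(years=50_000)
# Running ten times longer barely changes A: knowledge has stalled.
```

So the empty planet result is robust across the two classes of models: it is the sign of population growth, not the details of the idea production function, that does the work.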

Let me instead just summarize. First, as a historical statement, fertility has been trending downward: we went from five kids to four kids to three kids to two kids, and now even fewer in rich countries. An interesting thing about that is, from the microeconomic perspective, from the perspective of the individual family, there’s nothing at all special about having more than two kids or fewer than two kids. It’s an individual family’s decision, and some families decide on three, some families decide on two, one, zero, whatever. There’s nothing magic about above two versus below two from an individual family’s perspective. But the macroeconomics of the problem makes this distinction absolutely critical. Because obviously, if on average women choose to have slightly more than two kids, we get positive population growth, whereas if women decide to have slightly fewer than two kids, we get negative population growth. And what I’ve shown you on the previous four or five slides is that that difference makes all the difference in the world to how we think about growth and living standards in the future. If there’s negative population growth, that could condemn us to this empty planet result, where living standards stagnate as the population disappears, instead of the world we thought we lived in, where living standards were going to keep growing exponentially along with the population. So this relatively small difference matters enormously when you project growth forward. The fascinating thing about it is that, as an empirical matter, we seem much closer to the below-two view of the world than to the above-two view. So maybe this empty planet result is something we should take seriously. That, I would say, is the most important finding of the paper.

Let me go to the second half of the paper, just very briefly; I won’t go through the model in detail. It’s admittedly subtle and complicated, and it took me a long time to understand fully, but I do want to give you the intuition for what’s going on. I write down a model where people choose how many kids to have. And in the equilibrium of this model, the idea part of kids is an externality. We have kids because we love them, and in my simple model, people ignore the fact that their kids might be the next Einstein or Marie Curie or Jennifer Doudna (who I guess now has the Nobel Prize for CRISPR), and that they might create ideas that benefit everyone in the world. Individual families ignore the fact that their kids might be Isaac Newton. The planner, on the other hand, maximizing social welfare, recognizes that having kids creates ideas, and so the planner wants you to have more kids than you and I want to have; there’s an externality in the simple model along those lines. Admittedly, this is a modeling choice. People have been writing down these kinds of fertility models for a while, and there are lots of other forces, and you can get different results. I don’t want to claim this as a general result; rather, I see it as illustrating an important possibility. As I mentioned, the key insight that you get out of studying this endogenous fertility model is that the social planner can get trapped in the empty planet, even a social planner who wants the expanding cosmos, if they’re not careful. I’ll try to say what I mean by “if they’re not careful”. So how to understand that?

In this model, population growth depends on the state variable x, which you can think of as knowledge per person. It’s A to some power divided by N to some power; let me just call it knowledge per person. We can parameterize the model so that in equilibrium, women have fewer than two kids, and so population growth is negative. If population growth is negative, look at what happens to x: I’ve already told you that A converges to some constant, and N is declining, so x is going off to infinity. In the equilibrium, x is rising forever. What about in the optimal allocation, the allocation that maximizes some social welfare function? Well, the planner is going to want us to have kids not only because we love them, but because they produce ideas that raise everyone’s income. The key subtlety here is: suppose we start out in the equilibrium allocation, where x is rising and population growth is negative, and ask, when do we adopt the good policies that raise fertility? The planner wants you to have more kids. Do we adopt the policies that raise fertility immediately? Do we wait a decade? Do we wait 50 years, or 100 years? That’s the “if you’re not sufficiently careful”. The point is, if society waits too long to switch to the optimal rate of fertility, then x is going to keep rising, and the idea value of kids gets small as x rises. Remember, x is knowledge per person: as x rises, we have tons of knowledge for every person in the economy, so the marginal benefit of another piece of knowledge is getting smaller and smaller, and the idea value of kids is getting smaller and smaller. And because we’ve already said that the loving-your-kids force on its own leads to negative population growth, even if you add a positive idea value of kids, the planner might still want negative population growth if you wait too long.
If you wait for the idea value of kids to shrink sufficiently low, then even the planner who, ex ante, preferred the expanding cosmos, gets trapped by the empty planet. So what this says is that it’s not enough to worry about fertility policy, we have to worry about it sooner rather than later. And here’s just a diagram.

I think I’m almost out of time, so let me just conclude. What I take away from this paper is that fertility considerations are likely to be much more important than we thought. This distinction between slightly above two and slightly below two, which from an individual family’s standpoint barely seems to matter, is a big deal from an aggregate, macroeconomic standpoint. It’s the difference between the expanding cosmos and the empty planet. As I mentioned when I started, this is not a prediction; it’s a study of one force. But I think it’s much more likely than I would have thought before I started this project. And there are other possibilities, of course. We’ve talked about one: AI producing ideas so that people aren’t necessary. Important in my production function is that people are a necessary input; you don’t get ideas without having people, and maybe AI can change that. That’s something we should discuss in the open period. There are other forces too: technology may affect fertility and mortality. Maybe we end up reducing the mortality rate to zero, so that even one kid per person is enough to keep the population growing, for example. Maybe evolutionary forces favor groups that have high fertility for some reason; maybe selection favors those genes. So maybe this below-replacement world we look like we’re living in, maybe that’s not going to happen in the long run. But anyway, I think I’m out of time, so let me go ahead and stop there.

Anton Korinek  48:33

Thank you very much, Chad. And let me remind everybody of the Q&A again. Our first discussant of these ideas is Rachael Ngai. Rachael is a professor of economics at the London School of Economics and a research associate at the Centre for Economic Performance, as well as a research affiliate at the Centre for Economic Policy Research. Her interests include macroeconomic topics such as growth and development, structural transformation, and labor markets and housing markets. Rachael, the floor is yours.

Rachael Ngai  49:11

Thank you, Anton. Thank you very much for having me discuss these two very interesting papers. There’s a lot of interesting content in both, but because of time, I will focus on the aspects related to the future of economic growth and the roles played by artificial intelligence and declining population growth. Now, when we talk about artificial intelligence, there are many aspects, political aspects, philosophical aspects, which I will not have time to talk about. Today, I will focus purely on the implications for the future of economic growth.

Okay, so economic growth is about the improvement in living standards. When we think about the fundamental source of growth, as both Ben and Chad point out, it’s technological progress. Technological progress can happen through R&D, or through experience: when we are doing something, we get better at doing it. But the key thing for technological progress is that it requires brain input. So far, for the last 2,000 years or so, the main brain input has been the human brain. We have already mentioned some examples of how research output has improved living standards for mankind over that period. Now, Chad’s paper is very interesting, and it brings up something that is really important. Here is a figure that repeats what Chad has shown us from the United Nations, about how many children women have. As you can see, in high-income countries fertility has already fallen below the replacement ratio, which is about two, and for the world as a whole it is also falling. In fact, the United Nations predicts that in 80 years population growth will be stagnant, that is, zero population growth, and that going forward we will see negative population growth. What Chad has convincingly shown is that, when that happens, we might get the empty planet result: living standards will stagnate and the human race will start to disappear.

And this is really an alarming result. The reason for it is that the private incentive for having children (we love children) does not take into account that children produce ideas that are useful for technological progress. So clearly there’s a role for policy here, which Chad mentioned earlier as well: we could try to introduce policies that stimulate people to have more children. The problem is, if we wait too long, then the empty planet result cannot be avoided. So that is something really, really worrying.

Then we come to Ben’s paper, which gives a tentative scenario. It asks: what if we have the following situation? Suppose we think of the human brain as basically like a machine, so that artificial intelligence can replicate the human brain. In fact, in Chinese, the word for computer translates as “electric brain”. So it is really asking: can the electric brain replace the human brain? If it can, then we can avoid stagnation, which is the empty planet result. And even more, we might be able to move to a technological singularity, where artificial intelligence can self-improve and growth can explode.

Now, I think we are all fairly convinced that the singularity result seems quite implausible, because one simple thing one can say is that many essential activities cannot be done by AI, and because of that, which is sometimes called the Baumol effect, you will not get a situation where growth explodes. So let me focus on whether AI can solve the problem that Chad mentioned, the stagnation result. How plausible is it, really, that AI can completely replace humans in generating technological progress? Meaning that in the R&D production function, we do not need humans anymore, we can just have AI in it. How is that possible?
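The Baumol effect Rachael refers to can be illustrated with a toy calculation (an assumed setup, not from either paper): if tasks are perfect complements and a fixed fraction must still be done by humans, output is capped no matter how productive AI becomes.

```python
# Toy Baumol effect: a unit measure of perfectly complementary tasks.
# A fraction f is automated with productivity A; the rest is done by
# humans with fixed productivity h. Aggregate output is the harmonic
# aggregate of the two, so the human tasks become the bottleneck.

def output(A, f=0.9, h=1.0):
    """Aggregate output with perfectly complementary tasks."""
    return 1.0 / (f / A + (1.0 - f) / h)

print(output(10), output(1_000), output(1_000_000))
# Output rises with A but converges to h / (1 - f) = 10 as A grows.
```

With 90% of tasks automated, output can rise at most tenfold relative to the human benchmark; the non-automated tasks bind, which is why essential activities that AI cannot do rule out explosive growth in this setup.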

So here’s a brief timeline of the development of artificial intelligence, which is quite remarkable; it started in 1950. Over the last 70 years, a lot of progress has been made, a lot of great discoveries. But is it enough? And what should we look for going forward? There’s a report by Stanford University called the Artificial Intelligence Index Report, and there are a few points from it I want to highlight. One is that the human brain itself is still needed to improve AI. Over the last 10 years, 2010 to 2019, published papers about artificial intelligence increased by 300%, and papers posted online before publication increased by 2,000%. So there’s a huge increase in researchers trying to improve AI. At the same time, a lot of students are choosing to study AI at university. So it looks like we still need quite a lot of human brains to pour into making artificial intelligence capable of replacing the human brain. There is progress being made in many areas, but there are a lot of open questions. AI is good at finding patterns in observed data; that is basically how artificial intelligence works with big data. But can it really work like the human brain on intuition and imagination?

Now, on the right-hand side here, I took one example from this annual report: show a video to the machine and ask it to recognize what is going on. When you show a video of some high-activity thing, for example Zumba dancing, the precision rate is very high; the machine picks up the activity very easily. But for other activities, for example the hardest activity shown, drinking coffee, presumably people enjoying their coffee do not make many distinctive movements, so there is no special characteristic for the machine to pick up easily. The precision rate is less than 10%, and there has been very little progress over the last 10 years. My take on this is that it will still take quite a long time for artificial intelligence to completely replace the human brain. And it really matters how long: if the world is going to have a stagnant population in 80 years, do we have enough time to make artificial intelligence replace the human brain? So when you think about future growth, here’s the question: which is less costly and more likely, producing human brains or producing human-like artificial intelligence? Can we humans, with the help of artificial intelligence, actually create an Einstein-like artificial intelligence? To me, I don’t know, it seems quite difficult. On the other hand, going back to Chad Jones’ paper, it says we need policy to increase fertility. But that’s not an easy path on its own. Women today face a trade-off between career concerns and having children, so just giving childcare subsidies and maternity leave, these are costly policies, and most of the time they might not work. Now, when we think about fertility, of course there are lots of theories. Here, I’m just going to focus on a few things.

What is behind this? If you look historically, how could we have very high fertility in the past, like five children per woman? Because there was a big role played by family farms. On the right-hand side, here is some data from the AL showing how the fraction of women working on family farms has been declining over time. Now, family farms are very special: they create demand for children, because children can help on the farm, and they also allow a woman to combine home production and work. But the process of urbanization and structural transformation has come along with the disappearance of family farms. In the modern day, when a woman goes to work, it really means leaving home, making it incompatible to combine home production and work. So look at home production. Here I show a picture of home production time per day and market production time per day, for women and for men; the first bar is women, the second bar is men, and these two bars represent the world. What we see is something really striking: women’s home production and childcare time is triple men’s. For every one hour men do of home production, women do three hours. That kind of picture may give young women especially pause when choosing whether to get married and have children, while women’s education is rising and there is rising concern for gender equality.

So let me conclude with this, on the future of fertility. I hope I have sort of convinced you that artificial intelligence will take some time, but if we don’t change anything, in 80 years population growth will go negative. We need to really think about how we can do something about fertility. Childcare subsidies and maternity leave will not be enough. One possibility that might help women choose to have more children is more scope for outsourcing home production to the market, but that depends on the development of the service economy. Now, of course, social norms are important as well; the social norm around the role of a mother can play a crucial role in a woman’s decision to become a mother. But social norms themselves change over time, and they will respond to technology and policy. So there is some hope: if these things all work, perhaps we can reverse the fertility trend, bringing it above the replacement level before, or together with, artificial intelligence. And that is the hope for the future of growth. Thank you very much.

Anton Korinek  60:59

Thank you very much, Rachael. Our next discussant is Philip Trammell. Phil is an economist at the Global Priorities Institute at the University of Oxford. His research interests lie at the intersection of economic theory and moral philosophy, with a specific focus on the long term. As part of this focus, he is also an expert on long-run growth issues. And incidentally, he has written a recent paper on growth and transformative AI together with me, in which we synthesize the literature related to the theme of today’s webinar. Phil, the floor is yours.

Phil Trammell  62:32

Thank you, Chad, Ben and Rachael. And thank you, Anton, for giving me this chance to see if I can keep up with the Joneses. Some of what I say will overlap with what’s already been said, but hopefully I have something new to say. As Anton said at the beginning, when thinking about growth, economists are typically content to observe, as Kaldor first famously did, that growth has been roughly exponential at 2 to 3% a year since the Industrial Revolution, and so they’ll assume that this will continue, at least over the timescales they care about. Sometimes they do this bluntly, by just stipulating an exogenous growth process going on in the background and then studying something else. But even when constructing endogenous or semi-endogenous growth models, that is, ones that model the inputs to growth explicitly (research and so on), a primary concern of these models is usually to match this stylized description of growth over the past few centuries. For example, the Aghion, Jones and Jones paper that Ben presented is unusually sympathetic to the possibility of a growth regime shift and acceleration. But even so, it focuses less on scenarios in which capital becomes highly substitutable for labor in tech production, ones that overcome that Baumol effect, on the grounds that as long as that phi parameter Ben mentioned is positive, which I think the authors believed at the time, then capital accumulation is enough to generate explosive growth, which is not what we’ve historically observed. And restrictions along these lines appear throughout the growth literature. As a result, alternate growth regimes currently seem to be off most people’s radar. For example, environmental economists have to think about longer timescales than most economists, but they typically just assume exponential growth, or a growth rate that falls to zero over the next few centuries. A recent survey of economists and environmental scientists just asked: when will growth end?
As if that roughly characterized the uncertainty. Of those with an opinion, about half said within this century, and about half said never. No one seems to have filled in a comment saying they thought it would accelerate or anything like that. Plus, when asked why it might end, insufficient fertility wasn’t explicitly listed as a reason, and no one seems to have commented on its absence.

But on a longer timeframe, accelerating growth wouldn’t be ahistorical: the growth rate was far lower before the Industrial Revolution, and before the agricultural revolution it was lower still. So some forecasts on the basis of these longer-run trends have predicted continued acceleration of growth, sometimes in the near future; multiplied by a factor of 20 again, it might be 40% growth a year or something. Furthermore, radically faster growth doesn’t seem deeply theoretically impossible, I don’t think. Lots of systems do grow very quickly. If you put mold in a petri dish, it’ll multiply a lot faster than 2% a year, right?

So more formally, the Ben paper finds that you can get permanent acceleration under this innocent-seeming pair of conditions. First, you need capital that can start doing research without human input, or that can substitute well enough to overcome that Baumol effect. And second, you need phi at least zero; that is, the fishing-out effect must not be too strong. Just to recap, here’s what phi at least zero means. When you have advanced tech, on the one hand it gets easier to advance further, because you have the aid of all the tech you’ve already developed; on the other hand, it gets harder, because you’ve already picked all the low-hanging fruit. Phi less than zero means the second effect wins out. So as you can see, these two conditions are basically a way of formalizing the idea of recursively self-improving AI leading to a singularity, and translating it into the language of economics.
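That pair of conditions can be sketched with toy dynamics (an illustration, not the model in the paper): suppose capital can do research and the research capital stock tracks the technology level, so the idea stock follows A_{t+1} = A_t + c * A_t^(1+phi). Then the sign of phi decides whether growth accelerates or fizzles.

```python
# Toy dynamics for robot researchers: capital proportional to A does the
# research, so A_{t+1} = A_t + c * A_t**(1 + phi). Track the per-period
# growth rate of A to see whether it accelerates or decays.

def growth_rates(phi, periods=200, c=0.01):
    """Return the sequence of per-period growth rates of the idea stock."""
    A, rates = 1.0, []
    for _ in range(periods):
        new = A + c * A ** (1.0 + phi)
        rates.append(new / A - 1.0)  # growth rate this period
        A = new
    return rates

explosive = growth_rates(phi=0.1)   # phi >= 0: growth rate keeps rising
fizzling = growth_rates(phi=-0.5)   # phi < 0: fishing out wins, growth decays
```

With phi at least zero the growth rate itself keeps rising, the discrete-time analogue of a singularity; with phi sufficiently negative the fishing-out effect wins, and in this toy setup the growth rate of ideas falls over time even with self-accumulating robot researchers.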

That’s a great contribution in formalization in its own right, but the really nice thing about it is that it lets us test these requirements of the singularitarian scenario. So, as Ben noted, a recent paper estimates phi to be substantially negative, or, in Chad’s notation, beta to be positive, implying that even reproducing and self-improving robot researchers couldn’t bring about a real singularity, a type one or type two. But they could still bring about a one-time growth rate increase, as long as they can perform all the tasks involved in research.

In any event, this is just one model; there are plenty of others. Anders Sandberg here put together a summary back in 2013 of what people had come up with at the time, and Anton and I did the same more recently to cover the past decade of economists’ engagement with AI. But I think the most significant contribution on this front is the paper that Ben presented. It solidifies my own belief, for whatever little it’s worth, that an AI growth explosion of one kind or another, even just a growth rate increase rather than a singularity, is not inevitable, but not implausible. It’s at least a scenario we should have on our radar.

This is all very valuable for those of us interested in thinking about the range of possibilities for long-run growth. For those of us also interested in trying to shape how the long-run future might go, though, what we especially want to keep an eye out for are opportunities for very long-run path dependence, not just forecasting. In fact, I think almost a general principle for those interested in maximizing their long-term impact would be to look for systems with multiple stable equilibria which have very different levels of social welfare, and where we’re not yet locked into one, and then to look for opportunities to steer toward a good stable equilibrium. So we have to ask ourselves: does the development of AI offer us any opportunities like this? If so, I don’t think the economics literature has yet identified them, actually. As Ben Garfinkel here has pointed out, a philanthropist who saw electric power coming decades in advance might not have found that insight to be decision-relevant; it just doesn’t really help you do good. There could be long-term consequences of the social disruption AI could wreak, or of who first develops AI and, say, takes over the world or something. And most dramatically, if we do something to prevent AI from wiping out the human species, that would certainly be a case of avoiding a very bad and very stable equilibrium. But scenarios like these aren’t really represented in the economics literature on AI.

By contrast, path dependency is a really clear implication of Chad’s paper. We may have this once-and-forever opportunity to steer civilization from the empty planet equilibrium to the expanding cosmos equilibrium, by lobbying for policies that maintain positive population growth and thus maintain a positive incentive to fund research and fertility. To my mind, this is a really important and novel insight, and it would be worth a lot more papers to trace out more fully just under what conditions it holds. But I think it’s pretty robust. The key ingredient is just that if there’s too much tech per person, the social planner can stop finding it worthwhile to pay for further research. For the reasons Chad explained, fertility has proportional consumption costs (to bring about a proportional population increase, people have to give up a certain fraction of their time to have children), but it would no longer produce proportional research increases, because there’s this mountain of ideas you can hardly add much to in proportional terms. So as long as this dynamic holds, you’ll get that pair of equilibria.

So for example, in the model, people’s utility takes this quirky form you see here, where c is average consumption at a time, and n is how many descendants people have alive at a time. But you might wonder: what if people are more utilitarian, what if they’re perhaps number-dampened time-separable utilitarians like this? Well, if their utility function takes this form, as Chad points out in the paper, we actually get the same results. The utility functions are basically just monotonic transformations of one another, so they represent the same preference ordering; that is how you can see that. Likewise, in the model, people generate innovation just by living. This is equivalent to exogenously stipulating that a constant fraction of the population has to work as researchers full time. But what if research has to be funded by the social planner, at the cost of having fewer people working in final goods production and thus lower consumption? Well, then, at least if my own scratch work is right, we still have our two stable equilibria, and in fact, in this case, the bad one stagnates even more fully. Research can zero out even though not everyone has died off, because it’s just not worth allocating any of the population to research as opposed to final goods production.

Finally, sort of like Rachael is saying, I think there’s an important interaction between the models. If we’re headed for the empty planet equilibrium, the technology level plateaus. But the plateau level can depend on policy decisions at the margin, right, like research funding or just a little bit more fertility, even if it doesn’t break us out of equilibrium. And the empty planet result doesn’t hold if capital can accumulate costlessly and do the research for us. So maybe all that matters is just making sure we make it over the AI threshold and letting the AI take it from there. All right.

Well, to wrap up: if we care about the long run, we should consider a wider spectrum of ways long-run growth might unfold, not just those matching the Kaldor facts of the last few centuries. If we care about influencing the long run, we should also look for those rare pivotal opportunities to change which scenario plays out. To simplify a lot, the Ben paper helps us with the former, showing how a growth singularity via AI may or may not be compatible with reasonable economic modeling. And the Chad paper helps us with the latter, showing a counterintuitive channel through which we could get locked into a low-growth equilibrium, sort of ironically via excessive tech per person, and a policy channel that could avert it. He focuses on fertility subsidies; destroying technological ideas would do the trick too, because it would shrink the number of ideas per person, but hopefully the future of civilization doesn’t ultimately depend on longtermists taking to book burning. And yeah, hopefully all this paves the way for future research on how we can reach an expanding cosmos. Thank you.

Anton Korinek  76:19

Thank you Phil, and thank you all for your contributions, and to everyone who has posted so many interesting questions in our Q&A. Now, luckily, many of them have already been answered in writing, because we are at the end of our allocated time. So let me perhaps just let both of our speakers have 30 seconds to give us a quick reaction to the discussion. Ben, would you like to go first?

Ben Jones  76:51

Sure, I will. Thanks, everyone, for all the great questions in the Q&A, and thanks, Rachael and Phil, for very interesting discussions of the pair of these papers. I think the distinction of whether you can automate the ideas production function or not, and what we believe about that, determines which of these very different trajectories we end up on; I think it’s a super interesting question for research. I guess one last comment. The singularity-type people tell a story something like: you get a computer algorithm as good as or better than a human, and because you can have huge increasing returns to scale from the invention of that algorithm, that AI, you can keep repeating it over and over again as instantiations on computing equipment, and then you get essentially infinite, or very rapidly growing, input into the idea production function. That’s where you get this really, really strong singularity; I think that’s a more micro statement of what’s going on. But the point that Chad and I are making, another way to think about it, is that you’re not going to repeat the human. Think of research, just like production, as a whole set of different tasks: we had a slide rule, and then we had a computer; we’ve got centrifuges; we’ve got automated pipetting. Probably what’s going to happen is we’re going to slowly continue to automate some of those tasks. And the more you automate, the more you leverage the people who are left, because you can throw capital at the automated tasks. That still doesn’t necessarily get you to singularities, but it’s potentially the way past the point Chad is making.
And I think it’s really interesting. I think this work collectively helps us really think about where the rubber hits the road, in terms of what we have to believe and where the action will be, in terms of the long-run outcomes.

Anton Korinek  78:36

Thank you Ben. Chad?

Chad Jones  78:38

Yeah, so let me thank Phil and Rachael for excellent discussions; those were really informative. I think the one thing I took away from your discussions and from pairing these two papers together is the point that you both identified, so I’ll just repeat it, I think it’s important. An interesting question is: does the AI revolution come soon enough to avoid the empty planet? When you put these papers together, that’s the thing that jumps out at you the most. And as Phil mentioned, and Ben was just referring to, small improvements can help you get there, and so maybe it’s possible to leverage our way into that, but it’s by no means obvious. As was pointed out, if you’ve got this fixed pool of ideas, then AI improves the fishers but doesn’t change the pool. So I think a lot of these questions deserve a lot more research. Anton, thanks for putting this session together. It was really great and very helpful.

Anton Korinek  79:34

Thank you, everyone, for joining us today and I hope to see you again soon at one of our future webinars on the governance and economics of AI. Bye.




21May

GovAI Annual Report 2019 | GovAI Blog


The governance of AI is in my view the most important global issue of the coming decades. 2019 saw many developments in AI governance. It is heartening to see how rapidly this field is growing, and exciting to be part of that growth.

This report provides a summary of our activities in 2019.

We now have a core team of 7 researchers and a network of 16 research affiliates and collaborators. This year we published a major report, nine academic publications, four op-eds, and our first DPhil (Oxfordese for PhD) thesis and graduate! Our work covered many topics:

  • US public opinion about AI
  • The offense defense balance of AI and scientific publishing
  • Export controls
  • AI standards
  • The technology life-cycle of AI and domestic politics
  • A proposal for how to distribute the long-term benefits from AI for the common good
  • The social implications of increased data efficiency
  • And others…

This, however, just scratches the surface of the problem, and we are excited about growing our team and ambitions to make better progress. We are fortunate in this respect to have received financial support from, among others, the Future of Life Institute, the Ethics and Governance of AI Initiative, and especially the Open Philanthropy Project. We are also fortunate to be part of the Future of Humanity Institute, which is dense with good ideas, brilliant people, and a truly long-term perspective. The University of Oxford has similarly been a rich intellectual environment, with increasingly productive connections with the Department of Politics and International Relations, the Department of Computer Science, and the new AI Ethics Institute.

As part of our growth ambitions for the field and GovAI, we are always looking to help new talent get into the field of AI governance, be that through our Governance of AI Fellowship, hiring researchers, finding collaborators, or hosting senior visitors. If you’re interested, visit www.governance.ai for updates on our latest opportunities, or consider reaching out to Markus Anderljung (ma***************@ph********.uk).

We look forward to seeing what we can all achieve in 2020.

Allan Dafoe
Director, Centre for the Governance of AI
Associate Professor and Senior Research Fellow
Future of Humanity Institute, University of Oxford

Research

Research from previous years available here.

Major Reports and Academic Publications
  • US Public Opinion on Artificial Intelligence by Baobao Zhang and Allan Dafoe. In the report, we present the results from an extensive look at the American public’s attitudes toward AI and AI governance. We surveyed 2,000 Americans with the help of YouGov. As the study of public opinion toward AI is relatively new, we aimed for breadth over depth, with our questions touching on: workplace automation; attitudes regarding international cooperation; the public’s trust in various actors to develop and regulate AI; views about the importance and likely impact of different AI governance challenges; and historical and cross-national trends in public opinion regarding AI. Featured in Bloomberg, Vox, Axios and the MIT Technology Review.
  • How Does the Offense-Defense Balance Scale? in Journal of Strategic Studies by Ben Garfinkel and Allan Dafoe. The article asks how the offense-defense balance scales, meaning how it changes as investments into a conflict increase. Simple models of ground invasions and cyberattacks that exploit software vulnerabilities suggest that, in both cases, growth in investments will favor offense when investment levels are sufficiently low and favor defense when they are sufficiently high. We refer to this phenomenon as offensive-then-defensive scaling or OD-scaling. Such scaling effects may help us understand the security implications of applications of artificial intelligence that in essence scale up existing capabilities.
  • The Interests behind China’s Artificial Intelligence Dream by Jeffrey Ding in the edited volume “Artificial Intelligence, China, Russia and the Global Order”, published by Air University Press. This high-level overview of China’s AI dream places China’s AI strategy in the context of its past science and technology plans, outlines how AI development intersects with multiple areas of China’s national interests, and discusses the main barriers to China realizing its AI dream.
  • Jade Leung completed her DPhil thesis Who Will Govern Artificial Intelligence? Learning from the history of strategic politics in emerging technologies, which looks at how the control over previous strategic general purpose technologies – aerospace technology, biotechnology, and cryptography – changed over the technology’s lifecycle, and what this might teach us about how the control over AI will shift over time.
  • The Vulnerable World Hypothesis in Global Policy by Nick Bostrom. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the ‘semi‐anarchic default condition’. It was originally published as a working paper in 2018.

A number of our papers were accepted to the AAAI AIES conference (which in the discipline of computer science is a standard form of publishing), taking place in February 2020:

  • The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse of the Technology? by Toby Shevlane and Allan Dafoe. The existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this article argues that the same cannot be assumed for AI research. It provides a theoretical framework for thinking about the offense-defense balance of scientific knowledge.
  • The Windfall Clause: Distributing the Benefits of AI for the Common Good by Cullen O’Keefe, Peter Cihon, Carrick Flynn, Ben Garfinkel, Jade Leung and Allan Dafoe. The windfall clause is a policy proposal to devise a mechanism for AI developers to make ex-ante commitments to distribute a substantial part of profits back to the global commons if they were to capture an extremely large part of the global economy via developing transformative AI.
  • U.S. Public Opinion on the Governance of Artificial Intelligence by Baobao Zhang and Allan Dafoe. In the report, we present the results from an extensive survey into 2,000 Americans’ attitudes toward AI and AI governance. The results are available in full here.
  • Near term versus long term AI risk framings by Carina Prunkl and Jess Whittlestone (CSER/CFI). This article considers the extent to which there is a tension between focusing on the near and long term AI risks.
  • Should Artificial Intelligence Governance be Centralised? Design Lessons from History by Peter Cihon, Matthijs Maas and Luke Kemp (CSER). There is a need for urgent debate over how the international governance of artificial intelligence should be organised: can it remain fragmented, or is there a need for a central international organisation? This paper draws on the history of other international regimes to identify advantages and disadvantages involved in centralising AI governance.
  • Social and Governance Implications of Improved Data Efficiency by Aaron Tucker, Markus Anderljung, and Allan Dafoe. Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the social-economic impact of increased data efficiency on e.g. market concentration, malicious use, privacy, and robustness.
Op-Eds & other Public Work
  • Artificial Intelligence, Foresight, and the Offense-Defense Balance, War on the Rocks, by Ben Garfinkel and Allan Dafoe. AI may cause significant changes to the offense-defense balance in warfare. Changes that essentially scale up existing capabilities are likely to be much easier to analyze than changes that introduce fundamentally new capabilities, and substantial insight into the impacts of AI can be achieved by focusing on this kind of quantitative change. The article summarises the work of How Does the Offense-Defense Balance Scale? in the Journal of Strategic Studies by the same authors.
  • Thinking about Risks from Artificial Intelligence: Accidents, Misuse and Structure, Lawfare by Remco Zwetsloot and Allan Dafoe. Dividing AI risks into misuse risks and accident risks has become a prevailing approach in the AI safety field. This piece argues that a third, perhaps more important, source of risk should be considered: structural risks. AI could shift political, social and economic structures in a direction that puts pressure on decision-makers — even well-intentioned and competent ones — to make costly or risky choices. Conversely, existing political, social and economic structures are important causes of risks from AI, including risks that might look initially like straightforward cases of accidents or misuse.
  • Public Opinion Lessons for AI Regulation Brookings Report by Baobao Zhang. An overwhelming majority of the American public believes that artificial intelligence (AI) should be carefully managed. Nevertheless, the public does not agree on the proper regulation of AI applications, as illustrated by the three case studies in this report: facial recognition technology used by law enforcement, algorithms used by social media platforms, and lethal autonomous weapons.  
  • Export Controls in the Age of AI in War on the Rocks by Jade Leung, Allan Dafoe, and Sophie-Charlotte Fischer. Some US policymakers have expressed interest in using export controls as a way to maintain a US lead in AI development. History, this piece argues, suggests that export controls, if not wielded carefully, are a poor tool for today’s emerging dual-use technologies such as AI. At best, they are one tool in the policymakers’ toolbox, and a niche one at that.
  • GovAI (primarily Peter Cihon) led on a joint submission with the Center for Long Term Cybersecurity (UC Berkeley), the Future of Life Institute, and the Leverhulme Centre for the Future of Intelligence (Cambridge) in response to the US government’s RFI Federal Engagement in Artificial Intelligence Standards.
  • A Politically Neutral Hub for Basic AI Research by Sophie-Charlotte Fischer. This piece argues that a politically neutral hub for basic AI research, committed to the responsible, inclusive, and peaceful development and use of new technologies, should be set up.
  • Ben Garfinkel has been doing research on AI risk arguments, exemplified in his Reinterpreting AI and Compute, a number of internal documents (many of which are shared with OPP), his EAG London talk, and an upcoming interview on the 80,000 Hours Podcast.
  • ChinAI Newsletter. Jeff Ding continues to produce the ChinAI newsletter, which now has over 6,000 subscribers.
Technical Reports Published on our Website
  • Stable Agreements in Turbulent Times: A Legal Toolkit for Constrained Temporal Decision Transmission by Cullen O’Keefe. Much of AI governance research focuses on the question of how we can make agreements or commitments now that have a positive impact during or after a transition to a world of advanced or transformative artificial intelligence. However, such a transition may produce significant turbulence, potentially rendering the pre-transition agreement ineffectual or even harmful. This Technical Report proposes some tools from legal theory to design agreements where such turbulence is expected.
  • Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development by Peter Cihon. AI standards work is ongoing at ISO and IEEE, two leading standards bodies. But these ongoing standards efforts risk not addressing policy objectives, such as a culture of responsible deployment and use of safety specifications in fundamental research. Furthermore, leading AI research organizations that share concerns about such policy objectives are conspicuously absent from ongoing standardization efforts. This Technical Report summarises ongoing efforts in producing standards for AI, what their effects might be, and makes recommendations for the AI Governance / Strategy community.
Select publications by our Research Affiliates

Public engagement

Many more of our public appearances (e.g. talks, podcasts, interviews) can be found here. Below is a subset:

Team and Growth

The team has grown substantially. In 2019, we welcomed Toby Shevlane as a Researcher, Ben Garfinkel and Remco Zwetsloot as DPhil scholars, Hiski Haukkala as a policy expert, Ulrike Franke and Brian Tse as Policy Affiliates, and Carina Prunkl, Max Daniel, and Andrew Trask as Research Affiliates. 2019 also saw the launch of our GovAI Fellowship, which received over 250 applications and welcomed 5 Fellows in the summer. We will continue to run the Fellowship in 2020, with a Spring and a Summer cohort.

We continue to receive a lot of applications and expressions of interest from researchers across the world who are eager to join our team. In 2020, we plan to continue our GovAI Fellowship programme, engaging with PhD researchers particularly in Oxford, and hiring additional researchers.




21 May

Carles Boix and Sir Tim Besley on Democratic Capitalism at the Crossroads


Carles Boix is the Robert Garrett Professor of Politics and Public Affairs in the Department of Politics and the Woodrow Wilson School of Public and International Affairs at Princeton University. In 2015, he published Political Order and Inequality, followed in 2019 by Democratic Capitalism at the Crossroads: Technological Change and the Future of Politics, the subject of our webinar.

Sir Tim Besley is School Professor of Economics and Political Science and W. Arthur Lewis Professor of Development Economics in the Department of Economics at LSE. He is also a member of the National Infrastructure Commission and was President of the Econometric Society in 2018. He is a Fellow of the Econometric Society and of the British Academy, and a Foreign Honorary Member of the American Economic Association and the American Academy of Arts and Sciences. In 2016 he published Contemporary Issues in Development Economics.

You can watch a recording of the event here or read the transcript below:

Allan Dafoe:

Okay, welcome everyone. I hope you can all hear and see us. Today we have the privilege of having Carles Boix and Sir Tim Besley talk to us about insights from Carles’s recent book, Democratic Capitalism at the Crossroads: Technological Change and the Future of Politics. Carles is the Robert Garrett Professor of Politics and Public Affairs in the Department of Politics and the Woodrow Wilson School of Public and International Affairs at Princeton University. He has published important work in political economy and comparative politics, particularly around the role of institutions in shaping economic growth and inequality. Carles’s work has been especially impactful on my career choice. I’m happy to share that in 2003, when I was considering what field to move into (economics, political science, sociology), I read Carles’s book Democracy and Redistribution, and I was at the time, and remain, deeply impressed by the sweep and importance of its argument, but also by its theoretical parsimony and breadth of empirical support; I highly recommend it. This book, in fact, led me to see political science as a discipline compatible with asking big, important questions such as, as will be discussed today, the impact of advances in AI on political institutions and inequality.

Following Carles’s talk, Sir Tim Besley will offer some reflections on the theme. Tim is the School Professor of Economics and Political Science and W. Arthur Lewis Professor of Development Economics in the Department of Economics at LSE. He is also a member of the National Infrastructure Commission and was President of the Econometric Society in 2018. Tim has published in top journals in economics, and, I’m happy to see, also in top journals in political science, on topics spanning political violence, state capacity, economic development and their interactions.

Following this, we will have a 30-minute discussion. At the bottom of the screen you can see “Ask a question”; please do click on that and articulate your questions and comments, and also vote up those questions you think most worth discussing. We may not get to many, or any, of them, but I will certainly try to look at them and incorporate them into the conversation. Okay, so with that out of the way, it’s, again, our pleasure to have Carles introduce some of the themes and insights from his book.

Carles Boix:

Okay, so, thank you so much for inviting me and for this very kind presentation, and thank you to Tim for being willing to discuss the book, and of course to all those attending. I think it’s going to be a bit difficult, like talking to a dark room; normally we are used to seeing what the audience thinks, at least when we give seminars or talks, so I’ll try my best. I hope that the future is not like this, that this is just a short shock to all of us and we can reconvene in a normal way. So I’m going to use some slides; let me look for them.

So basically, this is a book that I published last year, and it’s driven by a double motivation. The first is what I think is a long-term intellectual question, which is the compatibility of democracy and capitalism. This is a long-running question that in the 19th century was answered in a very pessimistic way, both by the right and the left. Marx thought democracy and capitalism were incompatible: having one person, one vote could not go together with the protection of what he called the interests of the bourgeoisie. But it was also answered in a pessimistic way by conservatives and, in fact, by liberals like John Stuart Mill, who made the point that they would only be compatible provided the population was educated to a high level. Then, after these periods of pessimism, what we see in the 20th century, at least in its second half, is a period of optimism: the coining of the term democratic capitalism, the possibility of having representative democracy, free (though regulated) markets, and a kind of strong welfare state. And today what we see is a moment of questioning of the compatibility of these two things. That’s where the book comes in, and this links the long-term intellectual question with today’s politics, which is the second motivation, and perhaps what has driven most of you to attend this talk. The politics of today, at least in the advanced world, is one where we see a lot of mistrust towards politicians, growing abstention, polarisation, and the rise of what many call populist parties.

This ideational change that I’m talking about, during the 19th century, the 20th century and today, also comes in parallel with economic change. So what I’m showing here, over time, for a few countries for which we have some data (the US, Britain and Japan), is the evolution of their level of income inequality measured by the Gini index. And what we see is high levels of inequality in the 19th century, a sharp decline by the middle, or the end of the first third, of the 20th century, and then, approximately around 1970 or 1980, growing inequality in those countries. So, three periods. My answer in the book is that, in a way, this is driven by technological change, which then has consequences for the labour market and for politics. These technological changes in industrial capitalism, generated by a search for more productivity, an increase in efficiency, lead to a process of capital-labour substitution that results in what I would call different production models, especially with regard to what kind of labour is complementary to capital. So, in the book I distinguish and discuss three moments, which are not precise but which roughly dominate each century. First, what I call the Manchester model, where basically what we see is a process of mechanisation, at least of some industries, the paradigm being the textile industry: the use of mostly unskilled labour and the decline of demand for artisans, which then coincides with, or leads to (there is discussion of this in the literature), declining wages and a more unequal distribution of income.

By the beginning of the 20th century, or the end of the 19th, a set of technological changes, such as electricity, interchangeable parts that can be used in many different industrial processes, and production systems with a sequential layout (literally, the assembly line in Detroit, or batch-production machines, for example tobacco machines), result in the decline of very unskilled labour, with semi-skilled labour becoming complementary. These transformations in the labour market, I claim in the book, are associated with growing wages for everyone, with the distribution of income equalising, and with the formation of a class that in the British sociological literature was labelled the affluent worker: an affluent working class. Then by the 1970s and 80s a set of changes, mostly increasing computational capacity and a decline in computing costs, leads to a new transformation that I centre on Silicon Valley.

So, everything in the book moves west: from Manchester to Detroit, from Detroit to Silicon Valley. All these changes in computation lead to the automation of basically routine jobs, and in a way also foster a new globalisation, a real globalisation, that brings a set of countries that were in the periphery of capitalism (China, the East Asian countries) into the world economy, with important effects on wages in Europe and North America. The complementary labour at this stage is high-skilled labour. And, as authors such as David Autor have shown, there is a hollowing out of the labour market, a polarisation of the labour market, a decline in what were considered middle-class jobs, good jobs, in the past. So, growing inequality.

These changes have an impact on politics. In the 19th century, the incompatibility of democracy and capitalism led to very restrictive suffrage regimes. In the 20th century, what we see, after a period of strife, World Wars One and Two and the crisis of the 30s, is a process of democratic consolidation, what sociologists and political scientists called the end-of-ideology period, and pivotal politics, with parties competing for the median voter. This has in a way been replaced today by conflictual politics, by polarisation, and by what some call a possible real crisis of democracy and democratic backsliding. I’m going to focus the rest of the talk on this political side of the story, which occupies the last two chapters of the book.

Let me say that when we look at democracy, at least in the advanced countries, which are basically what I pay attention to in the book (with some references to developing countries at the end of the book, and at the end of this talk), what we observe is a process of depolarisation. Here I show how left or centre-left and right or centre-right parties, on average, were located on the ideological left-right scale, using party manifestos, for which we have information available since at least the beginning of the Cold War period. What we see is a process, as I said, of depolarisation: both lines converge towards more moderate positions, both the centre-right parties, here in blue, and the social democratic parties, in red. They become very similar, or relatively similar, on average by the late 1980s. But what is also very interesting about this graph is that, after that process of convergence, they remain basically stable and close to each other.

This comes in contrast with a process of political disaffection at the mass level. Here what I show is the proportion of people who agree that “politicians care about what people like me think”, and what we see is a decline of trust in politicians. The longest series is the US series, represented in black, for which we have data going back to the mid-1960s. At that time, about 60% of Americans thought that politicians cared about what people like them thought. There was a decline that coincided with the Vietnam War, a slight increase in the 1980s, and then a big fall in the 1990s, with a very short spike related to the Twin Towers attacks at the beginning of the 2000s. Today, only 20% of Americans think that politicians care about what they think. The same process happens in Germany, in blue, in France, in red, and in Britain, already at low levels in the 1970s: now only 10% of British voters think that politicians care about them, or at least this was the case in 2014. It’s not the same everywhere. Here I show a case where the trend goes in just the opposite direction: Finland, where trust is at about 40%, much higher than in any of the big countries. There are other small countries, like the Netherlands, that had similar levels of trust, at least in 2015.

This process of political disaffection is correlated with, or comes with, growing abstention. Here I show in green the levels of abstention in the US, excluding the South, which was a different case: abstention there was extremely high as a result of the exclusion of African Americans. Western Europe is in black. What we see is that abstention till the early 1970s was at 15%, extremely low. At that time, the study of abstention in the scholarly literature was a non-issue; it’s very hard to find anyone who was interested in the topic. But today, abstention in Western Europe stands at about 33%, not that different from the US on the East Coast and in the West.

Abstention is concentrated in particular sectors. Here I show three countries: Finland, for which we have real data, because they have a registration system that allows us to track all individuals, and surveys from France and the United Kingdom. I show the rate of abstention divided by cohorts (young, middle and senior) and then by income quintile: the top quintile, the middle quintile, and the bottom quintile. What we see, basically, is an extremely high level of abstention among young people, but also among those at low income levels. So the groups that have experienced economic shocks, and the young, who in some economies are more excluded from labour markets, are somehow turning out much less.

When we put together the stability of party platforms I was showing before and this kind of growing alienation of the population, what we get is a puzzle: why, given the degree of dissatisfaction that was building up among voters for a long time, did European mainstream parties, the centre-right and centre-left parties in favour of the consensus of democratic capitalism, not react? In part they did, but not as much as one might expect. And it’s surprising, because when we look at support among the electorate, what we see is that, starting in the 1970s, the percentage of support for centre-right and centre-left parties, which had been at about 70% since the end of World War Two, started to decline. Today they together have the support of about 45% of the electorate, so it’s a big decline, and it corresponds with increasing abstention, of course. But the question is: why didn’t these parties respond?

And I think that there are several explanations for that. It could be that fiscal constraints, such as a growing population of pensioners, and globalisation constrained mainstream parties from acting more strongly in response to this decline in trust. But I think part of it was also a question of electoral incentives. When we look at the proportion of votes that mainstream parties got in elections, basically until the crisis of 2007-2008 their level of support was very stable, at around 80%, and it has only been in the last 10 to 15 years that, as a proportion of voters, they have started to bleed votes. This has led to a changed political landscape.

So, here in a very stylized way I represent how we may want to think about electoral politics during what I would call Detroit capitalism, the golden age of capitalism, or, if you will, the post-war period. The graph has two dimensions. One is about compensation, so basically taxes, from low to high. And then there is a vertical dimension, which would be globalism (trade, immigration), from being very much against globalism, at zero, to being very much in favour of globalisation. I then represent the location of voters: in blue, more or less the location of middle-class voters, and in red the location of working-class voters. These circles are not indifference curves; they just represent, more or less, the bulk of the middle classes and the working classes at that time. And then two parties, right and left, located close to the, let’s say, median voter. Here, clearly, the preferences of voters are only different, or heterogeneous, along the compensation dimension. Globalisation at that time was not that important (again, I’m talking about advanced democracies), and it was not important because there was globalisation only in the sense of high integration among advanced countries: the developing countries were not really competing with industrial workers in the advanced countries. This has changed progressively. There has been a process whereby the middle classes have become more heterogeneous, with some of those voters moving towards embracing globalisation even more, and some less, and the working class, affected by all the transformations of automation and globalisation, moving towards a position that is protectionist, if you will. What I’m talking about here is preferences; in a way, these preferences have been framed, or the narratives structured, by politicians themselves.

And so in this new world, what we see is the growth of an alternative: instead of the old right and left of the past, we see the growth of this thing called populism, a term that I refuse to use without quote marks, because it’s a difficult term to define; I would rather talk about these as anti-globalisation or nationalist movements. In response to this movement that insists on the second dimension, trade and immigration, the response, at least from the left, to get many of its voters back, seems to be a more polarised position. So that would be a movement from L to L prime; if we think about the US, Trump versus Sanders, or something like that. So that’s how these economic transformations have transformed the politics of the advanced world. And when we look at some data on the vote for populist parties, here I show the proportion of people voting for populist parties in the mid-2000s, divided by income quintiles. Basically, in the bottom quintiles, the proportion of voters voting for populist parties more than doubles, sometimes triples, the vote for populist parties among those in the top income quintile. Of course, this is mediated by electoral institutions: in proportional representation systems, populist parties have had a much easier life in terms of getting the vote and becoming a viable party. In countries like the UK, with a majoritarian system, parties like UKIP didn’t do well in elections, but of course everything exploded around the Brexit referendum.

So, let me now finish by talking a bit about what’s next, which is the discussion that is all over the place. Here what we find is a division between techno-optimists and what we may want to call techno-pessimists. This is not a new discussion; if we go back to the 19th and 20th centuries, we find both positions there. Keynes, as is well known from his essay Economic Possibilities for our Grandchildren, has an optimistic position about the effects of technology: he considered that, working three hours a day, we would probably have enough, and we would be able to do many other things, such as what Marx thought communism would bring, you know, hunting in the morning and fishing in the afternoon, or the other way around. We also have the pessimistic Marx, at least about capitalism, who thought that at the end of the day technological change would lead to monopolistic capital, to the immiseration of the working class, and to columns of workers going down into Manhattan and Silicon Valley, I suppose, to burn everything.

So what is my position? Well, really, I think we do not know the future, right? Here I’m showing a graph that I find, I don’t know if funny, but at least entertaining and revealing. This is data from a paper by two authors, Armstrong and Sotala, who looked at papers and books with predictions about when artificial intelligence may replace human activity. What I plot here (in fact, I use their plot) is, on the horizontal axis, when the paper was published, and on the vertical axis, the year it predicted AI would replace human activity. And what we see is predictions all over the place, from, at the earliest, 2020 (so papers published in 2010 thought that we would already be replaced) to some that put the date at the end of this century. In fact, there are some authors, like the father of the singularity movement, Kurzweil, who have made predictions for different years over time. So if we do not know what may happen, the only thing I think we can do is to consider different scenarios, as they are defined by a few parameters.

That’s what I do at the end of the book. I consider four things that I think will determine where we are going to go. Three of them are mostly economic, if you will, and one is political. Those parameters are labour demand, the supply of labour, how concentrated capital may be, and the political responses to all this, in the North but also in the South.

I will spend my last few minutes on political responses. My assumption throughout the discussion concerns what has changed since the 19th century. If you remember, the first graph I showed was about the evolution of inequality: from high, to declining, and now, again, growing. What differs today is that in the 19th century even the most advanced countries, the industrial countries, were poor in comparison with what we are now. A country like the US had a per capita income in 1870 of around $2,500, in constant 1990 dollars; today, per capita income is about $30,000. This change has to have an important impact in terms of the room for manoeuvre we have to do many things that we could not have done two centuries, or 150 years, ago.

So what may be the impact of these technological changes? As I said at the beginning, there has been a lot of talk about the crisis of democracy and about democratic backsliding. When we look at the data, what we find, in terms of the number of democratic breakdowns from 1800 till now, is that there were basically no democratic breakdowns in the 19th century, because there were no democracies. Then we see a lot of democratic breakdowns in the 1920s and 30s, and then again during the 20th century. But when we look at today, and given that there are many more democracies, what we see, at least till 2015, which is the last year for which I have reliable data, is that democratic breakdowns have not gone up. So that’s an optimistic piece of information. What about the probability of democratic breakdown? Here, what I show is the probability that any democracy today breaks down, for different levels of income. The horizontal axis is in thousands of dollars, so 5,000, 10,000, 15,000, 20,000, and the probability of democratic breakdown is calculated as the number of democratic breakdowns over the number of democracies at each level of income, from all the data we have for the last 200 years.

And for poor countries, the probability of democratic breakdown in any given year is about 6%. For very rich countries, it’s basically about zero percent. So, in principle, when we look at economic development, more developed countries are more likely to be democratic, or at least, for those that are democratic, not to break down.

Many different explanations have been given. It could be that economic development, as a result of the declining marginal utility of income, makes the rich indifferent to the distributional mechanisms that come with democratic voting. It could be that economic development is correlated with a set of ideational commitments, such as toleration and so on, that then lead to more willingness to have democracy. Or, and that’s the explanation that I prefer, without excluding the others but, I think, more important: development has led to a different structure of the economy, which has led, at least till now, to more equality, which has eased social conflict and allowed democracy to flourish.

So basically, this argument, in quick form, runs like this. In the old times, before industrialization, most countries were governed by monarchy and nobility, what some would also refer to as stationary bandits. As a result of the way they governed, the distribution of wealth was unequal, and in that context democracy was impossible: those elites would block the introduction of representational mechanisms. Then, with industrialization, a new system appears, what we may want to call open economies, with free markets and with technologies that equalised growth: 20th-century capitalism. Here we look at the distribution of cases in terms of [inaudible] country-years, for those cases for which we have information, with the vertical axis being the Gini index and the horizontal axis being income.

In the 20th century, what we see is a mass of countries where, among poor countries, there is a lot of heterogeneity in terms of levels of inequality, but at high levels of development we see much lower levels of inequality on average. That is the story, I think, of the 19th century and especially of late 20th-century capitalism. And so of course the question, which I think is in the back of our minds, is: what if these open economies, industrial capitalism as we know it, free markets, generate increasingly unequal economies? What are the chances of democracy in this context of, yes, much higher levels of prosperity, but also much more inequality? Now, I think it’s difficult to know what will happen. It may well be that this inequality leads to elite capture, to democratic backsliding and to mass resentment, basically undermining the foundations of democracy. On the other hand, and here I want to be optimistic, or at least sound optimistic: provided that the allocation of returns is thought to be fair, so markets function in a way that does not give advantage to those that already have more, and given our already high income levels, it may result in democratic stability.

What do I mean by that? Well, we know that in lab settings, participants generally allocate resources in equal shares, but we also know that they distribute them unequally when, in these lab settings, people are engaged in tasks that require effort. We also know from observational data, from surveys, that inequality is unacceptable to people when it is at high levels, but that some inequality is acceptable. So if we maintain a system that is fair, then democracy, even if there is inequality, may well survive. And I think that in the past, democracy was impossible, meaning before the 19th century or before the 20th century, because besides most of the population being poor, the distribution of assets was extremely unfair. The second part of it is that because we have high incomes, this should allow us to develop institutions to respond to shocks, to compensate losers, to make sure that people are helped to adjust to these new technologies. Now, for this to happen, for this democracy to be of quality and to not disappear, we need institutional reforms, in my opinion, to prevent an excessive concentration of economic and political power, to avoid crony capitalism. Here, I think that some democratic accountability mechanisms, such as, of course, campaign finance reform in the US, but also the introduction of ideas such as the electoral voucher, should help. As for breaking up large companies, I don't want to get into the Zuckerberg Musk dispute, but I think that making sure that those that have become very advantaged or benefited from these technological changes are regulated, and perhaps divided into several companies, should also help.

A third thing, which I think is not discussed that much, is that electoral participation matters. The fact that a lot of people are not turning out, and that these abstainers come largely from low-income and young people, changes the median voter and changes the median representative, and has an impact on policy. So changes in participation are fundamental; mobilisation may be fundamental to reverse the things that are not good for democracy.

And finally, a fourth thing, which is country size. I think that small countries, perhaps because of the tight relationship between representatives and voters, are better positioned to respond to the excessive concentration of economic and political power in the capitalism of today. This is something that I just suggest; it's not that I'm completely sure, but I think it's something that should be investigated a bit more by all of us.

So let me finish by saying that in the book, what I challenge is the standard story. We have a standard story that says that modernization led to the end of history. Instead, what I suggest is that we should think about capitalism as having different technologies, with different effects on employment and on wages. These may explain the kinds of political honeymoons and moments of political conflict that we have seen in the last 200 years. So thank you so much.




21May

Michael C. Horowitz on When Speed Kills: Autonomous Weapon Systems, Deterrence, and Stability


In this talk, Michael draws on classic research in security studies and examples from military history to assess how AWS could influence two outcome areas: the development and deployment of systems, including arms races, and the stability of deterrence, including strategic stability, the risk of crisis instability, and wartime escalation. He focuses on these questions through the lens of two characteristics of AWS: the potential for increased operational speed and the potential for decreased human control over battlefield choices.

You can watch the full talk here




21May

GovAI Annual Report 2020 | GovAI Blog


A few words from the director:

In my view, the governance of AI will become among the most important global issues. 2020 saw many continued developments in AI governance. It is heartening to see how rapidly this field continues to grow, and exciting to be part of that growth.

This report provides a summary of our activities in 2020.

We now have a core team of 9 researchers and a network of 21 affiliates and collaborators. We are excited to have welcomed Visiting Senior Researchers Joslyn Barnhart and Robert Trager. This year we published two major reports, 15 academic publications, an AI governance syllabus, and 5 op-eds/blog posts. Our work covered many topics:

  • Theory of Impact for AI governance
  • The Windfall Clause
  • Cooperative AI
  • Clarifying the logic of strategic assets
  • National security and antitrust
  • AI and corporate intellectual property strategies
  • AI researcher responsibility and impact statements
  • Historical economic growth trends
  • AI and China
  • Trustworthy AI development
  • And more…

As I argued in AI Governance: Opportunity and Theory of Impact, we are highly uncertain of the technical and geopolitical nature of the problem, and so should acquire a diverse portfolio of expertise. Accordingly, our work covers only a small fraction of the problem space. We are excited about growing our team and have big ambitions for further progress. We would like to thank Open Philanthropy, the Future of Life Institute, and the European Research Council for their generous support. As part of the Future of Humanity Institute, we have been immersed in good ideas, brilliant people, and a truly long-term perspective. The University of Oxford, similarly, has been a rich intellectual environment, with increasingly productive connections to the Department of Politics and International Relations, the Department of Computer Science, and the new Ethics in AI Institute.

We are always looking to help new talent get into the field of AI governance, be that through our Governance of AI Fellowship (applications are expected to open in Spring 2021), hiring researchers, finding collaborators, or hosting senior visitors. If you are interested in working with us, visit www.governance.ai for updates on our latest opportunities, or consider reaching out to Markus Anderljung (ma***************@ph********.uk).

We look forward to seeing what we can all achieve in 2021.

Allan Dafoe
Director, Centre for the Governance of AI
Associate Professor and Senior Research Fellow
Future of Humanity Institute, University of Oxford

Research

You can find all our publications here. Our 2019 annual report is here; 2018 report here.

Major Reports and Academic Publications
  • “Open Problems in Cooperative AI” (2020). Allan Dafoe, Edward Hughes (DeepMind), Yoram Bachrach (DeepMind), Teddy Collins (DeepMind & GovAI affiliate), Kevin R. McKee (DeepMind), Joel Z. Leibo (DeepMind), Kate Larson, and Thore Graepel (DeepMind). arXiv. (link)
    Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at scales ranging from our daily routines—such as highway driving, scheduling meetings, and collaborative work—to our global challenges—such as arms control, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate. Since machines powered by artificial intelligence are playing an ever greater role in our lives, it will be important to equip them with the capabilities necessary to cooperate and to foster cooperation. The authors see an opportunity for the field of Artificial Intelligence to explicitly focus effort on this class of problems which they term Cooperative AI. As part of this we co-organized a NeurIPS workshop: www.cooperativeAI.com
  • “The Windfall Clause: Distributing the Benefits of AI for the Common Good” (2020). Cullen O’Keefe (OpenAI & GovAI affiliate), Peter Cihon (GitHub & GovAI affiliate), Carrick Flynn (CSET & GovAI affiliate), Ben Garfinkel, Jade Leung, and Allan Dafoe. Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES20). (report) (article summary).
    The windfall clause is a policy proposal to devise a mechanism for AI developers to make ex-ante commitments to distribute a substantial part of profits back to the global commons if they were to capture an extremely large part of the global economy via developing transformative AI. The project was run by GovAI, and inspired the Partnership on AI’s launch of their Shared Prosperity Initiative.
  • “The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse of the Technology?” (2020). Toby Shevlane and Allan Dafoe. Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES20). (link).
    The existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this article argues that the same cannot be assumed for AI research. It provides a theoretical framework for thinking about the offense-defense balance of scientific knowledge.
  • “U.S. Public Opinion on the Governance of Artificial Intelligence” (2020). Baobao Zhang and Allan Dafoe. Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES20). (link).
The report presents the results from an extensive survey of 2,000 Americans’ attitudes toward AI and AI governance. The full results were published in 2019 here.
  • “Social and Governance Implications of Improved Data Efficiency” (2020). Aaron Tucker (Cornell University), Markus Anderljung, and Allan Dafoe. Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES20). (link).
    Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the social-economic impact of increased data efficiency on e.g. market concentration, malicious use, privacy, and robustness.
  • “Institutionalising Ethics in AI: Reflections on the NeurIPS Broader Impact Requirement” (Forthcoming). Carina Prunkl (Ethics in AI Institute & GovAI affiliate), Carolyn Ashurst, Markus Anderljung, Helena Webb (University of Oxford), Jan Leike (OpenAI), and Allan Dafoe. Nature Machine Intelligence.
    Turning principles into practice is one of the most pressing challenges of artificial intelligence (AI) governance. In this article, we reflect on a novel governance initiative by one of the world’s most prestigious AI conferences: NeurIPS.
  • “Who owns artificial intelligence? A preliminary analysis of corporate intellectual property strategies and why they matter” (2020). Nathan Calvin (GovAI affiliate) and Jade Leung (GovAI affiliate). GovAI Working Paper. (link).
    This working paper is a preliminary analysis of the legal rules, norms, and strategies governing AI-related intellectual property (IP). It analyzes the existing AI-related IP practices of select companies and governments, and provides some tentative predictions for how these strategies and dynamics may continue to evolve in the future.
  • “How Will National Security Considerations Affect Antitrust Decisions in AI? An Examination of Historical Precedents” (2020). Cullen O’Keefe (OpenAI & GovAI affiliate). GovAI Technical Report. (link).
    Artificial Intelligence—like past general purpose technologies such as railways, the internet, and electricity—is likely to have significant effects on both national security and market structure. These market structure effects, as well as AI firms’ efforts to cooperate on AI safety and trustworthiness, may implicate antitrust in the coming decades. Meanwhile, as AI becomes increasingly seen as important to national security, such considerations may come to affect antitrust enforcement. By examining historical precedents, this paper sheds light on the possible interactions between traditional—that is, economic—antitrust considerations and national security in the United States.
  • “The Logic of Strategic Assets” (2020). Jeffrey Ding and Allan Dafoe. Forthcoming. Security Studies. (link).
    This paper asks what makes an asset strategic, in the sense of warranting the attention of the highest levels of the state. By clarifying the logic of strategic assets, it could move policymakers away from especially unhelpful rivalrous industrial policies, and can clarify the structural pressures that work against global economic liberalism. The paper applies this analysis to AI.
  • “Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society” (2020). Carina Prunkl (Ethics in AI Institute & GovAI affiliate) and Jess Whittlestone (CFI). Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES20). (link).
    This article considers the extent to which there is a tension between focusing on the near and long term AI risks.
  • “Beyond Privacy Trade-offs with Structured Transparency” Andrew Trask (DeepMind & GovAI affiliate), Emma Bluemke (University of Oxford), Ben Garfinkel, Claudia Ghezzou Cuervas-Mons (Imperial College London), Allan Dafoe. Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES20). (link).
    Many socially valuable activities depend on sensitive information, such as medical research, public health policies, political coordination, and personalized digital services. This is often posed as an inherent privacy trade-off: we can benefit from data analysis or retain data privacy, but not both. Across several disciplines, a vast amount of effort has been directed toward overcoming this trade-off to enable productive uses of information without also enabling undesired misuse, a goal we term ‘structured transparency’. In this paper, we provide an overview of the frontier of research seeking to develop structured transparency. We offer a general theoretical framework and vocabulary, including characterizing the fundamental components — input privacy, output privacy, input verification, output verification, and flow governance — and fundamental problems of copying, bundling, and recursive oversight. We argue that these barriers are less fundamental than they often appear. We conclude with several illustrations of structured transparency — in open research, energy management, and credit scoring systems — and a discussion of the risks of misuse of these tools.
  • “Public Policy and Superintelligent AI: A Vector Field Approach” (2020). Nick Bostrom, Allan Dafoe, and Carrick Flynn (CSET & GovAI affiliate). Ethics of Artificial Intelligence, Oxford University Press, ed. S. Matthew Liao. (link).
    The chapter considers the speculative prospect of superintelligent AI and its normative implications for governance and global policy.
  • “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims” (2020). Miles Brundage et al. arXiv. (link)
    This report suggests various steps that different stakeholders in AI development can take to make it easier to verify claims about AI development, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. Implementation of such mechanisms can help make progress on the multifaceted problem of ensuring that AI development is conducted in a trustworthy fashion. The mechanisms outlined in this report deal with questions that various parties involved in AI development might face. (Note: The work was led by researchers at OpenAI and there were 59 contributing authors to this report. Of these, 3 were GovAI researchers and 6 were GovAI affiliates).

Other Academic Publications
  • “The Suffragist Peace” (2020). Joslyn N. Barnhart (UCSD), Robert F. Trager (UCLA), Elizabeth N. Saunders (Georgetown University) and Allan Dafoe. International Organization. (link)
    Drawing on theory, a meta-analysis of survey experiments in international relations, and analysis of cross-national conflict data, the paper shows how features of women’s preferences about the use of force translate into specific patterns of international conflict. When empowered by democratic institutions and suffrage, women’s more pacific preferences generate a dyadic democratic peace (i.e., between democracies), as well as a monadic peace. The analysis supports the view that the enfranchisement of women is essential for the democratic peace. The results were summarised in Foreign Affairs by the same authors.
  • “Coercion and the Credibility of Assurances” (Forthcoming). Matthew Cebul (University of Michigan), Allan Dafoe, and Nuno Monteiro (Yale University). Journal of Politics. (link).
    This paper offers a theoretical framework exploring the causes and consequences of assurance credibility and provides empirical support for these claims through a nationally-representative, scenario-based survey experiment that explores how US citizens respond to a hypothetical coercive dispute with China.
  • “Coercion and Provocation” (Forthcoming). Allan Dafoe, Sophia Hatz (Uppsala University), and Baobao Zhang. The Journal of Conflict Resolution. (link).
    In this paper the authors review instances of apparent provocation in interstate relations and offer a theory based on the logic of reputation and honor. Using survey experiments they systematically evaluate whether provocation exists and what may account for it and employ design-based causal inference techniques to evaluate their key hypotheses.
  • “The biosecurity benefits of genetic engineering attribution” (2020). Gregory Lewis … Jade Leung (GovAI affiliate), Allan Dafoe, et al. Nature Communications. (link).
    A key security challenge in biotechnology involves attribution: determining, in the wake of a human-caused biological event, who was responsible. The article discusses a technique which could be developed into powerful forensic tools to aid the attribution of outbreaks caused by genetically engineered pathogens.
Opinion Articles, Blog Posts, and Other Public Work
  • “AI Governance: Opportunity and Theory of Impact” (2020). Allan Dafoe. Effective Altruism Forum. (link).
    This piece describes the opportunity and theory of impact of work in the AI governance space from a longtermist perspective. The piece won an Effective Altruism Forum Prize and was the most highly voted post of September.
  • “A Guide to Writing the NeurIPS Impact Statement” (2020). Carolyn Ashurst (Ethics in AI Institute), Markus Anderljung, Carina Prunkl, Jan Leike (OpenAI), Yarin Gal (University of Oxford, CS dept.), Toby Shevlane, and Allan Dafoe. Blog post on Medium. (link).
    This guide was written in light of NeurIPS — the premier conference in machine learning — introducing a requirement that all paper submissions include a statement of the “potential broader impact of their work, including its ethical aspects and future societal consequences.” The post has garnered over 14,000 views, more than the approximately 12,000 abstract submissions received by the conference.
  • “Does Economic History Point Toward a Singularity?” (2020). Ben Garfinkel. Effective Altruism Forum. (link).
    Over the next several centuries, is the economic growth rate likely to remain steady, radically increase, or decline back toward zero? This piece investigates the claim that historical data suggests growth may increase dramatically. Specifically, it looks at the hyperbolic growth hypothesis: the claim that, from at least the start of the Neolithic Revolution up until the 20th century, the economic growth rate has tended to rise in proportion with the size of the global economy. The piece received the Effective Altruism Forum Prize for best post in September.
  • “Ben Garfinkel on scrutinising classic AI risk arguments” (2020). Ben Garfinkel. 80,000 hours podcast. (link)
    Longtermist arguments for working on AI risks originally focussed on catastrophic accidents. Ben Garfinkel makes the case that these arguments often rely on imprecisely defined abstractions (e.g. “optimisation power”, “goals”) and toy thought experiments. It is not clear that these constitute a strong source of evidence. Nevertheless, working in AI governance or AI Safety still seems very valuable.
  • “China, its AI dream, and what we get wrong about both.” (2020). Jeffrey Ding. 80,000 hours podcast. (link)
    Jeffrey Ding discusses his paper “Deciphering China’s AI Dream” and other topics including: analogies for thinking about AI influence; cultural cliches in the West and China; coordination with China on AI; private companies vs. government research.
  • Talk: “AI Social Responsibility” (2020). Allan Dafoe. AI Summit London. (link)
    AI Social Responsibility is a framework for collectively committing to make responsible decisions in AI development. In this talk, Allan Dafoe outlines that framework and explains its relevance to current AI governance initiatives.
  • “Consultation on the European Commission’s White Paper on Artificial Intelligence: a European approach to excellence and trust” (2020). Stefan Torges, Markus Anderljung, and the GovAI team. Submission of the Centre for the Governance of AI. (link)
    The submission presents GovAI’s recommendations regarding the European Union’s AI strategy. Analysis and recommendations focus on the proposed “ecosystem of trust” and associated international efforts. We believe these measures can mitigate the risks that this technology poses to the safety and rights of Europeans.
  • “Contact tracing apps can help stop coronavirus. But they can hurt privacy.” (2020). Toby Shevlane, Ben Garfinkel and Allan Dafoe. Washington Post. (link)
    Contact tracing apps have reignited debates over the trade-off between privacy and security. Trade-offs can be minimised through technologies which allow “structured transparency”. These achieve both high levels of privacy and effectiveness through the careful design of information architectures — the social and technical arrangements that determine who can see what, when and how.
  • “Women’s Suffrage and the Democratic Peace” (2020). Joslyn Barnhart, Robert Trager, Elizabeth Saunders (Georgetown), and Allan Dafoe. Foreign Affairs. (link)
    Presenting the ideas from “The Suffragist Peace”.
  • “Artificial Intelligence and China” (2020). Jeffrey Ding, Sophie-Charlotte Fischer, Brian Tse, and Chris Byrd. GovAI Syllabus. (link).
    In recent years, China’s ambitious development of artificial intelligence (AI) has attracted much attention in policymaking and academic circles. This syllabus aims to broadly cover the research landscape surrounding China’s AI ecosystem, including the context, components, capabilities, and consequences of China’s AI development.
  • “The Rapid Growth of the AI Governance Field” (2020). Allan Dafoe and Markus Anderljung. AI Governance in 2019 — A Year in Review: Observations from 50 Global Experts, ed. Li Hui & Brian Tse. (link)
    This report was contributed to by 50 experts from 44 institutions, including AI scientists, academic researchers, industry representatives, policy experts, and others.
  • “The Case for Privacy Optimism” (2020). Ben Garfinkel. Blog post. (link).
    This blog post argues that social privacy — from the prying eyes of e.g. family, friends, and neighbours — has increased over time, and may continue to do so in the future. While institutional privacy has decreased, it may be counteracted by the increase in social privacy.

Events

Webinars

Workshops co-organized by GovAI
  • Cooperative AI Workshop at the NeurIPS 2020 conference. Speakers included: James D. Fearon (Stanford), Gillian Hadfield (University of Toronto), William Isaac (DeepMind), Sarit Kraus (Bar-Ilan University), Peter Stone (Learning Agents Research Group), Kate Larson (University of Waterloo), Natasha Jaques (Google Brain), Jeffrey S. Rosenschein (Hebrew University), Mike Wooldridge (University of Oxford), Allan Dafoe, Thore Graepel (DeepMind).
  • Navigating the Broader Impacts of AI Research Workshop at the NeurIPS 2020 conference.  Speakers: Hanna Wallach (Microsoft),  Sarah Brown (University of Rhode Island), Heather Douglas (Michigan State University), Iason Gabriel (DeepMind, NeurIPS Ethics Advisor), Brent Hecht (Northwestern University, Microsoft), Rosie Campbell (Partnership on AI), Anna Lauren Hoffmann (University of Washington), Nyalleng Moorosi (Google AI), Vinay Prabhu (UnifyID), Jake Metcalf (Data & Society), Sherry Stanley (Amazon Mechanical Turk), Deborah Raji (Mozilla), Logan Koepke (Upturn), Cathy O’Neil (O’Neil Risk Consulting & Algorithmic Auditing), Tawana Petty (Stanford University), Cynthia Rudin (Duke University), Shawn Bushway (University at Albany), Miles Brundage (OpenAI & GovAI affiliate), Bryan McCann (formerly Salesforce), Colin Raffel (University of North Carolina at Chapel Hill, Google Brain), Natalie Schluter (Google Brain, IT University of Copenhagen), Zeerak Waseem (University of Sheffield), Ashley Casovan (AI Global), Timnit Gebru (Google), Shakir Mohamed (DeepMind), Aviv Ovadya (Thoughtful Technology Project), Solon Barocas (Microsoft), Josh Greenberg (Alfred P. Sloan Foundation), Liesbeth Venema (Nature), Ben Zevenbergen (Google), Lilly Irani (UC San Diego).
  • We hosted a CNAS-FHI Workshop on AI and International Stability in January.

Selected publications by research affiliates
  • “Economic Growth under Transformative AI: A guide to the vast range of possibilities for output growth, wages, and the labor share” (2021). Philip Trammell (GPI) and Anton Korinek (UVA and GovAI affiliate). Global Priorities Institute Working Paper. (link)
  • “Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy” (2020). Tom Farrand, Fatemehsadat Mireshghallah (UCSD), Sahib Singh (Ford), Andrew Trask (DeepMind & GovAI affiliate). Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice. (link)
  • “COVID-19 Infection Externalities: Trading Off Lives vs. Livelihoods” (2020). Zachary A. Bethune (University of Virginia) and Anton Korinek (University of Virginia & GovAI affiliate). NBER Working Paper. (link)
  • “Nonpolar Europe? Examining the causes and drivers behind the decline of ordering agents in Europe” (2020). Hiski Haukkala (University of Tampere & GovAI affiliate). International Politics. (link)
  • “All the News that’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation” (2020). Sarah E. Kreps (Cornell), Miles McCain (Stanford), and Miles Brundage (OpenAI & GovAI affiliate). SSRN. (link)
  • “Messier than Oil: Assessing Data Advantage in Military AI” (2020). Husanjot Chahal (CSET), Ryan Fedasiuk (CSET), and Carrick Flynn (CSET & GovAI affiliate). CSET Issue Brief. (link)
  • “The Chipmakers: U.S. Strengths and Priorities for the High-End Semiconductor Workforce” (2020). Will Hunt (CSET) and Remco Zwetsloot (CSET & GovAI affiliate). CSET Issue Brief. (link)
  • “Antitrust-Compliant AI Industry Self-Regulation” (2020). Cullen O’Keefe (OpenAI & GovAI affiliate). Working Paper. (link)
  • “Have Your Data and Use It Too: A Federal Initiative for Protecting Privacy while Advancing AI.” (2020). Roxanne Heston (CSET) and Helen Toner (CSET & GovAI affiliate). Day One Project. (link)
  • “Americans’ Perceptions of Privacy and Surveillance in the COVID-19 Pandemic.” (2020). Baobao Zhang (Cornell & GovAI affiliate), Sarah Kreps (Cornell), Nina McMurry (WZB Berlin Social Science Center), and R. Miles McCain (Stanford University). PLoS ONE. Replication files. Coverage in Bloomberg and IEEE Spectrum; shared with the World Health Organization. (link)

Team and Growth

Our team has grown substantially. In 2020 we welcomed Robert Trager and Joslyn Barnhart as Visiting Senior Research Fellows and Eoghan Stafford as a Visiting Researcher. We ran another round of the GovAI Fellowship and welcomed 7 Fellows, with an acceptance rate of around 5%.  Our management team also evolved, with Alexis Carlier joining as a Project Manager following Jade Leung’s departure.

We continue to receive a lot of applications and expressions of interest from researchers across the world who are eager to join our team. In 2021, we plan to continue our GovAI Fellowship programme, engaging with PhD researchers primarily in Oxford, and hiring additional researchers.




21May

Daron Acemoğlu, Diane Coyle, and Joseph Stiglitz on COVID-19 and the Economics of AI


Daron Acemoğlu is an economist and the Elizabeth and James Killian Professor of Economics and Institute Professor at the Massachusetts Institute of Technology (MIT), where he has taught since 1993. He was awarded the John Bates Clark Medal in 2005 and co-authored Why Nations Fail: The Origins of Power, Prosperity, and Poverty with James A. Robinson in 2012.

Diane Coyle, CBE, OBE, FAcSS is an economist, former advisor to the UK Treasury, and the Bennett Professor of Public Policy at the University of Cambridge, where she has co-directed the Bennett Institute since 2018. She was vice-chairman of the BBC Trust, the governing body of the British Broadcasting Corporation, and was a member of the UK Competition Commission from 2001 until 2019. In 2020, she published Markets, State, and People: Economics for Public Policy.

Joseph Stiglitz is an economist, public policy analyst, and a University Professor at Columbia University. He is a recipient of the Nobel Memorial Prize in Economic Sciences (2001) and the John Bates Clark Medal (1979). He is a former senior vice president and chief economist of the World Bank and is a former member and chairman of the US President’s Council of Economic Advisers. His most recent book, Measuring What Counts: The Global Movement for Well-Being, came out in 2019.

You can watch a recording of the event here or read the transcript below:

Allan Dafoe:

Welcome to our inaugural webinar on the governance and economics of AI. It is extremely exciting to see so many audience members from around the world. I see in the chat Portugal, Shanghai, Brazil represented, so that’s great. I am Allan Dafoe, the director of the Centre for the Governance of AI which is organizing this series. We are based at the Future of Humanity Institute at the University of Oxford. For those of you who don’t know about our work, we study the opportunities and challenges brought by advances in AI so as to advise policy to maximize the benefits and minimize the risks. We understand AI as broadly referring to the cluster of technologies associated with machine intelligence, especially the recent progress in machine learning, but also including advances in computing power, sensors, robotics and our digital infrastructure. The term governance, which may not be familiar to many of you, refers both descriptively to the ways that decisions are in fact made about the development and deployment of AI, but also to the normative aspiration that those decisions emerge from institutions that are effective, equitable and legitimate.

We have a special interest in understanding the long run impact of artificial intelligence.  Over the past few years, it has become increasingly common for economists to identify AI as a general-purpose technology or GPT, as I expect we’ll hear about more today. If AI turns out to be anything like previous transformative GPTs such as electricity and the internal combustion engine, then we can expect massive changes in our culture, politics, and in the character of war.

More speculatively, AI might even turn out to be something more than another GPT in a long line of GPTs. A number of scholars, including those attending today, have begun to explore more radical possibilities and their associated challenges, such as massive labor displacement, extreme inequality, rapidly accelerating economic growth, and the maintenance of human oversight of highly intelligent artificial systems. This webinar series will continue these conversations. In the coming months, we will host a conversation on challenges for US-China cooperation and the governance of AI, on the impact of AI on democracy, on forecasting methodology and insights for trends in AI, as well as many more discussions of the economics of AI.

This series is put on in partnership with Anton Korinek of the University of Virginia — who is sharing the screen with me — who will be moderating today’s event. Anton is one of the leading economists who has been thinking seriously about the economic implications of advanced AI. Anton first came to my attention because of his excellent paper coauthored with Joseph Stiglitz, also here with us today, on the implications of AI for income distribution and unemployment. In this paper they discuss with subtlety and insight the many challenges to making technological progress broadly beneficial due to failures in insurance markets for technological displacement, and the costs and feasibility of redistribution. From my conversations with Anton, I’ve learned a lot more about the economics of AI and I encourage you all to follow his work. I will now turn the mic over to Anton to introduce and moderate this event.

Anton Korinek:

Let me thank the GovAI team, Markus Anderljung and Anne le Roux, for making this event possible. Let me also thank Allan for hosting us and for the kind introduction. I have followed Allan’s work for a number of years. What I find really admirable is that he focuses on how to put into practice many of the policy proposals that economists like myself only consider in theory.

In the economics of AI, a big theme is that smart machines may be a substitute for human labor rather than a complement to it, and that this may be unlike what earlier technological revolutions entailed. The fear is that this will progressively lead to a decline in the relative, and perhaps even absolute, demand for labor, driving down wages and, when wages cannot fall, causing unemployment. This would exacerbate inequality, poverty, and social and political tension.

Well, just like doctors learn the most about the human body when it is sick or injured, economists learn the most about the economy when it is in crisis.

When Allan and I first spoke about this webinar series, we felt that it would be a fitting theme for our inaugural event to invite three of the world’s top thinkers on the economics of AI to share with us what they have learned from the ongoing pandemic and what lessons this provides us for how we as a society can prepare for the advent of ever smarter machines.

Aside from devastating health effects, Covid-19 has led to hundreds of millions of jobs lost around the world — probably one of the largest negative labor demand shocks in human history, although it was a policy induced and temporary one. It has also led to unprecedented government actions to support the jobless while simultaneously giving rise to significant political tension. So one important question is what can we learn to prepare for potential future labor demand shocks that may arise from automation?

Another issue is that Covid-19 has also spurred a massive technological transition into the virtual world, in which the marginal cost of distribution is zero. So instead of holding an in-person conference on our topic today (and I would very much enjoy being with you in person), we are live streaming this event on the web. There are some obvious benefits: it democratizes attendance. But it also risks amplifying the superstar phenomenon and exacerbating inequality in our world. So another really important question is: what can we learn from Covid-19 about a future that is increasingly digital? More broadly, let me ask our panelists: what lessons have you learned from the pandemic that we can carry over to the governance of AI?

Without further ado, let me introduce three superstars who are our panelists today, Daron Acemoğlu, Diane Coyle, and Joseph Stiglitz.

Daron Acemoğlu is the Elizabeth and James Killian Professor of Economics and Institute Professor at the Massachusetts Institute of Technology, where he has taught since 1993, and he is also a winner of the John Bates Clark medal. He has coauthored the book Why Nations Fail: The Origins of Power, Prosperity, and Poverty.

Daron Acemoğlu:

It’s a great pleasure to be here even though we cannot all be in the same room. I think Anton gave an excellent introduction to what I wanted to say, which is that we are living through a transformative moment. This is true for many dimensions of our lives, but two that are particularly important are the future of technology, especially related to AI, and the future of institutions: both because the current state of institutions is shaping how we react to the crisis, and because this is a window for potentially transformative changes in the future. I’m going to spend my eight minutes or so equally on these two points. First, on AI. During this hour of need, we are all grateful for the digital technologies that keep us from being completely isolated from the rest of the world, but there are dangers as well as opportunities in how we use AI. To understand that, I think it’s useful to look at what has happened in the labor market over the last three decades, and why.

Here [indicating a slide] I’m showing the labor share in the US, but the pattern is similar in other OECD countries; the US is just simpler and sharper. You see a huge decline in labor’s share of national income from around 2000. There is some decline going on before then, but it’s small. Especially when you look at the composition-adjusted industry shares, it’s a very remarkable decline of almost 10 points over the course of about 15 years. So what’s going on? Well, the explanation that Pascual Restrepo and I have pushed in our research over the last several years is that this is mostly about automation. Partly AI, but really mostly the forerunners of AI.

One way of seeing that is in the next three graphs, and those are going to be the background on which I’ll put some thoughts on the future of AI and on this current crisis. Here, when you look at the left graph, what you see is private sector wage bill growth in the United States. That’s an inclusive measure of labor demand growth in the US private sector. It’s a remarkable picture. It would be even more remarkable if I also showed you wage inequality and wages at the bottom, but it’s essentially a picture of shared growth for about four decades: labor demand is growing at above 2% a year, every year, very steadily, and wages are more or less keeping up. Then when you come to the post-1990 period, which is the right panel, you see a completely different picture.

First, the growth of labor demand becomes anemic, and then it essentially stops after 2000. There’s really no growth in the wage bill or overall labor demand in the US private sector. Where is this coming from? Pascual and I argue that sources like monopsony, monopoly, and rent sharing have also played a role, but mostly this is about the types of technologies that we have adopted. If you look again at the four decades after World War II, the red line is what we call technological displacement: technologies that reduce the labor share by substituting machines — mostly numerically controlled machines, specialized software, robotics, and very recently AI — for labor. The blue is when you find new ways of doing tasks that increase labor demand; these are industries where the labor share is actually going up. You see that the blue and the red are roughly balanced, and the yellow in the middle is essentially the sum of the two. So the reason why labor demand is growing very steadily and the labor share is constant is that automation technologies are being counterbalanced by new tasks and other human-friendly technologies. Fast forward to the last 30 years and you see a completely different picture. The blue line here is now about 30% to 40% slower than the previous one, so there’s much less of these human-friendly technologies. The displacement curve is much faster, about 30-40% faster than it was before 1987. We are doing much more automation and much less human-friendly technology. Why is that?
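
[Editorial note: the accounting behind these curves can be sketched as a simple decomposition in the spirit of Acemoğlu and Restrepo's work. The notation below is the editor's, not the speaker's, and is only a gloss on what the slides describe.]

```latex
% g_L : growth of labor demand (the yellow curve)
% g_R : reinstatement effect from new tasks (the blue curve)
% g_D : displacement effect from automation (the red curve)
g_L \;\approx\; g_R \;-\; g_D
% Postwar decades: g_R \approx g_D, so g_L grows steadily.
% After 1987: g_R is ~30-40% slower and g_D ~30-40% faster,
% so g_L stagnates and the labor share falls.
```

On this reading, the constant postwar labor share reflects the two effects roughly cancelling, while the post-1987 stagnation reflects both a weaker blue term and a stronger red term.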

Well, there are a number of reasons for this. I don’t have the time to get into all of them right now, but I want to highlight one. It is not the most important one, but it’s one of the top three, it’s easy to talk about, and it highlights the other point I want to make, which is the inefficiency of this capital-labor substitution. If you look at the US tax code, labor taxes have been roughly constant, but capital taxes, especially on software and equipment, which are the purple and the red curves here, have fallen much, much lower, essentially getting into zero territory. So we are subsidizing the use of capital while at the same time taxing the use of labor. That is encouraging a lot of automation, and much of that marginal automation is actually not super productive.
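
[Editorial note: a minimal numerical sketch of the tax-wedge incentive described here. All numbers are hypothetical, chosen only to illustrate the mechanism, and are not from the talk.]

```python
# Hypothetical illustration of the tax wedge between labor and capital.
# The firm compares after-tax costs, so taxing labor while leaving
# software/equipment nearly untaxed can make a machine that is LESS
# productive per pre-tax dollar the privately cheaper choice.

def after_tax_cost(pre_tax_cost: float, effective_tax_rate: float) -> float:
    """Cost to the firm once the effective tax on the input is added."""
    return pre_tax_cost * (1 + effective_tax_rate)

labor_cost = 100.0    # pre-tax cost of doing the task with a worker
machine_cost = 105.0  # pre-tax cost with a machine: slightly less productive per dollar

labor_tax = 0.25      # stylized payroll and related labor taxes
capital_tax = 0.00    # near-zero effective rate on software and equipment

cost_of_worker = after_tax_cost(labor_cost, labor_tax)      # 125.0
cost_of_machine = after_tax_cost(machine_cost, capital_tax)  # 105.0

# Automation wins on after-tax cost despite being "not super productive".
assert cost_of_machine < cost_of_worker
```

This is the sense in which the tax code subsidizes marginal automation: the ranking of inputs flips purely because of the wedge, not because the machine is better.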

So against this background, what are we going to experience in this crisis period? One of the things we are already seeing (we don’t have hard data on this, but the surveys are very clear) is that firms are using more and more AI to substitute for workers, because the lockdown is making labor supply even harder for firms, and the demand for machines is increasing exponentially. So against a background of very fast and perhaps already excessive automation, there is now a danger that we’re going to repeat exactly this pattern: not enough use of AI for helping humans and too much for replacing them, rather than the more balanced pattern of the four decades after World War II. But of course, if we are to hope that things work out, and not the worst-case scenario, for how we use technology and for the implications for wages, unemployment, the labor share, and income distribution, we have to turn to institutions. Institutions can actually help us redirect technology in the right way. I have argued in a lot of my work over the last two and a half decades that the path of technology is not preordained. It is the choices of firms, workers, and scientists, and especially the choices of regulators, in redirecting technology, and in this instance AI technology, that are going to play a critical role.

So can we hope that we have the right institutions to guide us in the right way? Actually, that’s where any cautious optimism one might have becomes more jaded, because we have seen a spectacular failure of institutions during the current crisis. This is really a combination of two things. One is that we have seen an erosion of expertise, technology, and autonomy in institutions. I think the sorry state of the CDC, which was actually very successful a short while ago during the Ebola crisis but has been an utter failure during this crisis, is related to that. But the role of institutions is also becoming much, much more difficult because of a collapse of trust in institutions. If you look at trust in government and the state from the World Values Survey, you end up with a very paradoxical and disturbing pattern. In autocracies such as China, Turkey, and Singapore, you have relatively high trust in state institutions and government, while in democracies, including some places that have done extremely well during this crisis, such as Taiwan and South Korea, but also in the United States, you have very low and falling trust in institutions. That’s really making life much more complicated.

But then what does the future hold? There’s no doubt in my mind the Covid-19 crisis has created what Jim Robinson and I have called a critical juncture. There will be changes in institutions because their inadequacy has been laid bare. There are many possible futures for these institutions. I have outlined them in some talks and articles, but since time is short, let me not go into each one in detail. We may do nothing, which would be completely tragic. We may try to emulate China, which would also be tragic, because we couldn’t emulate their good parts, such as a competent bureaucracy that has existed for 2,500 years under an authoritarian hierarchical system; we would end up emulating their bad parts, such as a lack of respect for civil liberties, autocracy, and repression. We could turn to large tech companies, motivated by the failure of our government, on the theory that perhaps they are better for us than a failing government. But I think there is another option, which is to remake our welfare state.

The current crisis has highlighted that we need new responsibilities for the state: combating inequality, climate change, and pandemics, and providing better regulation. But I think a lot of people worry whether that’s going to happen, starting from the current sorry state. They worry, as Hayek did after the Beveridge report, which led him to write The Road to Serfdom, whether once the state becomes very powerful, economically much larger and much more administratively in control of wages and the allocation of resources, that will be a tenable situation. This is actually my last slide, and I’ll conclude by noting that this is what James Robinson and I tackled in our new book, The Narrow Corridor. We came up with a framework for arguing why Hayek was actually wrong: there is a way for a society to adapt to greater state power as long as it deepens democracy, and we outline the dynamics of that. Since time is short, let me not get into the details, in the hope that somebody will ask during the Q&A session and I can explain more about the main thesis and why, despite all of the difficulties we’re facing, a little cautious optimism might be possible. Let me conclude here, pass it to the other panelists, and come back to these issues during the Q&A session. Thank you.

Anton Korinek:

Thank you very much, Daron, for your insightful remarks. And let me now turn it over to Diane. But before doing so, let me also make an announcement to all of us attending the webinar. Please feel free to click on the link “ask a question” at the bottom of your screen and to add any questions that you may have for our panelists. You can also upvote existing questions that other people have already posed.

Diane Coyle is an economist, a former advisor to the UK Treasury, and the Bennett Professor of Public Policy at the University of Cambridge, where she has also co-directed the Bennett Institute since 2018. She was vice chairman of the BBC Trust, the governing body of the British Broadcasting Corporation, was a member of the UK Competition Commission from 2001 to 2019, and has just published the book Markets, State, and People: Economics for Public Policy.

Diane Coyle:

Thank you. Hello everybody. It’s a great pleasure to have this opportunity. We panelists haven’t coordinated beforehand, but I think what I’m going to say complements Daron’s comments without overlapping with them. I’ve got about eight minutes, and I want to make three main points. The first is that the crisis has crystallized tensions around expertise and to what extent modelling can or should inform policy choices. I think we need to reflect on the lessons for AI because machine learning systems are technocrats par excellence. The second point is that the companies that can operate AI at scale are being strengthened by the crisis and they’re going to emerge even more powerful. So we need to double down on the policies that will make them accountable and make the markets in which they operate contestable. The third point is that data is everything and we’re going to need to understand much better the creation and the distribution of value in data value chains and the trade-offs between private and collective benefit.

Let me start with the first of those: expertise. Machine learning systems have been programmed and trained to act just like Homo Economicus: they maximize some well-specified objective function subject to constraints, and they use the rules of logic. In the pre-crisis world, this was already problematic as machine learning systems were starting to be deployed in public policy decisions. We live in a very complex socioeconomic system; there are multiple conflicting aims and trade-offs. Just as target setting can distort public sector behavior, AIs can game the objectives they’re set. And there are what political scientists call “incompletely theorized agreements,” what others might call political fudge, which means that quite often we don’t want to specify too clearly what objective we’re aiming for, in order to achieve some consensus on actions. We’re already seeing during the crisis the kinds of problems that economists are very familiar with in using models to forecast. You get model drift, and you get what’s called the Lucas critique, where structural breaks mean that the relationships you’ve modelled break down. In some domains this doesn’t matter: if algorithms determining online shopping offers break down, that doesn’t matter much, and they’ll be fixed quickly. There are also some reasonably narrowly specified domains, such as biomedical discovery, in which AI is proving really useful at the moment. So I think the main lesson of the pandemic is actually about the limitations of the kinds of model we can build and train. We are really far from capturing the complex interactions that policymakers need to consider now: the genetics and other aspects of the virus itself, individual and group susceptibility, social and economic conditions, behavioral responses to the pandemic and lockdown policies, responses to climate, and so on. This is really complicated.
I think this is a real lesson in the limitations of what we can and should be trying to do with AI in policy.

The second point is about power. Before the crisis, a number of countries around the world were recommending tougher competition and regulatory policies toward big tech. Now big tech is getting even bigger. This is a moment for governments to hold their nerve, as our society is dependent on digital companies as never before. That means that, even more than before, these companies need to be held accountable for the market power and also the political power that they hold. We need a lot more thought about the governance of AI, and I welcome Allan’s comments about that in his introduction. Can we avoid a geopolitical arms race? What are the national and global institutions that will deliver accountability? With some of my colleagues at the Bennett Institute, we’re starting a project looking at the history of different governance frameworks for new technologies. It’s not straightforward. It depends on the cost of entry into the technology; if the technology changes, the governance framework needs to change as well. It depends on the market context, on what the interaction between public and private sectors looks like in developing new technologies, and so on. I was a member of Jason Furman’s panel in the UK looking at competition in digital markets. Just before the lockdown, the government announced that our recommendation for a digital markets unit would go ahead, and that needs to happen. It needs to happen in other countries too. The more we can align regulation and competition policies between countries, the more effective we will all be. We also need to reflect on the skills base and national capabilities in AI. If you’re sitting, as I am, in Europe, in London, in between the United States and China, who have the leading capabilities at the moment and seem to be embarking on a phase of geopolitical rivalry of which AI is a part, that’s not a very comfortable position.
So for everybody, for a number of reasons, thinking about sharing the skills needed to use and deploy AI, and building up those national capabilities, will be important.

My final set of comments is about data. Data is one of the barriers to entry that we identified in the Furman review; we recommended looking at and enforcing interoperability, rules about APIs, and some data sharing. The key issue emerging in the pandemic is health and location data. I think that debate has been really unfortunately shaped by the narrative that data is personal. Almost no data is personal. There might be quite a lot that we want kept private, but that’s a different matter, because the information content of data is almost always relational and contextual. Daron has an excellent paper about the negative externalities of potential privacy loss from the provision and sharing of data. But there are also substantial potential positive externalities from the aggregation and sharing of data. With colleagues, I have a policy paper on that, and we are working on an academic paper as well. In the current context of the pandemic, my health and location status has substantial implications for other people; it is a very large externality. To my mind, that outweighs concerns about privacy (not data security, but privacy in the sense of not sharing data at all), particularly in the context of the huge removal of civil liberties through lockdowns in various countries. We shouldn’t have to rely on the good will of companies like Google and Apple to provide limited data on what’s happening during the lockdown, and on the APIs they are developing; the democratic public interest in this is too large. I think the Covid example is an instance of a much broader debate that we need to start having about data: about the positives and the negatives, the individual and the social, and about how to capture that value and how to distribute the benefits.
Also about what kinds of institutions can be trusted to govern data and data access, both in terms of security and privacy and in terms of rights of access to the various forms of information that can be used for the good of individuals and the good of the public. And I will stop there. Thank you.

Anton Korinek:

Thank you so much for your insightful remarks. Let me now hand the microphone over to Joseph Stiglitz, who is an economist, public policy analyst, and University Professor at Columbia University. He is the recipient of the Nobel Prize in Economics in 2001 and the John Bates Clark medal in 1979. He is also a former Senior Vice President and Chief Economist of the World Bank and a former member and chairman of the US President’s Council of Economic Advisers. His most recent book is Measuring What Counts: The Global Movement for Well-Being. So Joe, the floor is yours.

Joseph Stiglitz:

Thank you very much, Anton. It’s really good to be here. I again join the others in saying that I wish it could be in person. I agree with the point that you made in the beginning that we’ve learned a lot about our society, about our economy, about our government from this pandemic. It’s like pathology in medicine, you learn a lot from putting the system under stress. And I think at least in the United States, we found things didn’t go quite as well as we would have hoped. We’ve seen a lack of resilience. Our private sector, our markets couldn’t even produce masks and protective gear and distribute them to where they were needed. We’ve seen the importance of government. We all turn to government in times of disaster. And this is clearly a time of disaster. We’ve seen that 40 years of denigrating the role of government has actually worked. It’s worked in weakening the institutions. Daron pointed out the weakening of the CDC, which had been a very strong institution, the abandonment of the White House Office of Pandemics. We had created institutional structures designed to prepare us for a pandemic, but then a weakening of institutions led to the abandonment of those institutions.

As we look at what has happened, it’s natural to think about it in relation to other crises, and we can see some shared underlying factors. In the 2008 crisis, the last crisis, we saw a weakening of the state, with financial deregulation being one of the conditions leading to the crisis. There too we saw short-sighted behavior, on the part of the banks, leading to the crisis. Here, it’s short-sighted behavior on the part of firms that has produced an economic system lacking resilience.

We’ve also seen in this crisis that this is not an equal opportunity disease. It goes after those with poor health. Those with poor health are disproportionately people who are poor, especially in the United States where we have not recognized the right of access to healthcare as a basic human right. I’ll make some comments later on that. Wealth inequality is clearly part of the preconditions that have exposed the United States so strongly to the disease, and one of the reasons why we’ve had the highest rate of death. The problem of inequality is going to be exacerbated by the crisis. And I’ll try to explain that.

But the topic of the seminar is about AI. AI is a major structural change in the economy. One of the things that we’ve seen over a long period of time, and that is going to be exacerbated by the pandemic, is that markets don’t handle these kinds of large structural changes well. That’s not one of the strengths of markets, and it inevitably requires government assistance to manage that. What we’ve seen is maybe not so optimistic. There are three things which are going to reinforce, I hope, what has already been said. The first is that the long-standing weaknesses of the American economy, but also other economies, have been exposed. The second is that there’s a clear possibility of further adverse effects from the pandemic. But third, echoing what’s been said, that it’s not inevitable. It’s a matter of policy. And then the final question which I won’t get to, I hope we get to in the Q&A, is one that Daron raised at the end of his discussion: the question is whether we actually do what we could do. And that is a matter of how our democratic institutions respond.

Let me begin by noting one aspect of the pandemic: it has led to a fundamental shift in the cost of labor versus machines or robots. Daron pointed out very clearly that what has been happening is a shift in technology, labor-replacing versus labor-augmenting innovation. That is one of the reasons why the labor market is not working well, why the output share of labor has gone down. This pandemic has emphasized, even increased, the virtues of robots. Robots don’t get the coronavirus (though, obviously, computers do get computer viruses), and there is an ongoing war in both spheres, with some uncertainty and some hope that the good guys, the antivirals, will win over the virals. Robots, even if they do get viruses, don’t need to be socially distanced. All of this adds to the shadow price of labor: it makes labor less attractive relative to capital. And that will exacerbate, I worry, some of the trends that Daron talked about. There was an interesting article this morning in the New York Times about a city in the UK where robots are being used for deliveries. The company had already been set up before the pandemic, but afterwards it found a vastly larger market. If this is so, it will mean the problems with unemployment and inequality that we were facing before Covid-19 will be even worse.

There has been a failure in the United States to design an adequate response to growing unemployment. The unemployment rate in the United States is clearly already at 20%, and a broader measure of unemployment, which we call U-6, is clearly north of 25%. And this growing unemployment is in spite of massive spending: almost $3 trillion in fiscal support, and an equivalent amount of monetary support. What is equally disturbing is an unwillingness on the part of some to continue this spending, even though it’s obviously needed. That’s very worrying; it’s a clear sign that some aspects of our solutions may not be working as well as they should.

At one level, one can say it’s not a surprise that things didn’t work out as well as we would have hoped. Everything had to be done in a rush. But the fact is that countries all over the world had to do it in a rush, and in some countries the institutions actually worked. That’s the hopeful side. In New Zealand, not only did they avoid the massive increase in unemployment seen in the United States, the disease was almost brought down to zero, and there’s strong social cohesion. Other democracies have done so as well. As Daron said, some of the more authoritarian countries have also brought down disease numbers, but in ways that obviously wouldn’t be acceptable to us. The good news is that there are countries like New Zealand and South Korea, democracies, that have brought the disease down and gotten it under control.

What is most disconcerting is the markedly different perceptions and beliefs about the disease, its consequences, and what to do about it, reflecting and deepening pre-existing divides. And that goes to the point that Daron and Diane emphasized: the importance of trust in science, trust in experts. In large parts of our society there is a lack of that trust, and that has been exposed very strongly by the pandemic. To me that suggests that we may not be able to respond appropriately to the enormous social and economic challenges that AI may present going forward. Now, some have suggested that the pandemic will, in the short run, reduce the problems posed by AI and robotization because it is causing onshoring. But I think that’s an overly optimistic note: onshoring will be done by robots, or by machines more broadly. The jobs that have been lost to robotization and de-industrialization won’t be regained.

In fact, as I said earlier, the short-run impacts are going to be just the opposite. Those who can do remote work will work remotely; the high-tech workers have been relatively little affected. We’ve gone on with our teaching on Zoom. It’s the others, the people who work in restaurants, who have faced job losses. In the short run, the problems of inequality to which I referred are likely to get worse. The disease has exposed and is likely to exacerbate these inequalities.

In the United States, the disease has also exposed the weaknesses of our whole system of social protection: America has the least adequate system of social protection, for instance on paid sick leave. This really illustrates the worries about our institutions. Congress recognized the importance of paid sick leave. We don’t want people who are sick with COVID-19 going to work, and since almost half of all Americans are living paycheck to paycheck, if they get sick and there is no paid sick leave, they have to go to work. Congress passed a law requiring paid sick leave just for COVID-19, but then, under lobbying from major companies, companies with more than 500 workers were exempted. That reflects a kind of short-sightedness on the part of the companies; you could say it also reflects a lack of humanity. And it reflects an inadequacy in our political process that it would let this group win the day. These companies employ almost 50% of all workers in the private sector.

Another example is that we have asked workers to go to work without protective gear. We have an agency within the government called OSHA that is supposed to protect workers, but it has still not issued regulations concerning the disease. I referred earlier to the lack of resilience in our economy; it is a lack of resilience for which the poor pay the highest price. Indeed, the rapid restructuring of the economy, accelerating change already under way, such as in retail, will create a pool of unemployed that would, even in a normal recession, take some time to work off.

The next point I want to make is the same point that Diane emphasized: the restructuring of the economy has advantaged large digital firms, firms which have large elements of monopoly power, related in part to superstar and network effects. The problem of the lack of competition in this key sector – something that I talk about in my book, People, Power, and Profits – is getting worse as a result of the crisis. So too will the problems of inequality, which are linked to this monopoly power and to some of the other effects I talked about. In the medium term, we shouldn’t have a problem, but our politics may lead us to have one. We will need massive investments for the green transition, and there are gaps left by 20 years of underinvestment in our infrastructure. Filling these gaps should create more jobs than we will be losing. But that will require government revenue, and that’s the question: will we have the political will to make these investments?

I have even more worries about Africa. The cheap labor that enabled export growth in manufacturing goods was at the center of the development strategy in East Asia. And that won’t work in Africa. As I said, we shouldn’t have a problem in the medium term. And in the longer term too, we shouldn’t have a problem. We should be able to use our tax system and intellectual property rights system to ensure the benefits are shared by all. This is particularly important in light of COVID-19, which can be viewed as a large negative technology shock. Negative technology shocks or similar events give rise to distributive battles: who will bear the cost of the reduced standard of living? Such distributive battles can be particularly ugly in countries lacking a certain degree of underlying social solidarity, as we’ve seen in the United States.

I want to end on a couple of more positive notes. The first is that we should be able to steer innovation toward what has been called intelligence-assisting innovation rather than labor-replacing innovation. Maybe steering innovation is itself a task that AI could be trained to help with. Daron emphasized the problem of misguided incentives encouraging labor-replacing innovation. There are others too: The fact that monetary policy has kept the cost of capital down to a negative real interest rate obviously exacerbates the incentives to deploy human-replacing robots. But if we can steer innovation in another direction, then the problems that we have with AI will be mitigated.

The second more positive note is that government has never intervened more strongly in the economy. Never has there been so much spending and so much lending, where in the midst of this pandemic, they’re making life and death decisions over enterprises.

We are shaping the economy or failing to do so. The choices we make now will have long-lasting effects. So we have the potential to use conditionality on public lending programs in ways that can really reshape our economy and make us better able to handle the problems of inequality we’re facing, and some of the governing problems we’re facing. The problem is, will we have the institutions that will direct this money to try to create the post-pandemic society and economy that we would like? So far in the United States, the answer is no. So far in other countries, the answer is partially yes. Let me stop there and we can have a discussion.

Anton Korinek:

Thank you so much Joe. Let me now also bring all panelists on screen. We have just heard three really thoughtful perspectives on the effects of the pandemic on our economy and also how to think about our societal response to other large shocks.

I thought I would start the panel discussion by posing a perhaps somewhat personal question. What has surprised you over the past few months, and are there any specific lessons that you feel you have learned that give you a new perspective on how easily our economy and our society can adapt to large shocks? And how should this inform how we react to the prospect of ever more automation?

Daron Acemoğlu:

Let me make one remark which will be a partial answer and also a riff off what Diane said, because I think it’s going to be an illustration of the power and the dangers of technology and our governance challenges. Before this crisis I was probably close to one extreme on issues of privacy, in that I saw the control of data by governments and the control of data by companies as a real threat to democracy. I have partially changed my mind in the way that Diane already anticipated. It is clear in the midst of the pandemic that data sharing, use of data on infections, and contact tracing are all critical for saving lives. So how do you square that with the issues that I worried about? In fact, I think this is a critical test case for some of the issues that both the other panelists and I talked about. I’ve been somewhat frustrated by conversations I’ve had over the last few weeks with computer scientists, who a year ago would not have paid sufficient attention to issues of privacy and their importance to democracy, but who now object to the use of data sharing in order to combat the pandemic. I think all of these conflicted responses are an implication of our inability to visualize and understand and imagine a better governance for data.

I think in an ideal world, what we would say is that of course right now we have to use all the data we can in order to combat the pandemic. But then do that with a proactive plan for doubling down on protecting privacy as soon as the pandemic is over. That means both controlling the use and abuse of data by governments and controlling and containing the use and abuse of data by companies. Now, some people come down with very different conclusions because they have different views on what is feasible institutionally. For example, if you believe that once you open the gates to companies or governments using private data, you can never take that back, you’re going to be much more cautious. If you think that our institutions have so badly failed at the moment that we can never double down on protecting privacy and strengthening democracy, you might have a very different view. I think this privacy issue is a test case, and I do actually still retain a cautious optimism that recognizing the issues, publicly debating them, and understanding what sorts of institutions can deal with them will open the way to a better governance of data. That is actually very related to the governance of AI.

One of the questions that I saw is: do we need broad institutions that protect us in terms of inequality, public safety, and democracy, or do we need technology-specific institutions? I do very much believe that broad institutions are the first line of defense, but we do need technology-specific governance structures, and data is one of them. AI is another, both because of its ability to change the political discourse and to transform privacy and political activism by individuals, and also because of its labor market effects. Replacing labor with machines is sometimes very productivity enhancing, but it also has external effects because it really damages the very fabric of society. It has to be balanced out with other social objectives. Thank you.

Diane Coyle:

I think Daron is absolutely right to point out that this is a critical moment for thinking about the kinds of institutions that we trust to handle data and technology more generally. So that kind of thinking is really important. But to answer your question more directly, the thing that struck me is the way that people sit in intellectual silos, even in the face of a major crisis like this. We’re a self-selected sample of people who talk to computer scientists a lot, and so we are already crossing disciplinary boundaries in that way. I’ve been quite struck in all the discussions I’ve observed in UK government and elsewhere that medics talk about medical issues, geneticists talk about genetic issues; the epidemiologists and the economists, maybe they’re starting to talk to each other. This really highlights to me the importance of thinking about ways to integrate social science and different strands of science, because the problems that we’re facing — be it the COVID pandemic or climate change or geopolitical disruption — don’t fit into narrow silos. That surprised me and concerned me. I hope we can also take this opportunity to do some more of that joining up, because if we’re putting a lot of effort into medical innovation only, and not into the social context and the institutions that would make people trust the health system that will deliver it, then we’re going to fail in tackling this crisis.

Joseph Stiglitz:

I agree very much that the key is creating institutions. I’m optimistic that we can create them, but let me express a concern that one has to go a little bit beneath that. The question is why isn’t there trust in our institutions? And why should there be some skepticism? Well, that goes back to, you might say, the word power, or inequality in our society. If we think that Facebook is in one way or another going to write the rules, we’re not going to feel comfortable with the rules that come out. And if we think our society has a lot of inequality, which it does, and that we have a political system where that economic inequality translates into political inequality, then we’re not going to trust the institutions that emerge out of the political process that are supposed to protect us. They’ll be protecting the one-tenth of 1%. That’s why I’ve always said that, at the root, we have to begin by dealing with the underlying problems of inequality and the problems of ensuring that we have competition. And of course, that’s interactive: how do we do that without good institutions?

Let me give one more example that’s a little bit different from the data problem, but one that was of great concern to me before the pandemic and has been made very clear by it, and that’s misinformation. The spreading of misinformation about the pandemic response has been a major problem. What’s interesting is that before the pandemic, Facebook and other technology companies said they didn’t have the technology to address problems of misinformation. None of us really believed it, because AI has the technology, not necessarily to do it perfectly, but to do it reasonably well. Then finally, when it became clear that our country’s health was at risk, they did come forward and say they were going to take down misinformation about responses to the pandemic. But they feel very hesitant to take down misinformation about the pandemic put up by political leaders. So again, a political and institutional decision, which is obviously a problem.

Finally, let me say, in response to your question about what has surprised me: one of the things that surprised me was the willingness to come up with a sizeable response, on the one hand, and the magnitude of the failures in the design of the response on the other, which I find quite colossal. It wasn’t as if they didn’t know about the alternatives that were being discussed. And the third is the willingness of one of the two parties not to have a comprehensive program and not to have a sustained program, saying we ought to pause now. The social divisions in our society that are forming over this issue are actually a surprise. We can’t even, on this particular issue, come to some agreement about reality.

Anton Korinek:

Thank you, Joe. A number of people have posed questions in our question box around a familiar theme: automation will on the one hand create more abundance, but on the other hand, we are concerned about whether the resulting prosperity will be shared or whether it will just benefit the few. What is your take on this question, and do you view it differently in a post-COVID world? Are you more optimistic or maybe more pessimistic about how we can resolve this tension? Let me go in inverse order now, starting with Joe and then Diane and Daron.

Joseph Stiglitz:

Absolutely, the fact that we have more resources means in principle every group in our society could be better off. I alluded very briefly in my introductory remarks to the fact that we can use intellectual property rights and taxation; we have lots of incentives that we can use to make sure that the benefits are shared. Part of that is competition policy, to make sure that you don’t have an agglomeration of market power. There are lots of things that we could do. I guess I have an ambiguous reaction coming out of the pandemic about whether we will. On the one hand, I certainly get a very strong feeling that a lot of people have realized that the pandemic has exposed the magnitude of inequality in our society, and there have been a lot of discussions of inequality and unfairness, and a lot of resolve to deal with that. On the other hand, there is the point I made before: the kinds of divisions in our society that have led some of the people who should be the strongest advocates of pro-equality policies to actually resist the kinds of policies that would enable us to deal more effectively with the problems.

Diane Coyle:

If you think about the 19th century, technological change and automation brought about a long period of great inequality and low wage growth. Then if you think about the 1950s and 60s, which saw a lot of automation, we had the opposite outcome. We had reduced inequality, lots of good jobs for middle-class people, and rapid wage growth. And so the question is how you can steer yourself into that mid-20th-century pattern rather than that late-19th-century pattern. One of the keys for me is the skills that you need. And anybody who deals with big data sets and AI now knows that actually handling the data is a craft skill, and people don’t have any very systematic ways of passing on that skill. It’s a learning-by-doing system: you learn it at the feet of the master and you gain those skills yourself. And so what we need to do is both make the technology itself more routine and change the supply of labor, the people with skills, and make sure that it becomes less of an inequality machine than it has been to date. And the pandemic is probably an opportunity to start to create some of those skills and think about that, because governments are going to have to think about how to avoid the scarring of the large groups of young people coming into the labor market and needing to find themselves a good career and good job prospects. So on balance, I think I’m probably a little bit optimistic about that, but this is very uncertain. Who knows?

Daron Acemoğlu:

I think this is a really interesting question. I’ve thought a lot about it. I’m going to just ever so slightly disagree with the other panelists in the sense that I think even though automation is an enormous engine for productivity growth, it is also potentially very disastrous if the attitude is “automate everything in sight.” And the reason for that is threefold.

First, it isn’t actually true that automation always increases productivity. Automation has the promise of increasing productivity, but if it involves substituting machines that are only slightly more profitable than labor, it doesn’t increase TFP. And if there are policy distortions such as the ones I hinted at, and there are many others related to labor market structure, it may actually reduce TFP.

Second, my belief, on the basis of my work and data analysis, is that periods such as the one Diane described (middle-class wage growth, broadly shared prosperity, stable or sometimes even declining inequality), though they coincide with automation, critically depend on other technological changes. Periods that have had mostly automation and no other technological changes have never brought that kind of prosperity. And the reason why economists have often not been as clear on this is that we have imposed on the data a way of looking, and models that have only one type of technology, blinding ourselves to the critical question of which types of technologies are doing what. So automation can increase productivity, but it is generally a force towards greater inequality and slower wage growth. It needs to be counterbalanced by other technology.

That brings me to reiterate what I said earlier: technology policy, redirecting technological change away from just automation, especially for AI, which has so much promise to be complementary to humans, is critical. My reason for being very worried about “let’s just do AI on everything and get rid of the troublesome humans, who are now proving to be more troublesome because they can get COVID-19” is that I think we also have no great experience of generating shared prosperity based on redistribution. There was a question on predistribution, and I very much agree with it. Predistribution is critical. It’s very difficult, both for political reasons and for social reasons, to create a harmonious, well-functioning, democratized society when everybody depends on bread and circuses out of the hands of the government, or this new version, UBI.

We really need people to be earning wages that are less unequally distributed, and that means middle-class wages generated by the technology, working conditions, and bargaining situations in the workplace. That’s going to become more and more difficult if automation just gets out of control. Of course, redistribution helps, especially for a social safety net, providing public services and keeping in check, through progressive means, the incentives at the top of the distribution, but it can never replace the market system generating more equal wages. And that will never be possible if we double down on automation, because if you just have more and more automation technologies, bargaining power cannot survive. If workers ask for higher wages, firms will just shift to machines that are getting better, whereas humans are not getting better. So it’s absolutely critical that we have ways of investing our ingenuity, especially in the field of AI, to make humans more productive as well, not just machines. Thank you.

Anton Korinek:

Thank you, Daron. And you have just touched upon the next question that a member of the audience has posted which was on predistribution versus redistribution. So I wanted to ask Diane and Joe if they could also share their thoughts on this question with us.

Diane Coyle:

I don’t really have a lot to add on that. I mean the point about increasing labor skills is the point about predistribution shaping the configuration of labor supply and demand. And that’s exactly why I put emphasis on that in my previous answer. One other point to make perhaps is looking back again at that history, the role of institutional innovation. So we’ve been talking about automation, but we might also want to think about in this context what kinds of new institutions might we want to see emerging out of this? And they might not be able to deliver financial redistribution or pre-distribution, but they can alter things like the distribution of social capital, the distribution of natural capital among people. And, you know, income matters a lot, finance matters a lot, but these are also other assets that people really need.

Joseph Stiglitz:

I just want to say I strongly agree with what both Diane and Daron said. I think there should be a focus on predistribution, or what used to be called just market income. I want to add that it’s a comprehensive issue of the rules of the game and the investments. What do I mean by that? We’ve talked about competition policy, and also corporate governance policy. One of the sources of inequality is CEOs being able to shape what the firm does, getting more for themselves, and making decisions about labor-saving innovation versus other kinds of innovation. If we had more representation of workers on boards, we might get different decisions. They might not view labor as an irritant but as part of the objective of society.

One of the striking things about the pandemic, as I mentioned, was that employers did not provide sick leave or protective gear. In many cases it was only the unions that succeeded in getting that kind of protective gear. So that’s an extreme manifestation of the lack of social responsibility, or short-sightedness, on the part of corporations. And I mentioned before monetary policy, which shifts the incentives between intelligence-assisting innovation, which strengthens the productivity of labor and increases the demand for labor, and labor-replacing innovation. There are a whole set of policies that shape how our economy works and that affect the market distribution of income. And we ought to be focusing a lot more on that.

Anton Korinek:

Thank you. We are almost at the end of our webinar and time is always way too short. But I wanted to ask our panelists if they are willing to leave us with just a very short thirty second parting thought on this theme of what we can learn from the pandemic for the future of governing AI. And let me go through in alphabetic order again. So with Daron first.

Daron Acemoğlu:

Let me agree with one thing that Joe said. The ability of most Western societies, and others beyond, to respond to the crisis with large stimulus packages, and the general agreement within society, despite a lot of misinformation, that you have to deal with this problem both by containing the virus and by bolstering healthcare systems, are, I think, hopeful signs that when push comes to shove, there will be some agreement on key issues. That is the only straw we can cling to in terms of remaking institutions in the future.

Diane Coyle:

I certainly think the mood has changed. People are ready for a different kind of system. They’re very aware of the many inequalities that have been exposed and exacerbated by this crisis. So that makes this an opportunity. And there is a cliche: don’t let a good crisis go to waste. Grab that opportunity. My concern is that we already let a good crisis go to waste recently, in 2008. We did far less than I expected coming out of that. All of us who are engaged in this debate really need to make sure we grab that opportunity now.

Joseph Stiglitz:

I agree very strongly again. In fact, maybe it’s one of those instances where the second time around you actually learn the lesson that you should have learned the first time around. And the lesson is very much that we need a better balance of the market and the state. We put too much weight on the view that markets will solve all problems, and we didn’t realize that you need to have regulation, you need to have public investment in science, you need to have good institutions, you need to have trust in experts, and you need to build up trust in these institutions rather than tearing them down. You won’t be able to get that unless you have societies with more solidarity, and that kind of solidarity will only be achieved if we get a society with more shared prosperity, more equality. So the agenda of equality is both the object of what we’re trying to achieve and a necessary condition for the kind of society that we want.

Anton Korinek:

Thank you very much, Daron, Diane, Joe, for sharing your thoughts with us. Thank you to everybody in the audience who has joined us today. I should also let you know that all three of our panelists today have agreed to give a full webinar on more specific topics in the coming months. So please check back on our website frequently as we announce future events. I hope to see all of you back with us soon. Goodbye.




Access to Documents Relating to the Environment – Even in Light of Dooming Controversy? – European Law Blog


By Jesse Peters and Tessa Trapp

Blogpost 27/2024

Transparency and environmental policy are two key issues in the upcoming European Parliament elections. In this regard, the General Court’s (‘the Court’) ruling on 13 March 2024 in the case of ClientEarth and Leino-Sandberg v Council provides some highly relevant insights. The Court annulled two Council decisions refusing to disclose the Council Legal Service’s opinion on the 2021 proposal to amend the Aarhus Regulation. While the Court’s critical approach to the Council’s justifications for secrecy is to be applauded, and the outcome of the case is certainly to be welcomed, this post suggests that an alternative route to reach the same conclusion would have been more desirable. The Court now seems to deliberately gloss over the document’s potential legal and political significance, turning a blind eye to the heated and ongoing debate on the Union’s (non-)compliance with the Aarhus Convention. Instead of downplaying the relevance of the document’s content, we argue that a more principled emphasis on demanding openness in the realm of environmental policy would have led the Court to the same outcome but would have also made the Union’s transparency framework more robust, in line with the objectives of the Aarhus Convention.

The EU and the Aarhus Convention

The requested document was produced by the Council’s Legal Service in the process of amending the Aarhus Regulation, which represents one aspect of the Union’s implementation of the Aarhus Convention. The Aarhus Convention is an international agreement, approved by the Union in 2005, aiming to improve public access to information, public participation in decision-making, and access to justice in environmental matters. The Aarhus Regulation, adopted in 2006, applies the various provisions of the Convention to the Union institutions. At the time, the internal review mechanism of Article 10 of the Regulation was considered its most promising innovation, as it allows non-governmental organisations and other natural and legal persons to request reconsideration of certain administrative acts or omissions by the adopting institution. Through this administrative review mechanism, the Union aimed to provide a legal avenue for applicants who do not qualify for standing under Article 263(4) TFEU due to the restrictive criteria of direct and individual concern. The Union thereby aimed to meet the requirements of Article 9(3) and (4) of the Aarhus Convention, which oblige parties to allow members of the public broad access to effective review mechanisms to challenge acts and omissions that contravene environmental law.

In 2011, the Aarhus Convention’s Compliance Committee (ACCC) already indicated that the restrictive scope of challengeable acts via the internal review mechanism of the Aarhus Regulation might not be sufficient to ensure the Union’s compliance with the Convention’s access to justice obligations. Due to the refusal of the Union courts to depart from their restrictive case law on the standing of natural persons under Article 263(4) TFEU established in Plaumann (and clarified later for example in Greenpeace, Danielsson, UPA, Jègo-Quéré, or Carvalho), as well as their narrow interpretation of relevant provisions of the Aarhus Regulation (for example in Stichting Milieu, LZ or Trianel), the ACCC eventually adopted a decision in 2017, confirming the Union’s non-compliance with Article 9(3) and (4) of the Convention.

The main aspects of the Union’s non-compliance were that only acts of individual scope, adopted under environmental law, and having legally binding and external effects could be challenged via the internal review mechanism (see the ACCC’s 2017 Decision, particularly paras 94-104) and that members of the public other than NGOs could not request such review (paras 92-93). This led to most internal review requests being declared inadmissible.

Following this established non-compliance, the Commission proposed amendments to the Regulation, which would now allow for the challenge, within the internal review mechanism, of acts and omissions regardless of their personal scope that more generally contravene environmental law, and that have legal and external effects (for more detailed considerations of these amendments, see for example Brown, Leonelli, or Pagano). In February and again in July 2021, the ACCC assessed these particular proposed changes positively. An agreement on the amendments was reached in the trilogue negotiations in July 2021, and in October 2021, the amendments were officially adopted in Regulation (EU) 2021/1767.

The Document Request and the Judgment 

It is within this revision and negotiation process that the legal opinion at the core of the dispute in ClientEarth and Leino-Sandberg v Council comes into play. The version of the requested document that is currently available only in part contains a (legal) analysis of the ACCC’s findings of non-compliance, as well as a proposal for next steps to be taken, also in light of the (at the time) upcoming Meeting of the Parties to the Aarhus Convention (MoP). The crucial question, then, is why the Council, after providing only very restricted access to the requested legal opinion, still refuses to grant full access to this document. This question is all the more pertinent as the relevant negotiations have been closed and the changes to the Regulation have long since been adopted, leading the Court to quickly dismiss the argument that disclosure could undermine an ongoing decision-making process (Judgment, para 100).

The Council feared that full disclosure of the document would have two negative consequences for the Union. In its view, disclosure would threaten its ability to receive high-quality advice from its Legal Service because disclosing the full analysis invites external pressure and litigation due to its broad scope. Furthermore, disclosure would in the eyes of the Council hurt the Union’s ability to act effectively on the international stage. Both of these concerns relate to grounds protected by the Access to Documents Regulation, which contains exceptions to the general rule that Union institutions need to disclose documents.

The Legal Advice Exception

With regard to the Council’s first concern, the main dispute centred on the question of whether the document contained information sensitive enough to argue that disclosing it would endanger the Council’s ability to receive frank, objective, and comprehensive advice. Ever since the ECJ’s Turco ruling, institutions withholding access under this ground need to do more than describe an abstract worry. Instead, they need to “give a detailed statement of reasons” why they believe the legal advice in question is “of a particularly sensitive nature or [has] a particularly wide scope” (para 69).

To that effect, the Council in this case cited ‘external pressure’ and the large number of cases brought before the Union courts as evidence of the contentious nature of the subject matter (Judgment, paras 63 and 71). In such a controversial area, disclosing a broad legal discussion of the Union’s compliance with the Aarhus Convention in light of the proposed amendments could add fuel to the fire, and in turn, make members of the Council Legal Service hesitant to present their honest opinions in the future.

The Court deemed the argument based on the existence of ‘external pressure’ completely unsubstantiated (Judgment, para 65). This observation is to be applauded, given that the ‘external pressure’ in question amounted to nothing more than quite measured comments by NGOs and academics, including on this blog (Council Reply, para 37). Especially in legislative procedures, it is striking that the Council views critical engagement with the Union’s policies as ‘external interference’ rather than as a healthy sign of public engagement in the democratic process.

The second concern, regarding the broad nature of the legal analysis, and the related risk of litigation, was taken more seriously by the Court, as it acknowledged the many legal challenges against the Union’s compliance with the Aarhus Convention. However, the Council did not explain specifically how disclosing the document at hand would negatively influence such procedures. Indeed, how could legal advice that was not negative about the Commission’s proposal make it more difficult to defend the eventually adopted Regulation in court (Judgment, para 75)? Finally, the Court stressed that the amendment of the Aarhus Regulation could not and did not entail consequences for the standing criteria laid down by Article 263 TFEU. Thus, disclosing legal advice on the relation between the internal review mechanism and the remedies provided by the Treaties was considered unproblematic (Judgment, paras 84-85).

The International Relations Exception

The second ground for refusal by the Council related to the Union’s international relations. In the case law on this exception, institutions have generally presented two main rationales for secrecy (see Peters and Ankersmit for an overview). The first concerns information that reveals strategic objectives and tactical considerations, because external actors could in turn use that information to the detriment of the Union. The second main reason stems from the fact that certain documents are shared with the Union on a confidential basis and disclosing them could hurt the climate of confidence.

The Council in this case employed the first rationale, stressing that revealing the legal analysis would ‘compromise the Union’s position vis-à-vis the other parties to the Aarhus Convention’ (Judgment, para 107). In line with previous case law such as In ‘t Veld v Council, the Court required more than a mere fear, but rather an argument showing ‘how disclosure could specifically and actually undermine’ the Union’s interest in international relations (Judgment, para 108). Given that the ACCC itself had in fact recommended the adoption of the amendment to the Aarhus Regulation, and the Council’s Legal Service opinion in question was not negative to or critical of the amendment (paras 115-116), the Court failed to see how disclosure could weaken the Union’s position in negotiations with the Convention parties.

Simply a Piece of Uncontroversial Legal Advice? 

In general, the Court’s critical approach to the Council’s fears signifies a positive development in the case law concerning access to documents. As has been argued before by Leino-Sandberg, Union institutions generally showcase an attitude of ‘exasperation and foot-dragging’ when it comes to publishing legal advice. Moreover, in previous cases, the Court itself has been dangerously deferential to any justification presented under the ‘international relations’-exception. The fact that the Court carefully scrutinised the Council’s arguments and did not take the presented worries for granted is a laudable approach that brings the Union more in line with its own commitment to transparency (Article 1(2) TEU).

Still, the judgment relies on an assumption that can be viewed critically. The Court seems to infer that the concerned legal analysis cannot invite external pressure, litigation, or tough negotiations with Aarhus Convention parties, mainly because it does not take a negative stance towards the legislative proposal. However, based on the available information (and lacking knowledge of the full document), this assumption seems far from self-evident.

While the judgment only contains the positive comments of the ACCC on the 2021 amendments to the Aarhus Regulation (Judgment, paras 10, 18, and 92), the actual negotiations surrounding the Union’s compliance with the Convention are far from settled. Indeed, the ACCC determined in 2021 that while the amended Regulation constituted a ‘significant positive development’, overcoming certain remaining hurdles to the Union’s compliance with Articles 9(3) and (4) of the Convention would now depend predominantly on whether the relevant provisions are interpreted consistently with the objectives and obligations of the Convention (see the ACCC’s 2017 Report, paras 117-119).

Moreover, another concrete issue with the Aarhus Regulation’s review mechanism, concerning the impossibility of challenging state aid decisions, was raised in a different complaint and ACCC report, and has not been addressed by the 2021 amendment to the Regulation. At the last Meeting of the Parties (MoP) in 2021, a new decision on the Union’s compliance on this matter was postponed, as the Union extraordinarily requested more time to “analyse the implications and assess the options available” (see paras 54-55, 57).

The dilemma at the core of the negotiations to which the Council’s legal advice related thus appears anything but resolved. While we must await the Council’s disclosure of the requested document in full to know for sure what the advice really contains, the Council’s various communications allow some theorising.

What we know for sure is what the secret document does not address, as the Council explained at the hearing that the document (1) does not cover political or strategic aspects of the Commission’s proposal and the Union’s position in the Aarhus compliance negotiations, (2) does not cover the aspect of the state aid exception, and (3) does not relate to any other future international agreement (Report for the Hearing in Case T-683/21).

Furthermore, reading between the lines of the Council’s rather vague statements in the written reply to the document request and at the hearing, one can hypothesise what the document does address. It seems to concern the Union’s compliance with the Aarhus Convention’s access to justice obligations of Article 9(3) and (4) in a much more general way, and in relation to the limitations posed not only by the then-to-be-amended Aarhus Regulation but also by the Union’s overarching system of legal remedies under primary law. Indeed, according to the Council, the document “contain[s] an elaborate analysis, including questions relating to primary law”, concerning “the system of internal review as established under this regulation in relation to the system of legal remedies as provided for under Article 263 [TFEU]”, and the “legal feasibility of solutions that the European Union could implement to address the alleged non-compliance with the Aarhus Convention” (Council Reply, paras 50, 52, 69 and 70). More sensitive still, the Council explained at the hearing that the advice seems to cast doubt on the Union’s compliance with Article 9(3) and (4) of the Convention, potentially by interpreting the Aarhus Regulation and Union primary law in a way contrary to what the ACCC was expecting in its 2017 and 2021 reports (Report for the Hearing in Case T-683/21).

Thus, while the Court rejected the Council’s worries in relation to the sensitivity of the requested document, it does not seem unlikely that, within this document, the Council reflected on intricate matters of Union law and their relationship with the Union’s international obligations.

A More Principled Way to Reach the Same Conclusion

Although it is thus not implausible that the document contains politically and legally charged information, this does not mean that the Council was right to withhold access to it. While the Court, in line with case law such as ClientEarth (ISDS), coupled its review of the refusal to disclose with the sensitivity or strategic nature of the legal opinions, we argue that a more principled line of argumentation would have been desirable.

As argued previously by Peters and Ankersmit, the Court could have distinguished policy areas characterised by a zero-sum logic from areas characterised by a positive-sum logic. In the former realm, secrecy is classically viewed as a necessary evil to prevent adversaries from gaining too much insight into the Union’s internal deliberations. As alluded to by the Ombudsman, disclosure of information could indeed be dangerous if certain ‘key strategic interests’ are at play, such as military strategies or critical infrastructure. In contrast, the development of collaborative policies in fields like environmental law is typically spurred on, rather than hurt, by transparency and openness. The typical mutual benefits from cooperation in these areas even hinge on the trust parties obtain by being able to check on each other. Likewise, MoPs are generally open and transparent, and the Aarhus Convention itself contains a pledge to uphold a high degree of transparency for environmental information (Article 4).

The Court could have interpreted the Access to Documents Regulation in light of these considerations by making this distinction between areas where the need for secrecy differs widely. As a result, the Council’s fears would not justify secrecy. It cannot be said to be in the Union’s interest to hide legal advice as a strategic move to escape critical debates on the Union’s compliance with a crucial pillar of the system of international environmental law, the success of which relies on genuine cooperation and mutual trust amongst the parties. In our view, such a principled approach is to be preferred over implicitly increasing the level of scrutiny in the review, as it makes the Union’s transparency framework more robust, in line with the objectives of the Aarhus Convention.

To conclude, we suggest that the Council’s legal advice at the core of this judgment clearly contains information that the public should be able to access, even if this information continues to have strategic significance. How controversial the content of the previously hidden legal advice actually is should become clear soon, when the Council follows up on the judgment and discloses the full document.

The authors would like to thank Professor Päivi Leino-Sandberg for providing us with additional context on the case, as well as the Report for the Hearing in Case T-683/21. This document is not (yet) published online.





Stephanie Bell and Katya Klinova on Redesigning AI for Shared Prosperity


Stephanie Bell is a Research Fellow at the Partnership on AI affiliated with the AI and Shared Prosperity Initiative. Her work focuses on how workers and companies can collaboratively design and develop AI products that create equitable growth and high quality jobs. She holds a DPhil in Politics and an MPhil in Development Studies from the University of Oxford, where her ethnographic research examined how people can combine expertise developed in their everyday lives with specialized knowledge to better advocate for their needs and well-being.

Katya Klinova is the Head of AI, Labor, and the Economy Programs at the Partnership on AI. In this role, she oversees the AI and Shared Prosperity Initiative and other workstreams which focus on the mechanisms for steering AI progress towards greater equality of opportunity and improving the working conditions along the AI supply chain. She holds an M.Sc. in Data Science from University of Reading and MPA in International Development from Harvard University, where her work examined the potential impact of AI advancement on the economic growth prospects of low- and middle-income countries.

Robert Seamans is an Associate Professor at New York University’s Stern School of Business. His research focuses on how firms use technology in their strategic interactions with each other, and also focuses on the economic consequences of AI, robotics and other advanced technologies. His research has been published in leading academic journals and been cited in numerous outlets including The Atlantic, Forbes, Harvard Business Review, The New York Times, The Wall Street Journal and others. During 2015-2016, Professor Seamans was a Senior Economist for technology and innovation on President Obama’s Council of Economic Advisers.

You can watch a recording of the event here or read the transcript below:

Anton Korinek 00:12

Welcome. I’m Anton Korinek. I’m a professor of economics at the University of Virginia and a research fellow at the Centre for the Governance of AI, which is organizing this event.

Today’s topic is redesigning AI for shared prosperity. We have three distinguished speakers: Katya Klinova and Stephanie Bell, who will be giving the main presentation, followed by discussion by Rob Seamans.

Let me first introduce Katya and Stephanie, and I will introduce Rob right before his discussion. Stephanie Bell is a research fellow at the Partnership on AI, affiliated with the AI and Shared Prosperity Initiative. Her work focuses on how workers and companies can collaboratively design and develop AI products that create equitable growth and high-quality jobs. Stephanie holds a DPhil in politics and an MPhil in development studies from Oxford where [she conducted] ethnographic research which examined how people can combine expertise developed in their everyday lives with specialized knowledge to better advocate for their needs and well-being.

Katya Klinova is the Head of the AI, Labour, and the Economy program at the Partnership on AI. In this role, she oversees the AI and Shared Prosperity Initiative—which has developed the report that the two will be presenting today—and other work streams, which focus on mechanisms for steering AI progress towards greater equality of opportunity and towards improving the working conditions along the AI supply chain. She holds a Master of Science in data science from the University of Reading and an MPA in international development from Harvard where her work examined the potential impact of AI on economic growth for low-income and middle-income countries.

The concern underlying today’s webinar is that AI poses a risk of automating and degrading jobs all around the world, which would create harmful effects for vulnerable workers’ livelihoods and well-being. The question is: how can we deliberately account for the impact on workers when designing and commercializing AI? [How can we ensure] AI benefits workers’ prospects while also boosting companies’ bottom lines and increasing overall productivity in the economy? With this short introduction, let me hand the mic over to Stephanie and Katya.

Katya Klinova 03:15

Anton, thank you so much for hosting us. Thank you to the Centre for the Governance of AI—it is an absolute pleasure to be here. Thanks to everyone who is joining us for this hour to talk about redesigning AI for shared prosperity.

As Anton said, the work that we’re presenting today is part of the AI and Shared Prosperity Initiative. Recently, we released the Initiative’s agenda, which is our plan for research and action. You can download this agenda at partnershiponai.org/shared-prosperity. The [agenda] is not just authored by Stephanie and I. [Rather], it is a result of a multi-stakeholder collaboration by the steering committee, which consists of distinguished thinkers from academia, from industry, from civil society, and from human rights organizations. It was also supported by the research group that included someone very dear to us who is now at the Future of Humanity Institute—Avital Balwit. We want to say thank you to Avital and to everyone who supported this work.

The goal of the AI and Shared Prosperity Initiative is to make sure that AI adoption advances an abundance of good jobs, not just for a select profile of workers, but for workers who have different skills and different demographics all around the world. To advance that goal we decided, under the guidance of the steering committee, on the method of introducing shared prosperity targets. [These shared prosperity targets] are measurable commitments by the AI industry to expand the number of good jobs in the broader economy. These targets can be adopted voluntarily, or they can be adopted with regulatory encouragement.

The agenda that we’re [discussing] today is our plan for developing these targets and thinking through their accompanying questions. The agenda is structured in two parts, which are going to be broadly mirrored by our talk today. We begin by [describing] the background against which AI advancement is happening today. [Next,] we introduce the proposal for shared prosperity targets. Then, we analyse the critical stakeholders and their interests and concerns when it comes to adopting, encouraging, or even opposing the shared prosperity targets. Our presentation [will follow this format] today. [Additionally,] we’ll briefly discuss why set targets [can] expand access to good jobs for AI, the structure of the shared prosperity targets proposal [itself], and the key stakeholders, their interests, and the constraints that they’re facing.

Let’s begin by discussing the motivation for this work. Many of you have seen this graph before. It has certainly been shown in this very seminar before by David Autor. [This graph] shows the polarization of wages, which is especially pronounced for men in the US, though women also very much experience [this polarization]. This graph [reveals] that despite the economy growing almost threefold in real terms since the sixties, not everyone has [benefitted from] that growth. You can see that people with graduate degrees have experienced [large] wage growth, while other skill groups and educational attainment groups did not. [In fact], wages stagnated or even declined in real terms, which is quite staggering. [This pattern] definitely cannot be called “shared prosperity.” The risk and the worry are that AI will exacerbate or continue this wage polarization, [because] AI can be a skill-biased technology, [meaning it is] biased in favour of people with higher levels of educational attainment.

A [much-discussed] and very important solution is upskilling, or re-skilling, which we should [certainly] invest in. [This requires] educating people to help them navigate changing skill demands in the labour market. Nobody will ever argue against [improving] education [quantity and quality]. However, we need to be aware that if the overall demand for human labour goes down in the long term, upskilling itself will not fix the [core] issue: the scarcity of good jobs. [No matter] how much we retrain people, if there’s a declining number of good jobs in the economy, [retraining] will always be a losing battle.

The graphs you’re looking at are from a paper by Acemoglu and Restrepo, which shows that automation has picked up in the last 30 years, [which is a departure from the historical trend]. [Historically,] automation existed: automation is not something that only AI introduced into our economy. [But] automation was displacing humans from tasks [at the same rate] new tasks [were created]. [This has not been true over the last 30 years,] and the risk is that AI will continue this trend [of rapid displacement]. The last [concern] that I want to mention is that the impacts [of automation] tend to be global. There are no mechanisms for global redistribution of [automation’s] gains, which tend to be concentrated in a few countries’ firms and accrue to just a handful of individuals.

There is a memorable anecdote that I want to share with you. You’re looking at a picture of self-order kiosks introduced in fast food restaurants. Once the investment [in these kiosks] had been made in California and the rest of the United States, the cost of deploying [this technology] everywhere around the world became so low that no matter how low the wages were in some of the low-income and middle-income countries, workers [simply] couldn’t compete with the technology. The picture you’re looking at was taken in South Africa. Even before COVID, the unemployment [rate] in [South Africa] was 29%; it was not the time to be eliminating formal sector jobs. [However,] the march of technology knows no [limits].

[Given AI’s] global impact and [our current inability to] redistribute AI’s gains globally—either through taxation or other transfers—we need to think ahead and [consider how we can ensure] AI supports a globally inclusive economic future. One [frequent] recommendation, [supported] by a growing [body] of literature, is that AI [should] complement, rather than replace, human labour. While this sounds intuitive, in practice it can be difficult to differentiate between technology that complements labour and [technology] that replaces [labour]. [Our concept of] shared prosperity targets addresses exactly [this question]: how do you differentiate between labour-complementing and labour-displacing technology?

What makes this differentiation hard? In economic terms, the definition assumes that you know the [ex post] outcomes of technological advancements. A technology is called labour saving if it reduces the overall labour demand in the economy and [a technology] is called labour using if it creates growth in overall labour demand in the economy. [However,] it’s very difficult to know [how a technology will impact] labour demand. Early research in [technology] can be used [in] many different applications down the road. Some of those [applications] can be labour saving and some of those can be labour complementing.

Deployment contexts very much matter. The same application [of technology] can be used for different purposes: in the workplace, something can be used to augment peoples’ productivity or to surveil them. [While the technology is applied in the same way,] how [the technology] is used depends on the values and the orientation of the employer actually introducing [the technology] into the workplace. It’s also difficult to map micro[economic] impacts of a given technology to the macro[economic] trends in the economy, because the economy is a dynamic system with [many] interacting parts. It is very difficult to predict ex ante [how a technology investment will] impact that dynamic system, just like it is difficult to predict how a business or technology investment will impact the climate, because climate is also a very complex, dynamic system. And yet, people came up with the idea of tracking carbon emissions as a proxy for their impact on global warming. The shared prosperity targets are inspired by the carbon emission targets, in their pursuit of finding appropriate proxies that would, despite all of these constraints and sources of uncertainty, introduce a good enough measure to [determine] if a product or a system is likely to be labour displacing or labour complementing down the line.

I want to spend some time unpacking the connection between the micro[economic] impact of introducing technology in the given workplace and the macro[economic] consequences in the rest of the economy. Of course, there are direct consequences [of introducing technology]: there are people who [a firm] might be firing or hiring directly as [they] introduce technology in the workplace because now [the firm] needs fewer people of certain skill groups and more people of other skill groups. This is very intuitive.

We [also] want to make sure we’re not missing [technology’s] broader impacts, which can [occur] up or down the value chain. [For example,] after [a firm] introduces a new technology, [they] might require a different volume of interim inputs from [their] suppliers. [These suppliers] in turn might hire or fire workers to reduce or expand their workforce. [These are all examples of] indirect effects.

If introducing a new technology into the production process improves goods’ and services’ quality or lowers their prices, then some of the gains [of technology] are passed along to the consumers. The consumers are now, in real terms, richer: they can spend their free income on something else in the economy, which may create new jobs. We want to keep these indirect impacts in mind when we’re talking about the impact of technology.

Finally, [changes in] labour demand not [only impact] the [size of] the workforce—whether it is expanded or downsized—but also the quality of jobs and their level of compensation. Under lower demand for labour, jobs can become more precarious or [worse] paid, even if the total size of the workforce does not change. The ambiguity that I [described] between labour-displacing and labour-complementing technology gets even more complicated when people start describing their technology as “labour augmenting.” As of today, anybody can claim this title of “worker augmenting,” whether the technology grows productivity of workers and makes them more valuable to the labour market or the technology squeezes [every] last bit of productivity from [workers] using exploitative measures like movement tracking and not allowing [workers] to take an extra break. [The distinction] can be [extremely] blurry.

Shared prosperity targets would allow [genuine] producers of worker-augmenting technology to credibly differentiate themselves: if [producers] adopt ways to measure their impact on the availability of good jobs in the economy, then they would have receipts to show for calling themselves worker augmenting [rather than] worker exploiting. Shared prosperity targets are a proposal for firm-level commitments to produce labour-friendly AI while also keeping broader economic effects in mind.

There are three components that we want to track with shared prosperity targets: labour income, in the form of job availability and compensation; job quality, i.e. worker well-being; and job distribution. [Job distribution describes whom] new, good jobs are available to and for whom [good jobs] are getting scarcer. These groups can be split by skills, by geographic location, or by demographic factors. [Now, I’ll] turn it over to Stephanie to talk about incorporating workers’ voices into designing shared prosperity targets.

Stephanie Bell 20:21

Great, thanks so much Katya. Thank you as well to Anton and to FHI for having us here today to talk about the Shared Prosperity Agenda. We’re really excited to be in this conversation with you all.

In thinking about the next phase of applied research with workers, [we’re considering] the [most important] areas in the context of the shared prosperity targets—that Katya just mentioned—to ensure we’re taking into account workers’ priorities and needs. There’s been substantial research as to what constitutes a good job, or “decent work,” in the words of the ILO. There’s been research much more recently into the impact of artificial intelligence on different aspects of worker well-being and worker power.

Setting [shared prosperity] targets [requires] finding a sufficient amount of depth to address [workers’] real needs while also creating targets that are sufficiently clear and straightforward for companies to implement. [A framework] that covers all [worker concerns] is going to be less useful than one that is focused on workers’ high-priority needs. Our goals include [focusing] on job quality— [which we’ll] track within the shared prosperity targets—by identifying possible mechanisms, as well as required conditions, for workers themselves to participate in AI design and AI development. [Workers have] largely been left out of this process. [We aim to] identify places for workers to participate directly in this process and identify how technologies can not just not harm workers, but [rather actively improve] workers’ ability to do their jobs and also boost their workplace satisfaction. This would be a tremendous advancement in terms of the trajectory of these technologies.

Our approach [relies on] qualitative field research at different field sites around the world. Given the context of COVID, this is largely going to take place digitally, using [approaches] like diary studies, contextual observation, and semi-structured interviews to learn what workers have observed about the implementation of AI in their workplace as well as any [insights they have which will allow us to develop the] job quality component of the shared prosperity targets. Some might question the necessity of actively incorporating workers and [ensuring] we talk to a variety of people in different industries, occupations, and geographies. The rationale is that workers, regardless of their wage, formal skills, training, or credentialing, are experts in their own roles. [Workers] know the most about what it takes to get their jobs done: what the tasks are and how they experience their working conditions. By going directly to workers, we have an opportunity to understand their needs and make sure that we’re addressing their well-being and their power within workplaces, [rather than relying on] managers or company leaders as proxies and potentially misidentifying workers’ interests or missing the nuances of [workers’ experiences]. Finally, [incorporating workers is critical to our] process’ integrity. The entire point of this initiative [is to address workers’ needs]. Leaving [workers] out of the conversation would surely be malfeasance on our part, if we’re trying to make sure that we’re creating a set of targets that really does meet the needs of people whose voices are often left out of these conversations. We frequently have and [witness] conversations about the future of work that never [address] the future of workers, [and we’re trying to remedy this problem through our] work.

24:55

[Let’s transition to the] section of the agenda focused on key stakeholders’ interests and constraints. The first group that we’d like to give an overview of is workers themselves. We have two major areas of concern. [The first] is the impact of AI on worker power and worker well-being. In what ways are these technologies degrading or potentially benefiting workers? Katya mentioned [examples] like worker exploitation, aggressive surveillance, and privacy invasions. Another [area of concern is how] these systems impact workers’ ability to organize on the job, improve their working conditions, and grow their ability to participate in workplace decision-making. The [less contact] workers have with other humans during work, [the fewer opportunities workers have] to discuss their job quality and the harder it is for workers to effect change within their workplace. For example, you can introduce an effective scheduling software system that’s able to anticipate customer demand and then tailor your shift scheduling efficiently. [However,] this can radically disrupt workers’ lives by calling [workers] in at the last minute, forcing them to rearrange childcare, or causing them to worry that their job is in jeopardy if they aren’t able to match those needs. What we would want is for workers to be able to advocate for themselves—to have the opportunity to have a conversation with their supervisor, to make sure that their job is one that they can perform without having to worry about last-minute disruptions to their lives. However, once these decisions are no longer stemming from human-to-human conversations, you open up the opportunity for what Mary Gray called “algorithmic cruelty” to be the decision-making power within a workplace.

Stephanie Bell 27:05

The second area that we’re focused on is how worker voice [can direct] AI development and deployment. As I mentioned earlier, workers have a tremendous amount of expertise in their own tasks and [insight into how they can] improve their efficiency and productivity. For example, perhaps there are opportunities to improve safety or working conditions using technology. Depending on who’s [raising the concerns that] technology is designed to address, very different technologies are implemented. We believe that we [must] take workers seriously: they are impacted by these technologies and [their insights can be] quite generative and a real benefit to AI development companies.

Then the big question is: what are the mechanisms for change? We’ve identified three major [avenues through which] workers can create opportunities for their participation, the first of which is unions and worker organizations. This is probably an obvious [approach] to this audience, but always worth noting. However, [it is tenuous to rely on] unions and worker organizations as the [sole] avenue for change: around the world, we’re at a historically low unionization rate, which means that workers might not be in a position of power when they’re coming to these conversations.

Second, companies often take into account user and stakeholder research and testing, if not with the actual workers in a given company, then with workers who are in some way similar to them. [Workers could better participate in technological decisions if they had the] opportunity to contribute to the [research and testing] processes in a way that actually had teeth, [for example, by saying,] “This is a step too far in terms of its impact on me and my co-workers.” [Alternatively, workers might say,] “Hey, there’s a design feature that you hadn’t thought about, that would be really useful to build.” We see real opportunity for workers to collaborate with AI designers as well as their corporate leadership to create “win-win” situations.

Finally, I think there are opportunities [for worker empowerment] within corporate governance and ownership structures. While this area is less defined in the context of artificial intelligence, historically, there are [successful models] like codetermination, cooperative ownership, shadow boards, and worker boards in which company leaders get the opportunity to have a sense of what workers think of a given product.

The second audience to discuss is businesses. One of the big questions in this work is: what would a business get out of committing [to shared prosperity targets]? As Katya pointed out, there is an opportunity for [businesses] to differentiate and gain credibility, especially when [they create] a genuinely worker-augmenting product as opposed to a worker-exploiting, worker-surveilling, or worker-replacing product.

There are also opportunities in the product development cycle. On the left side [of this slide] is a simplified graphic about the AI product development cycle. Ideally, AI-developing businesses find ways to commercialize research and develop workplace AI products which they sell to AI-deploying businesses—which are frequently a different set of businesses entirely. Those AI-deploying businesses purchase and implement AI products and then offer feedback both through their purchases and through direct communication with the firms from which they buy their technology.

[However, this idealized model] isn’t how the development cycle actually functions. Instead, [a great deal of development] is driven by research breakthroughs. This isn’t necessarily a bad thing. [However,] research breakthroughs [require] use case identification and many of these use case identifications follow the format that Katya has already described: they are quite anti-worker in that they automate tasks even in instances where doing so doesn’t increase productivity, or they exploit workers for the sake of [maximizing profits]. While this is not the whole universe of use cases, one reason [for the trend toward worker exploitation] is that businesses are [not] engaging with and listening to [workers about how products impact them]. The more [businesses] build in conversations with workers—and frontline workers in particular—the more opportunity [businesses] have to identify additional use cases and different ways these technologies can be implemented. Oftentimes, [engaging with workers] can [allow businesses and developers to] expand the productivity frontier, [rather than] swapping out a human worker—one form of productivity—for a robot or an algorithm—another form of productivity.

Other business stakeholders are involved as well, [including] researchers, developers and product managers, many of whom get into this kind of work because of the intellectual challenge and the opportunity but don’t want their products to harm other people. [This challenge creates an] opportunity for conversations between workers within tech companies. [Another stakeholder] is artificial intelligence investors. Investors, and particularly large institutional investors, frequently invest in spaces where it [is profitable] to have a robust labour market. Investment in automation technology creates problems for other investments within [these investors’] portfolios. We speak about this in more detail in the agenda.

The last audience that I’ll talk about is government. We saw three major opportunities for government to participate in steering AI [in a better direction] for society and to support workers in addressing their particular challenges. Right now, [there is a great deal] of government investment in basic research and [along] the commercialization chain. [However, this investment doesn’t have] any kind of constraints on technologies whose most obvious use cases are going to be harmful for society and consolidate gains within the hands of a few.

[First,] there are opportunities to assess the way that governments deploy their research funding and procurement processes to support an AI trajectory that is broadly socially beneficial in an economic sense. Second, there’s [been a focus on] identifying opportunities to support workers who would be navigating challenges created by AI. What would be the role of government if the trajectory were different—[if we were working toward mitigating AI-related risks]? [In this scenario, we may still implement] reskilling and universal basic income. The question is: how do we [avoid] creating a problem that we have to solve down the road, if [right now] we have an opportunity to [prevent] some of the most devastating impacts? Finally, low-income and middle-income countries have some very specific challenges that they need to work through. As Katya showed with her earlier example, these technologies, once created, have a very low marginal cost to implement anywhere that the company is operating, which could result in massive labour market disruption [without distributed gains] because there are no redistribution mechanisms. We think substantial work needs to take place in this space to ensure that low-income and middle-income countries don’t end up continuing to bear the brunt of the growth of the Global North. Katya, I’ll hand it over to you to cover international institutions and civil society stakeholders.

Katya Klinova 35:47

Thank you, Stephanie. I want to highlight a [section] from our chapter on international organizations and civil society. As Daron Acemoglu once said, “AI can become the mother of all inappropriate technologies for low and middle-income countries.” I showed you a photo from Twitter when I was talking about spillover effects of automation in developing countries because there are no graphs and no data that measure the magnitude or the extent of these spillover effects. We need much more research and attention to understand [these effects well]. If there is one thing we’ve learned from globalization, it’s that the expansion of trade can produce incredibly large gains and those gains can be quite concentrated. There are very real losers from [free trade], and [certain] populations can be hurt very badly. We shouldn’t repeat this story. Now, the expansion of very powerful technology opens up the possibilities for automation and [pushes out] the frontier of the kinds of activities that can be automated, in a way that is not globally controllable. We need to be much more attentive to these trends. The role of the international organizations can be very meaningful in balancing, pacing, and understanding the cross-border impact of [technology].

So, with that, we’ll hand it back to Anton. Just to remind everyone: if you’d like to read the full agenda, it is on our website. You can also email or tweet me and Stephanie—please get in touch if you’d like to be involved with this work. Thank you very much.

Anton Korinek 37:53

Thank you so much, Katya and Stephanie, for a very clear and inspiring presentation. Let me invite all the members of the audience to contribute questions for Katya and Steph in the Q&A box. You can also upvote questions asked by others to express your interest in them.

Robert Seamans has kindly agreed to be our discussant. Rob is an associate professor at New York University’s Stern School of Business. His research focuses on how firms use technology in their strategic interactions with each other and also focuses on the economic consequences of AI, robotics, and other advanced technologies. His research has been published in leading academic journals and has been cited in numerous outlets, including the Atlantic, Forbes, HBR, The New York Times, Wall Street Journal, and others. And in 2015-2016, Rob was a senior economist for technology and innovation on President Obama’s Council of Economic Advisers. Let me hand the mic over to Rob.

Robert Seamans 39:38

Anton, thank you very much for inviting me to discuss this paper. Let me start off by saying that I’ve been following the Partnership on AI for a number of years. I think it’s an excellent organization and I like the impact that the organization has been having. This particular initiative is very important work and very ambitious.

I’m going to start at a fairly high level with a definition. We’re using the term artificial intelligence, AI, which sounds very fancy. I think it’s useful to dumb it down. Here’s my definition of artificial intelligence: [AI] is a group of computer software techniques. At the end of the day, AI is highly sophisticated software and its algorithmic techniques rely on a lot of data. I’m not a computer scientist; [AI] is outside the realm of what I can [create]. However, that doesn’t mean that I can’t talk about AI—one does not need to be an expert in a specific technology in order to think through its effects on the economy and society. As a perhaps tortured analogy, I don’t know how to build a car; I certainly don’t know how to fix an engine. I would probably even have trouble changing the oil in my car. But that doesn’t prevent me from thinking deeply about how changes in cars might affect the economy and society. The same is true [for AI] and for any technology.

AI and robotic technologies are developing and commercializing rapidly. They’ll likely lead to innovation and productivity growth, which is the good news. But according to some, the effects on human labour are unclear and potentially a cause for concern. I’ll spend half my time on the first part—[the good news]—and half my time on that very last bullet point—[the bad news].

First, [I’ll discuss] some basic facts. AI has been developing very rapidly. There have been many breakthroughs. Here is one example: this picture tracks progress in image recognition [on the ImageNet benchmark]. So, the y-axis shows error rate: the lower you go, the better off you are; the lower you go, the more progress there is. The x-axis shows what’s happening over time. Over time, [performance on the ImageNet image-recognition task] is dramatically improving. By 2015 or 2016, [algorithms surpass] human capacity in image recognition. This is one piece of evidence that [suggests] we have rapid breakthroughs [occurring] in the lab. Moreover, these rapid breakthroughs have led to these technologies’ commercialization. The panel on the left shows venture capital funding for mostly US-based AI start-ups—[the data] comes from Crunchbase. You can see a dramatic increase [in funding] starting roughly in 2010. Why is it useful to point this out? Well, venture capitalists have very strong incentives to make sure that they’re getting these investments right. They believe that breakthroughs in the lab have commercial applications, which provides some evidence [for the emergence of] commercial applications of AI.

It’s also useful to talk about what’s happening with robotics. Robots, of course, have been around longer than [AI] and to date have had more of an impact [than AI], particularly on manufacturing. There are some things happening with robots that are [useful for understanding] what might happen with AI. I’m going to talk about robots a little bit in my remarks.

The panel on the right looks at worldwide robot shipments. [There were] about 100,000 units sold annually until about 2010. Then, there is a dramatic increase, and by about 2016, three times that amount [were sold annually]. Once again, [this demonstrates] rapid commercialization of a new technology.

While it’s probably too early to say, I would bet there’s going to be a lot of productivity growth as a result of AI, as we’re already seeing with robots. Graetz and Michaels had a fantastic paper come out in The Review of Economics and Statistics in 2018. According to their study, robots added an average of 0.4 percentage points of annual GDP growth in the seventeen countries that they were studying between 1993 and 2007. This was about a tenth of GDP growth for those countries during that time period. I think that’s as good of a benchmark as we can expect. I suspect we’ll get a similar, if not greater, boost from AI, though frankly, it’s still too early to tell.
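[Editor’s note: the cited figures imply a quick back-of-the-envelope check. This sketch is an editorial illustration of the arithmetic only, not a calculation from the paper itself: if robots contributed roughly 0.4 percentage points of annual GDP growth, and that was about a tenth of the total, then average annual GDP growth in the sample works out to around 4 percentage points.]

```python
# Editorial back-of-the-envelope check, not a figure from Graetz and Michaels (2018).
robot_contribution_pp = 0.4   # percentage points of annual GDP growth attributed to robots
share_of_total_growth = 0.1   # "about a tenth" of total GDP growth

# Implied average annual GDP growth across the sample, in percentage points
implied_total_growth_pp = robot_contribution_pp / share_of_total_growth
print(round(implied_total_growth_pp, 1))  # prints 4.0
```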

We can look at robots to get excited about what AI can do. We can also look at prior episodes of automation. All prior episodes of automation, and particularly steam engines and electrification, have led to growth. One of my favourite studies is by David Autor and Anna Salomons from 2018 in the Brookings Papers on Economic Activity. They’re looking at episodes of productivity growth and the effect that these had on labour. The last column [shows productivity’s] net effect—when you have productivity boosts, you see an increase in labour.

What I find interesting, though, is that there’s a fair amount of heterogeneity in the supply chain. The direct effect is negative, the final demand effect is positive, the upstream effect is positive, and the downstream effect is noisy. There are two big takeaways from this. [The first takeaway] is that the net effect is positive for labour. The second takeaway is that there’s a lot of heterogeneity. We can think about other sources of heterogeneity, for example, within a given firm across different occupations in that firm.

Let’s [consider AI]. Katya and Anton, [in their] paper “AI and Shared Prosperity,” write that future advances in AI, which automate away human labour, may have stark implications for labour markets and inequality. I agree with that statement and I [think] there are [two important components to highlight]. The first is that AI is automating away human labour. And the [second important component to consider is] inequality. [My—albeit nascent—work] may [provide] early evidence [for these points]. [My research] suggests that [at this stage] firms are using AI for augmentation rather than for replacement. However, there’s also early evidence that the augmentation is disproportionately benefiting [only] some [people: automation’s gains are not widely shared]. [This provides evidence for AI’s] heterogeneous [impact] across occupations.

Over the past several years I’ve worked with Jim Bessen and a couple of other co-authors to survey AI-enabled start-ups. In the first wave of [surveys], we asked these AI start-ups, “What is the goal of this AI-enabled product that you’re creating? What is this product’s KPI when you’re trying to sell to customers?” [Start-ups] could [choose one or more from a range of possible answers]. Most [start-ups answered that their products were aimed at] making better predictions or decisions, managing and understanding data, and gaining new capabilities. [These answers suggested that technologies] augmented, rather than replaced, human labour. [On the slide,] I’ve highlighted in red the answers “automate routine tasks” and “reduce labour costs,” [as these answers suggest] replacement. However, these [replacement-indicating reasons were not among the] top reasons that these firms gave. [While there may be] some evidence that AI is being used to replace human workers, [most] technologies are being used to augment work.

[Now, let’s consider] inequality. [We’ll turn to] a paper [I wrote] with Ed Felten, a computer science professor at Princeton, and Manav Raj, a PhD student I work a lot with. We came up with an AI Occupational Exposure Score: for each occupation in the US, we’ve come up with a way to describe how exposed that occupation has been to AI. Now let’s [segment these occupations] into three [categories]: low-income, middle-income, and high-income occupations. Let’s look at employment growth and wage growth over a ten-year period. The positive coefficient for high-income workers’ employment growth suggests that as the high-income group is more exposed to AI, they will see larger employment growth. The same holds true for wage growth: as these occupations are more exposed to AI, they will see faster wage growth. [In contrast], the [opposite is true] for low-income workers’ employment: [as exposure to AI increases, employment growth decreases]. This suggests AI may be exacerbating inequality.
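[Editor’s note: the sign pattern Rob describes can be illustrated with a small regression sketch. Everything below is synthetic and invented for illustration—the exposure scores, growth rates, and slope values are assumptions, not the actual Felten, Raj, and Seamans data or estimates.]

```python
import numpy as np

# Synthetic illustration of the described coefficient pattern: AI exposure
# associated with faster employment growth for high-income occupations and
# slower growth for low-income ones. All numbers are invented.
rng = np.random.default_rng(0)

n = 200  # hypothetical occupations per income group
exposure = rng.uniform(0, 1, n)  # stand-in AI Occupational Exposure Score

# Assumed "true" relationships plus noise: positive slope for high-income
# occupations, negative slope for low-income occupations.
growth_high = 0.5 * exposure + rng.normal(0, 0.1, n)
growth_low = -0.3 * exposure + rng.normal(0, 0.1, n)

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    x_c = x - x.mean()
    return (x_c @ (y - y.mean())) / (x_c @ x_c)

print(f"high-income slope: {ols_slope(exposure, growth_high):+.2f}")
print(f"low-income slope:  {ols_slope(exposure, growth_low):+.2f}")
```

[Run on this seeded synthetic data, the estimated slopes recover the assumed signs: positive for the high-income group, negative for the low-income group.]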

So, what’s the solution? You’ve heard about one solution from the authors of the [Shared Prosperity Agenda]: they’re developing a framework to enable ethically minded companies to create and deploy AI systems. I think that this is a very good solution. The first question that I have is: “How likely is it that firms would self-regulate to adopt such a framework?” I’ve asked a [similar] question, with Jim Bessen and co-authors, by surveying AI start-up firms. We [surveyed] firms [to learn] how many of them have adopted a set of ethical AI principles. I expected, ex ante, a low number of firms [would have ethical AI principles]. We learned that about 60% of these firms said that they did [have ethical AI principles]. Perhaps unsurprisingly, there’s a fair bit of heterogeneity across different industries.

Coming up with a framework is a useful solution, because firms actually will adopt these. Now, under what conditions might firms be most likely to adopt these principles? Based on correlational evidence from the survey that we did, [represented on the slide in] columns one and two, we found that when AI start-up firms are collaborating closely with a large tech firm—like Microsoft, Google, or Amazon—these smaller firms are much more likely to adopt these AI ethical principles. One potential [strategy this evidence suggests could be effective] is to [develop a] framework [for ethical AI] and specifically target large companies to adopt this framework first so that smaller companies will follow.

Katya and Stephanie highlighted some of the larger macroeconomic and labour market trends [and described] increasing inequality and declining union membership. Other [trends include] falling labour force participation and rising industry concentration. These high-level macroeconomic trends are important to keep in mind because [they differentiate this episode of automation] from prior episodes. The conditions under which electrification or steam [power were introduced] were very different than they are now. [These changing conditions] are useful to keep in mind.

How do we measure corporate shared prosperity targets? While it’s easy to [call] something measurable, it’s much harder to actually measure it. [This is something I hope to] push the authors on.

[Finally, let’s consider] customers [as a lever of change]. Stephanie described the three different mechanisms that could be used to push for change, but she [didn’t discuss] customers or “end-users.” We know that customers are an important stakeholder and can [cause] firms to adopt [ethical] standards. [Consider a] perhaps tortured analogy. When I’m purchasing eggs, I care about how the chickens were treated and I’m willing to pay a [small] premium [for better treatment]. You could imagine the same kind of mechanism at play here. When customers—people like you and me—are willing to pay a little bit more for a product that’s been certified as treating workers [fairly, that can motivate firms to adopt ethical standards]. [Stephanie and Katya] can add customers to the set of stakeholders that they’re thinking about.

Anton Korinek 55:21

Thank you so much, Rob, for the insightful discussion and comments. Let me give Katya and Steph an opportunity to offer their reactions.

Katya Klinova 55:31

Rob, thank you so much. These were great comments that we will use in the future. I couldn’t agree more with you. For the record, we’re not anti-automation; historically, automation has gone [well]. [However,] as you described, the societal and economic conditions [today] are different [than they were historically]. We cannot [uncritically] rely on past successes to be confident about our future success. I completely agree with your point about measurement: [now, the work is to understand how to measure these targets we’ve developed].

Stephanie Bell 56:34

Thank you so much for these comments, Rob. I think they’re extremely insightful and helpful as we forge ahead with all of this. I really appreciated your point about customers and their role [in motivating ethical behaviour]. The evidence—at least that I’ve seen—seems mixed about the degree to which people are willing to trade off on their surplus of productivity in order to help workers. However, there are people who are willing to buy free-range and cage-free chicken eggs and there are people who prefer Patagonia to North Face as a result of [Patagonia’s] supply chain and environmental principles. I think [there’s certainly a group of people who] are an audience for this work and we’re thinking hard about how we engage [this group].

The other part of your comments that I really appreciated was [your examination of] automation versus productivity and what that [distinction means] for workers. As I’ve [read] the [literature] on AI’s impact on workers, [I’ve learned] the degree to which many so-called augmentation technologies are a new-fangled Taylorism or Fordism. [Augmentation technologies employ a] very old managerial style [that is] technologically enabled to be much more aggressive. [For example,] injury rates in Amazon warehouses that have AI-enabled robotics are much higher [than in warehouses without robots]. [AI isn’t creating a] Terminator scenario—[rather, AIs are] colliding into people on the warehouse floor. [The danger] is about increasing work and job intensity to the point where people are badly injured from repetitive stress injuries. [As we consider] measuring [these metrics, we have to walk a] fine line between augmentation and exploitation.

Anton Korinek 58:35

Thank you, Katya and Steph. The agenda that you laid out proposes [a path] to ensure that progress in AI will be broadly shared with workers. I agree that [this is very important] for the short-term and medium-term future.

At GovAI we are also interested in preparing for a long-term future in which potentially all jobs can be performed more effectively by machines [than by people]. In this future scenario, it would cost less to pay a machine than a human for the same type of work. Equilibrium wages would not even cover the basic subsistence [needs of humans]. Steph has already hinted at this possibility during her presentation. One of the members of the audience, Vemir Michael, phrased it this way: “In the long term, can shared prosperity be [managed] within the company environment, as workers [self-advocate]? [Or, will there need to be] deeper [governance, in the form of a] government structure? [Or, must there] be a societal shift?”

So, let me ask you: how do you think about this potential long-term future in which all jobs may be automated? How does it square with the agenda that you are advancing? How do you [negotiate] the tension between using work as the vehicle to deliver shared prosperity and the fear—that some in the field have—that there may be no work in the future?

Katya Klinova 1:00:12

[It’s essential to] make sure there is work in the interim [period], [given that during this interim,] work will still be the main vehicle for distributing prosperity around the world and the main source of income for the majority of the global population. [This work availability] is actually a precondition for long-term AI progress. If the decline in labour demand and the elimination of good jobs happens too quickly, there will be so much social discontent that it could preclude technological progress from happening. We need to pace the development of technology [with] the development of social institutions that enable redistribution. Eventually, we may need to decouple people’s prosperity, dignity, and well-being from their employment. Right now, [however,] we are in a society in which [work and well-being] are tightly coupled. We cannot pretend we’ve already figured out how decoupling can be done painlessly and globally. Even the boldest proposals [don’t propose] large [redistributions: they are in the range of] $10,000 to $13,000 per year. I don’t want to say anything at all against UBI—I think social safety nets are incredibly important and very much needed. We just need to be realistic about [what is] sufficient in the interim. [This] interim period is a precondition for success in a future in which nobody needs to work to survive.

Stephanie Bell 1:02:35

I fully agree [with Katya]. The devil really is in the details when [considering] the feasibility of different approaches to the trajectory of [AI] and its impact on people’s livelihoods. I think, based on my previous work in democratic theory and trust building across different social groups, and considering the current political environment, that we are more likely to convince an important subset of capitalist companies to ever so slightly decrease their bottom line than to put in place large-scale redistribution. And that’s just [considering redistribution] in a given nation state, let alone across nation states. [Redistribution requires] functioning democratic governments. Unfortunately, right now, we’re seeing many governments—which would consider themselves to be democratic—backtracking. Given this, what does a transition period look like? What is the best way to work toward a jobless future? How do we ensure that [our path to this future is] humane for everybody involved? Unfortunately, I’m not optimistic that near-term redistribution is the solution.

Anton Korinek 1:04:08

Thank you, Steph and Katya. Let me read the next question from Markus: “Could you say more about the role of policy in shifting AI technology in a labour-augmenting direction?”

I’ll add my own follow-up question for Rob specifically. The agenda for redesigning AI for shared prosperity has focused on making AI more worker friendly, [especially] in the private sector. I think we all agree this is an important starting point. Rob, you also have considerable experience in public policy settings. I wanted to ask you [how you would approach creating] public policies to support shared prosperity. What would be your advice on how to best go about making public policy work useful and appealing to policymakers?

Robert Seamans 1:05:27

I agree with much, though maybe not all, of what Stephanie and Katya have said. [While they didn’t cover this,] I worry about [a specific segment of] AI policy [focused on] addressing inequality. The first reason [I’m concerned relates to what] Katya said earlier: it’s very difficult to know ex-ante if technology will be labour displacing or labour augmenting. We can only [make this distinction] ex-post. I don’t think it makes any sense to try to create a policy focused on taxing certain technologies because we think [these technologies] are going to be labour replacing—I worry about the distortions that [tax] would impose. The second reason [I worry about this policy] is that the larger trends we’ve touched on, like declining union membership, increasing inequality, declining labour force participation, and increasing industry concentration are first-order concerns. We want to be addressing these before coming up with policy that’s specific to AI. That being said, I like The Partnership on AI’s approach because it gets firms to engage in self-regulation, which I think [is a better approach than] government-imposed [regulation]. There is a role for government to play, as a convener of different firms, stakeholders, workers, and customers to arrive at a set of principles that firms might be more willing to adopt rather than less willing to adopt.

Anton Korinek 1:08:00

Katya and Steph, would you like to add your thoughts on policy?

Katya Klinova 1:08:07

It’s, of course, scary if the government begins taxing something that is likely to be labour displacing ex ante. However, the government does fund a great deal of technology R&D which can [affect development] in the private sector. If the government, [in addition to implementing] other policies, starts thinking [ahead] and lays the groundwork for labour-complementing technologies, it [could steer] AI away from becoming excessively automating. Interest rate policy and immigration policy can influence the supply of labour and [impact how likely firms are to] invest in automation. We want the government to be aware of AI’s capacity to [increase] inequality by benefiting high-skilled workers and to think through what [it] can do to create conditions in which the private sector [makes] investments in labour-complementing technology.

Stephanie Bell 1:09:58

I wholeheartedly agree with Rob’s point: a tax that targets AI specifically is likely to cause quite a few distortionary effects, as many of the problems that emerge from AI also emerge from other technologies. To the extent that [we focus] on dealing with the impacts of technological change on workers’ well-being, worker power, and worker livelihood, a more encompassing set of regulations or approaches would be [warranted].

[Currently, a great deal of] AI research is targeted at human parity metrics: how well can this technology replace a human doing the same task? That’s a very different kind of metric than one focused on what we can achieve when technology is working together with a person on a [given] task. Using something other than a human parity metric to measure success could help the government [steer] AI research to be more augmenting and potentially less exploitative.

A second thought [concerns Katya’s comment on the] taxation scheme. Capital and labour are treated differently in tax schemes around the world. If [government] makes it much cheaper—at least in terms of accounting gains—for a company to purchase software or a robot to do a given task, then [the government is] disadvantaging a worker who could be doing those tasks instead. If aggressive depreciation gives [firms] tax advantages— [as happens in] the United States—on any piece of equipment or a capital investment, but all labour-related [expenses] incur a payroll tax, then [the government creates] two different incentives to replace labour.

Finally, I think many of these problems [stem from] labour law. Places like the United States in particular would benefit from having more stringent laws to protect workers from workplace injuries and exploitation and to safeguard workers’ livelihoods, wages, and hours. Putting [these protections] in place, either through additional rules or heavier fines for breaking these laws, would steer companies away from using more exploitative technologies.

Robert Seamans 1:12:44

I completely agree with these comments, Stephanie and Katya. In particular, I think the point about the different ways that capital and labour are taxed is very important.

[Let’s consider a scenario] that I would like to get your reaction to. Let’s say the Partnership on AI successfully comes up with the framework that you’re in the process of developing. Might [implementing] a policy that the government can only purchase from firms that have adopted this framework [create an incentive] for firms to adopt [these principles]?

Katya Klinova 1:13:27

I would love that. Right now, the government procures a lot of technology. If the government recognized long-term decline in labour demand as [important], how would they [evaluate] criteria for which technology to buy from whom? Would they just decide based on marketing [information] on the website that says, “this technology augments workers”? Or would [the government] ask for disclosures? [If so,] what kind of disclosures would they be looking for? What would be measured? We think this framework could be useful—even if not mandated as law—as a [means] to inform decision makers who handle government procurement of technology.

Stephanie Bell 1:14:58

A question from Michelle: “What do each of you believe will be the biggest challenge in redesigning AI for shared prosperity? Will [the challenge] be [from] a specific industry? The engagement from a specific stakeholder group? Or [will it be] something else? Is there a [consensus] on the largest challenges among your research team?”

I should also add that Michelle is asking how she can best continue the conversation, so perhaps tell us again how to find out more about the Shared Prosperity Agenda on the PAI’s website.

Katya Klinova 1:15:38

Michelle, thank you for the question. For you and for everyone that would like to stay in touch with us, there is a form to leave your email and sign up for updates on these discussions and conversations. All of this is on partnershiponai.org/shared-prosperity.

We will see right now if there is agreement on the biggest challenge. The immediate challenge for us is to figure out a reliable, robust way to measure [our goals] that would be intellectually honest and substantive but at the same time intuitive and simple enough to explain, so that a lot of people could get behind it. [Referring back to the] example of eggs in the store: there is one simple label you’re looking for, the “free range” label. [To consider another example,] corporate carbon emission targets proxy a very complex system with something easy to understand and get behind, though it [still] took two decades to build momentum behind them. And governments [still have difficulty] deciding which investments are environmentally sustainable or not.

We don’t have decades for this work because AI progress, and its impacts on labour, are happening [so quickly]. How do we quickly [develop] a metric that is substantive but intuitive? This is the question that keeps me up at night.

I should add that Anton is the Senior Advisor to the initiative and a member of our steering committee. I couldn’t have done [this work] without his support.

Stephanie Bell 1:17:58

I agree with Katya: getting this set of metrics right is going to be our biggest challenge because developing intellectually honest and rigorous metrics is hard. [Another challenge is] finding a way to translate that rigor into something that’s easily implementable, especially for companies that don’t have a team of in-house macroeconomists and microeconomists. [We have to distil our metrics] so that companies can [understand them], support [our] cause, and [feel capable of implementing these targets]. Our work over the next couple of years will be to figure out how to make [metrics] that are coherent and actionable.

Anton Korinek 1:18:51

Thank you, Katya and Steph. Now let me ask you, perhaps as a concluding question, if one of the members in our audience is an AI developer, what tangible next steps would you recommend that they take to advance shared prosperity through their work?

Katya Klinova 1:19:24

If you read the companion paper “AI and Shared Prosperity” on our website, we lay out steps that could be [useful for] AI developers. If you would give us feedback on [whether these steps] are working for you and whether they’re helpful or not, that would be [very] appreciated. [Another way to help would be to] spread the word: AI developers and innovators at large have a responsibility to think about their economic impacts on labour and on the distribution of good jobs. I do not think that this notion of [developer and innovator responsibility] is broadly accepted. [You can advance the cause by helping this] become more of a norm and an expectation.

Stephanie Bell 1:20:15

[I echo] everything that Katya just said. [It’s important to] push for economic impact as a fundamental part of AI ethics. AI has advanced impressively along a number of different tracks. For whatever reason, the economic impact of these technologies is not a part of that conversation. The more we’re able to bring awareness to how [AI’s economic impacts affect] people’s livelihoods, the better the opportunity we have for success in [steering AI in a] positive [direction].

Anton Korinek 1:20:55

Let me say thank you to Katya, Steph, and Rob, not only for your presentations and the discussion, but also for the thoughtful conversation that followed.

Thank you and we hope to see you at our next webinar.


21May

The Centre for the Governance of AI has Relaunched


In Brief

Today, the Centre for the Governance of AI (GovAI) has relaunched as a nonprofit organisation.

Our mission remains the same: We are building a global research community, dedicated to helping humanity navigate the transition to a world with advanced AI. Our core research activities will also remain largely the same. However, owing to the greater flexibility afforded by our new nonprofit structure, we will also be expanding our field-building activities. 

The other pages on our new website offer more detailed descriptions of our mission and history, research, team, events, governance structure and approach to conflicts of interest, and opportunities for involvement. We are currently hiring for Chief of Staff and Research Fellow roles and accepting applications to our Summer Fellowship program.

Our Activities

We will maintain our core activities of producing, supervising, and coordinating research; running a twice-annual fellowship program for junior AI governance researchers; hosting both internal research seminars and a public seminar series; advising decision-makers; and offering career advice, connections, and informal mentoring to promising people entering the field. In the coming year, we will also host our inaugural AI governance conference, begin awarding student research prizes, and explore an expansion of our policy-advising work.

Over approximately the next two years, we are planning to experiment and learn more about how we can provide the most value in our current form. We may adjust or halt current or planned activities if we are not sufficiently convinced of their impact. We may also trial additional activities such as grantmaking and scholarship programs or a program reminiscent of the Forethought Fellows program.

Our Structure and Team

We are based in Oxford, in the same building as the Future of Humanity Institute, Global Priorities Institute, and Centre for Effective Altruism, with a global network of collaborators and affiliates spread across several institutions. See here for a description of our governance structure.

Our Acting Director, Ben Garfinkel, leads the organization. Ben is a Research Fellow at the Future of Humanity Institute and the head of its AI Governance Team. He has been involved with GovAI and its predecessor organizations for five years: he was a founding member of the Yale University research group that evolved into GovAI.

Our President, Allan Dafoe, advises the organization and collaborates on research projects. Allan is also the founder and previous Director of GovAI. He currently heads DeepMind’s Long-Term AI Strategy and Governance Team.

The rest of the core team consists of:

  • Markus Anderljung, as Head of Policy and Research Fellow.
  • Joslyn Barnhart, as Applied Research Lead.
  • Alexis Carlier, as Head of Strategy.
  • Noemi Dreksler, as Survey Researcher.
  • Anton Korinek, as Economics of AI Lead.
  • Anne le Roux, as Operations Manager.
  • Robert Trager, as Strategic Modelling Team Lead.
  • Eoghan Stafford, as Strategic Modelling Researcher.

Toby Shevlane will also soon join the team as a Research Fellow.

Our Advisory Board consists of Ajeya Cotra, Allan Dafoe, Helen Toner, Tasha McCauley, and Toby Ord.

Our broader affiliate community consists of Miles Brundage, Tantum Collins, Diane Cooke, Jeffrey Ding, Sophie-Charlotte Fischer, Carrick Flynn, Ulrike Franke, Hiski Haukkala, William Isaac, Jade Leung, Cullen O’Keefe, Jonas Schuett, Stefan Torges, Andrew Trask, Brian Tse, Waqar Zaidi, Baobao Zhang, and Remco Zwetsloot.

Opportunities

We are currently accepting applications for two roles: Chief of Staff and Research Fellow.

The Chief of Staff would serve as the “central node” within GovAI, reporting only to the Director. A range of crucial responsibilities, most of which currently sit with the Director, would be delegated to the Chief of Staff. We believe that the right candidate could significantly increase the organization’s long-run impact and ability to expand its activities.

Research Fellows will be expected to produce research that bears on important problems and open questions in AI governance. We are interested in candidates from a range of disciplines, who have a demonstrated ability to produce excellent research and care deeply about the long-run impacts of AI. The role would offer significant research freedom, access to a broad network of experts, and opportunities for collaboration.

We are also currently accepting applications for our Summer Fellowship Program. This program provides an opportunity for early-career individuals to spend three months working on an AI governance research project, learning about the field, and exploring different ways to contribute.

Acknowledgements

We are grateful to Open Philanthropy, the Centre for Emerging Risk Research, and Effective Altruism Funds for financial support; to the Centre for Effective Altruism for providing us with temporary fiscal sponsorship; to the Future of Humanity Institute and the University of Oxford for having provided an excellent initial home; and to the countless individuals who have supported or been part of GovAI over the years.
