21 May

Noah Feldman, Sophie-Charlotte Fischer, and Gillian Hadfield on the Design of Facebook’s Oversight Board


Noah Feldman is an American author, columnist, public intellectual, and host of the podcast Deep Background. He is the Felix Frankfurter Professor of Law at Harvard Law School and Chairman of the Society of Fellows at Harvard University. His work is devoted to constitutional law, with an emphasis on free speech, law & religion, and the history of constitutional ideas.

Sophie-Charlotte Fischer is a PhD candidate at the Center for Security Studies (CSS), ETH Zurich and a Research Affiliate at the AI Governance Research Group. She holds a Master’s degree in International Security Studies from Sciences Po Paris and a Bachelor’s degree in Liberal Arts and Sciences from University College Maastricht. Sophie is an alumna of the German National Academic Foundation.

Gillian Hadfield is the director of the Schwartz Reisman Institute for Technology and Society. She is the Schwartz Reisman Chair in Technology and Society, professor of law and of strategic management at the University of Toronto, a faculty affiliate at the Vector Institute for Artificial Intelligence, and a senior policy advisor at OpenAI. Her current research is focused on innovative design for legal and regulatory systems for AI and other complex global technologies; computational models of human normative systems; and working with machine learning researchers to build ML systems that understand and respond to human norms.

You can watch a recording of the event here or read the transcript below:

Allan Dafoe  00:09

Okay, welcome. Hopefully you can all hear and see us. So I am Allan Dafoe, Director of the Center for the Governance of AI, which we often abbreviate GovAI, and which is at the University of Oxford’s Future of Humanity Institute. Before we start today, I wanted to mention a few things. One, we are currently hiring a project manager for the center, as well as researchers at all levels of seniority, for GovAI and the rest of the Future of Humanity Institute, including those interested in further work on this topic. So, for those of you in the audience, take a look. A reminder that you can ask questions in this interface at the bottom, and you can vote on which questions you find most interesting. We can’t promise that we will answer them, but we will try to see them and integrate them into the conversation.

Okay, so we have a very exciting event scheduled: we will hear from Professor Noah Feldman about the Facebook oversight board and his views about what a meaningful review board for the AI industry would look like. Noah is a professor of law at Harvard Law School, an expert on constitutional law, and a prominent author and public intellectual. We’re also fortunate to have two excellent discussants with us today. Gillian Hadfield, who’s in my bottom right – maybe it’s the same for you – is a longtime friend of GovAI. She is the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, where she’s also professor of law and of strategic management. She also has affiliations at the Vector Institute for Artificial Intelligence and OpenAI. Gillian has produced a lot of fascinating work, including some on AI governance; specifically, I’ll call out her work on regulatory markets for AI safety. She’s also doing some interesting work on how machine learning can learn and adapt to human norms. Our second discussant is Sophie-Charlotte Fischer. I’ve actually known Sophie from before GovAI was even established: she was a founding member of GovAI’s predecessor, the Global Politics of AI Research Group, at Yale in 2016, if you can believe those old days. She is currently a PhD candidate at the Center for Security Studies at ETH Zurich, and she continues to work with us as a GovAI affiliate. She has done great work on a range of topics, including the ethics and law of autonomous weapons, US export controls for AI technology, and the idea of a CERN for AI – one that, specifically, might be put in Switzerland. In 2018, she was listed as one of the 100 Brilliant Women in AI Ethics, and in 2019 she was a Mercator Technology Fellow at the German Foreign Ministry. So thank you, both of you, for joining us.

Now I’ll share a little bit of background on how I came to this topic and learned about Noah Feldman. I first learned about him from reading his fairly recent book on the US founding father James Madison. At the time I was, and I still am, struck by how much of the work in AI governance has the character of a constitutional moment: we have this opportunity, it seems, to set not just norms for the future, but also to build deeply rooted institutions which could shape decisions for decades to come. So at the time, I wanted to learn more about James Madison, as he seemed to be one of the best examples of how someone who is deeply committed to scholarship can have a centuries-long impact through the formation of long-lasting institutions. Noah Feldman, as I understand it, came to this conversation the other way around. In 2017, he had just finished publishing this biography of James Madison, and he was visiting the Bay Area, talking to some people in the tech community and staying with his friend Sheryl Sandberg, when this insight, this constitutional insight, came to him that what Facebook needed was a Supreme Court. He sketched what that would look like; Sheryl Sandberg and Mark Zuckerberg were interested. And now, two-plus years later, the oversight board is on the cusp of starting its work and represents, in my view, a radical new experiment in corporate governance and technology governance. I find this origin story so fascinating because it shows, as with James Madison’s life, how a life of committed scholarship can suddenly, and potentially profoundly, offer useful insights that can shape history. So with that, we will now hear from Professor Noah Feldman about his thoughts on the Facebook oversight board and the governance of AI. Noah.

Noah Feldman  04:30

Thank you so much for that generous introduction, I’m really grateful for it. I have a special feeling for Oxford from when I was a doctoral student. And you say that 2017 was a long time ago, but I am of the generation who, when we were students at Oxford, knew the computer center as one room by the Old Parsonage Hotel with a bunch of mainframe computers that you could use for email. And the idea that the university, which treated my work on medieval Islamic political philosophy [garbled] talk about Aristotle and Plato than it was to talk about the Middle Ages, would eventually become a leader in spaces like the governance of AI was literally unimaginable. So I’m thrilled by that, and very excited to be here with all of you today, under the auspices of the GovAI webinar.

As you mentioned, I came to the issues here from the standpoint of governance, and specifically from the governance standpoint of constitutional design. If you think about it, constitutional design as a field is about the management of complex social conflicts through the creation of governance institutions. That’s not a terrible summing up of what the whole field of constitutional design is about. And as you mentioned too, I was thinking about constitutional design in a specifically American context, because of this book I wrote about James Madison, who was, after all, the chief designer of the US Constitution. But I’d also been lucky enough to work most recently in Tunisia on constitutional design there after the Arab Spring, and earlier in Iraq on constitutional design there, although under much different and worse circumstances of US occupation. So the design issues were of recurring interest to me. But those were always in the context of states. It was always in the context of the state as the locus for the creation of a governance institution to manage some form of social conflict. And I was in fact at Stanford giving a talk at Michael McConnell’s seminar. We’ll come back to Michael McConnell in a few moments, because, as some of you may know, he’s one of the new co-chairs of this Facebook oversight board, a constitutional law professor, and a former judge. I was speaking entirely about Madison, and that was what was on my mind. And then, as you say, I was also having some conversations with people at Facebook about content moderation, because like so many other people in the field of free expression, which is one of my fields, I was trying to figure out what free expression was going to look like in a world where more and more expression took place on platforms. It was that juxtaposition of thoughts about the development of new challenges for content moderation and, simultaneously, the idea of institutional design to manage social conflicts through constitutional mechanisms, that I think led me to think, on a long bike ride up in the hills behind Palo Alto, that actually Facebook and other platforms could benefit from the introduction of a governance mechanism that has traditionally been used, and intensively for the last, you know, 50 or 60 years in liberal democracies, to manage the social conflict around what speech should be allowed and what speech should not be allowed: namely, the Constitutional Court or Supreme Court model. That is, in its essence, a model where there is an independent body that is not directly answerable to the primary first-order decision maker; that has a set of clearly articulated principles on which it relies to make decisions; and that transparently describes to the world why it has made the decisions it has made, via an explicit balancing of competing social values, such as the value of equality, the value of dignity, the value of safety, and the value of free expression.

And so I thought, perhaps an institution like that could be tried in the context of the private sector, of a corporation, even though essentially it had never been done before. And the reason that it hadn’t been done, I think, is largely that we imagine the institutional governance solutions associated with states as solely appropriate for the public sector, and not as appropriate for private actors or private entities. And of course, the difficulty with that restrictive view is that it then deprives us of a whole realm where serious attempts to solve institutional governance problems have been made, on the ground that, well, this is the private sector and not the public sector. If you imagine the kind of cognitive divide that people often make, they think: well, if the government is going to regulate us, then it would be appropriate for the government to use its institutional governance mechanisms; but if we’re going to be regulated by a private sector entity, a whole other set of mechanisms are appropriate and kick in. And that is really an artificial divide. I wouldn’t say it’s arbitrary, but it’s an artificial divide in the negative sense of the word artificial – I know with this audience the word artificial is itself subject to deep analysis. It’s a divide that is not necessarily valuable pragmatically; it’s simply something to treat as an opening reality that one can then explore and potentially explode. And so that’s essentially what Facebook subsequently did. And I was lucky enough to be advising them throughout these last, you know, two and three quarters years, to the point now where their oversight board is in existence. It has four co-chairs and 20 total members. One of them, as I mentioned, is Michael McConnell. Others are people of different backgrounds: there’s a former prime minister of Denmark, there’s a prominent constitutional law professor at Columbia University Law School, and there’s the dean of a law school in Colombia, who is also a special rapporteur for the United Nations on free expression. They’re a diverse group of people from all over. The core design remains the one that basically struck me on the bike ride. And again, just to reiterate: it’s independent. Its members are appointed to three-year terms that are automatically renewable, and therefore they are not hired and fired by Facebook. They are paid, but not by Facebook; they’re paid by an independent trust that was funded by Facebook and then spun out of Facebook to become independent. Their decisions will be rendered transparently and publicly; they will give reasons, and reason giving is hugely important in this context. And their decisions will balance competing values. Not least, in addition to its so-called community standards, which are the content moderation rules that Facebook has, Facebook has also articulated a set of high-level principles – what they call values – that function effectively as a set of constitutional values here and are also relevant to the decisions that will be made. International legal principles will inform, but not dictate, results.

So that’s the basic structure of what’s going on here. I’m thrilled to answer questions about the technical sides of this, the difficulty of it, the design of it. I want to say one or two words about its purpose overall, and about two ways to look at it: a more optimistic way and a more cynical way. And then from there, I’m going to tack to talking about ways that similar or related governance models could potentially be used in other contexts, including in the context of the governance of AI. So that’s my thought roadmap. Let me start with two ways of thinking about the purpose of the oversight board; let’s call them a publicly interested way and a more cynical, corporate-interest way. So let’s start with the more publicly interested approach. Because I do constitutional law as my day job, we always have to look at everything through these two lenses, right? Every constitution in the world expresses high-flown values and is institutionally implemented by people who often really believe in those values. And yet every constitution is also a distribution of power by real people in the context of real governments and real states, where politics and self-interest dominate decision making, as we all understand it in the real world. So for those of you who don’t move in a world where these two frames are constantly going back and forth, I just want to flag that these are the two frames that I use all the time, and I’m going to use them here.

From the publicly interested frame, it’s just really clear that crucial decisions on issues that affect billions of people should not be made, ultimately, by unelected tech founders, CEOs, and COOs. And I think that’s rather obviously true in the case of free expression, as Mark Zuckerberg is the first to acknowledge: he should not be deciding himself whether the President of the United States has or has not breached some ethical principle of safety when he criticizes Black Lives Matter protesters, and therefore should have his content taken down, or whether the President of the United States, running for office, is participating in a political process that needs to be facilitated, and therefore what he says should be left up. That’s just much too important a decision to be left to Mark, or to Mark and Sheryl, or to the excellent teams that they nevertheless put together. I would argue that it goes even beyond those kinds of hot-button issues and extends to the more, you know, in-the-weeds but hugely important questions. What counts as hate speech? What hate speech should be prohibited? What hate speech should be permitted, because it’s necessary to have some forms of free expression? What forms of human dignity are respected by displays of the human body? What forms of human dignity might be violated by certain displays of the human body or certain human behavior or conduct? These are questions on which reasonable people can and do disagree. They’re questions that implicate major forms of social conflict. I’m not a relativist; I don’t think there are no right answers to these questions. But I do think there’s a lot of variation in what different societies might come up with as the right answers. And especially when you consider that the platforms cross social and political and legal boundaries, it just makes almost no sense for the power to make those ultimate decisions to be concentrated in just a few people. Now, that doesn’t mean that the decisions don’t have to be made; there has to be responsibility taken. And so the objective of a devolutionary strategy, which is what the oversight board uses, is to ensure that there are people making these decisions who are accountable in the sense that they give reasons for their decisions, accountable in the sense that they explain transparently what they’re doing, accountable in the sense that they can be criticized, but who are nevertheless not easily removable by the for-profit actors who are involved. The result of this, again speaking in terms of the public interest, should be – it may not be, but experimentally, it ought to be – some legitimacy for the decision-making process. And here I’m talking about legitimacy in what philosophers call the normative sense: something is legitimate because it should be legitimate, it ought to be considered legitimate. And from a publicly interested perspective, we should all want important decisions to be made in ways and with values that ultimately serve the goal of public legitimacy. Now, let me turn briefly to the cynical perspective, the cynical, self-interested perspective. Facebook is a for-profit company; it is governed under the corporate law of the United States. And by virtue of being governed in that way, its management and its board of directors have certain fiduciary duties to its shareholders, which include the duty to make it an effective and profitable company.

If Facebook’s senior management hadn’t believed that it was in the interests of the company to devolve decision-making power on these issues away from senior management, they would actually have been in breach of their fiduciary duties to advocate and then adopt the strategy. So in that sense, when somebody says to me, and people do say to me all the time, well, Facebook just did this because it’s in Facebook’s self-interest, my answer to that is twofold. First of all, yes, that’s absolutely right; they would have been in breach of their own fiduciary obligations if they had thought they were acting against the company’s interests. And my second is: please show me any example, anywhere in the world, of any person or entity with power giving up power for any reason other than that they believed, in that given circumstance, they had more to gain by giving up that power than by keeping it. I mean, this is an insight from constitutional studies. Any would-be dictator would like to just be the dictator all the time. It’s really nice to be the dictator. But we recognize that governments based on dictatorial principles are frequently – not always, but frequently – unstable, and in effect lead to bad outcomes, not just for the general public, but also for the dictators, who have a bad habit of ending up dead rather than, you know, beloved and in retirement. And so systems of power that involve power sharing are always shot through with structures of self-interest.

So then that raises the question of why anybody should trust an oversight board or any other devolutionary governance experiment that is adopted by for-profit actors. We might imagine that if the state imposes something, then it would reflect the public interest, but if it’s adopted by private actors, we might say it should never be trusted. Well, part of the answer to that is that even state bodies aren’t purely publicly interested. You know, political science as a field has spent much of the last half century showing the ways that state actors, including governmental actors, are privately interested, notwithstanding that they have jobs where, in principle, they’re supposed to be answerable to the public. So there is no perfect world where everybody is perfectly publicly interested. But more importantly, the reason that the public should be able, under some circumstances, to trust a system put in place through the self-interest of corporate actors is that it is in the self-interest of those corporate actors to be trusted. And to be trusted in this day and age, they must impose transparency, independence, and reason giving – not because it’s their first-order preference. After all, this is not how content moderation was initially designed on any major platform. But they realize that they have so much to lose by continuing the model that they’ve been following, and they need to try something new. So the cynical view is that in this day and age, companies can’t get away with merely appearing to devolve power or appearing to be legitimate; they have to actually go ahead and do it. You might say, well, their most effective game-theoretic strategy is to appear to be devolving power while actually getting away with not doing so. That might be true. And it’s an empirical proposition to say that in this day and age, with as much scrutiny and skepticism as exists, it’s very difficult for a corporate actor to get away with that, in a way that might not have been true as recently as a quarter century ago.

Okay, let me say a word now about other contexts and other possible governance solutions. You know, having spent the better part of the last three years incredibly focused on the problem of content moderation and on the solution of a governance institution modeled loosely on a Constitutional Court, I have now shifted my own attention to trying to think about other kinds of governance institutions, which could also be borrowed from different places, shapes, and contexts, and which might be appropriate to the governance of other kinds of social conflicts that arise in technology spaces. And here I come close to the topic of your ongoing seminar and to your program, namely the question of the governance of AI. Now, I actually had right in front of my face – not right when I was designing this thing in the very beginning, but very quickly in the process – the Google AI committee that came into existence and went out of existence in an incredibly short period of time, a story that you all know better than I do. So I had in front of me exactly what not to do from an early moment.

So we can stipulate that the model of a corporate-appointed group of potential advisors, on its own, without more, is a high-risk and unstable model to adopt, at least in circumstances where the corporate actor would react very negatively to criticisms of the membership of the board. But that doesn’t mean that there aren’t other mechanisms that are worthy of being explored. And these are other models of governance. So let me just name a few of them, and then we can talk more in our conversation about which of these might be adaptable, in different circumstances, to different aspects of AI governance. One interesting model that comes not purely from the public sector, but from the educational and medical sector, is the model of the institutional review board, or IRB. Those of you who are social scientists are used to dealing with IRBs, and the same will be true of those of you in the harder sciences whose work interfaces with important ethical considerations. IRBs are quasi-independent bodies, typically constituted and composed by institutions – universities and hospitals, most typically – that have full authority to approve or disapprove proposed research plans or programs. Their power is enormous, as anybody who’s ever dealt with an IRB knows. It’s subject to abuse, like all great powers, and the question of how to govern IRBs is itself a rich and important question. But the IRB model is a model that, remarkably, hasn’t really been tried in the private corporate sector. Sometimes there’s overlap, because if you are a researcher at Harvard Medical School and you have a great idea, you form a company, but you also continue to do research in the university, and so you need to both go through an IRB and then discuss it with your investors. So there are some points of overlap. But we don’t really have an institutionalized IRB model in place in the private corporate sector. Now, IRBs have something in common with the oversight board that Facebook has created, because they’re meant to be institutionally independent, but they still belong to the institution. The Supreme Court of the United States is part of the US government, but it’s also independent, and its independence is assured by certain institutional features: life tenure, etc. It’s not without government influence – we see that right now in the United States, where we’re in the middle of a huge fight over our next Supreme Court appointees. So you see there’s a politicization of one aspect of the process. But part of the reason for the intensity of that fight is that once appointed, the Justice will have complete independence.

IRBs are typically, technically, part of the university or hospital with which they’re affiliated. So in that sense, they’re part of that entity, and therefore they internalize some sense of responsibility; but their members typically come from the outside, and they cannot have their judgment overruled by the institutional actor that convenes them. So could corporations create IRBs on their own? One option is that corporations could create independent IRBs of their own, if they offloaded management and devolved it through a [garbled] foundation in the way that Facebook has done. That’s very expensive, and it requires long-term commitments, but it can be done. Another alternative is to have IRB-like independent entities created by third parties. Those could be nonprofit foundations that produce their own IRBs, which are then selectively employed by companies that are looking for independent judgment. Or one can also imagine – and I’m toying with trying to create one of these right now – a private entity being created, either for profit or not for profit, but a private entity not growing out of an existing foundation, that maintains an IRB, or multiple IRBs with subject matter expertise, that can be, as it were, rented by the corporation. The corporation says: gee, we’re going to be making the following difficult decisions about deploying our AI over the next two years or five years; we publicly commit ourselves to submit those decisions, at a given juncture point, to this independent IRB, which has AI subject matter expertise alongside ethicists, stakeholder interests, and other sorts of interests. Now, there are all kinds of technical issues that need to be worked out here, which I’m happy to talk about, but I think they’re all in the realm of tractable problems. The overall model, though, would be to actually devolve some meaningful power to these IRBs, and for their decisions to be not merely advisory, but to function as actual choke points for the corporate actor. You may ask: why would any corporate actor ever agree to do that? And the answer is self-interest: the corporate actor might be aware that in order to get credibility for its decisions, it needs to have those decisions blessed by a body that can only give a meaningful blessing if it can also prohibit or block certain lines of conduct or behavior. And I think there is a game-theoretic situation where that becomes desirable and even necessary from the standpoint of the company. Transparency is a really interesting issue here. And I don’t need to tell all of you that transparency, challenging as it is in any corporate domain, is doubly or triply hard in the context of AI, where you have to deal first with proprietary technologies, but also with the – fascinating to me as an outsider to AI – conceptual problem of what counts as transparency in the case of certain machine learning functions that may not be fully interpretable. I mean, there’s a fascinating conceptual question there; I’m sure you’ve all spent time on this. When I taught a seminar on some of these issues a couple of years ago, we spent a couple of sessions on this fascinating issue of what counts as transparency in a situation where you have a genuinely uninterpretable algorithm – where, again, [garbled] I understand is also a debatable term, but an algorithm that we are not able to interpret under given circumstances.
There are very rich and fascinating questions there that deserve close scrutiny and attention.

That said, it is possible, using an IRB structure, to maintain selective confidentiality. So you could imagine a FinTech company that is using a proprietary machine learning algorithm to sort the creditworthiness of applicants. Profound social conflict is inevitably going to arise there, and I can say a few more words about that if people are interested. There are many subtle questions to be worked through. For example: does the algorithm pick out discriminatory patterns that already exist in society? Does it reinforce those patterns? If the algorithm is, quote unquote, “formally instructed” to ignore those patterns, will it then replicate them nevertheless, by virtue of picking out a proxy that the algorithm is capable of picking out? These are incredibly rich, fascinating issues. I know you’ve spoken about them before here, and I’m happy to discuss them as well. But one could imagine a private company with a proprietary algorithm just saying to the IRB: listen, we will show you what’s under the hood. You will agree not to share that with anybody else, but in your public account, what you will say is: we have been under the hood, and what we consider to be the cutting-edge techniques that can be used to manage and limit the discriminatory effects have been employed here, and those techniques are such and such. Right. So imagine you agree with a very, very brilliant new professor at Columbia Law School, Talia Gillis, a recent graduate of the PhD and SJD programs at Harvard, who worked with me. One of Talia’s arguments is that the only really reliable mechanism for evaluating discriminatory effects in a range of algorithmic contexts is running empirical tests of those algorithms and measuring outcomes – much in the way that, historically, governmental actors, or private actors trying to use existing law to constrain private discriminatory conduct in, say, the housing context or the employment context, ran empirical tests to see whether a given company or institution was discriminating. So imagine one holds Talia’s view – it’s not the only possible view, but imagine one holds it. Well, then what the IRB would do is say: we self-certify that we’ve run those tests, we’ve taken the cutting-edge approach, and we’ve created a protocol, a supervisory protocol, under which those tests will be run regularly on the data as it develops. And so we’re not showing you what’s under the hood, but we’re telling you transparently what our approach is, we’re telling you transparently what the research is, and we’ll probably be able to show you the results transparently, or compel the private actor, the corporate actor that has the proprietary algorithm, to do so. That’s just to sketch out an example of how this kind of institutional governance mechanism might potentially work.
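[To make the outcome-testing idea concrete, here is a minimal sketch in Python of the kind of empirical audit described above. Everything in it – the toy model, the applicant data, the group labels, and the use of the four-fifths (80 percent) screening threshold drawn from US employment-discrimination practice – is an illustrative assumption, not a description of any actual IRB protocol or of Gillis’s methodology.]

```python
# Minimal sketch (illustrative only) of an outcome test for disparate impact:
# compare a model's approval rates across demographic groups and flag large
# gaps, without ever publishing the model's internals.
from typing import Callable, Dict, List


def approval_rate(decisions: List[bool]) -> float:
    """Fraction of applicants approved."""
    return sum(decisions) / len(decisions)


def disparate_impact_ratio(
    model: Callable[[Dict], bool],
    applicants: List[Dict],
    group_key: str,
    reference_group: str,
    test_group: str,
) -> float:
    """Ratio of the test group's approval rate to the reference group's.

    A ratio near 1.0 means similar outcomes; a low ratio signals that the
    test group is approved far less often than the reference group.
    """
    ref = [model(a) for a in applicants if a[group_key] == reference_group]
    test = [model(a) for a in applicants if a[group_key] == test_group]
    return approval_rate(test) / approval_rate(ref)


if __name__ == "__main__":
    # Stand-in for the proprietary model the IRB would inspect under NDA.
    def toy_model(applicant: Dict) -> bool:
        return applicant["income"] > 40_000

    # Toy held-out applicant data; a real audit would use realistic test data.
    applicants = [
        {"group": "A", "income": 55_000},
        {"group": "A", "income": 42_000},
        {"group": "A", "income": 30_000},
        {"group": "B", "income": 45_000},
        {"group": "B", "income": 35_000},
        {"group": "B", "income": 28_000},
    ]

    ratio = disparate_impact_ratio(toy_model, applicants, "group", "A", "B")
    print(f"Disparate impact ratio: {ratio:.2f}")
    # The four-fifths (80%) rule is a rough screening threshold used in
    # US employment-discrimination practice; here it is just an example.
    if ratio < 0.8:
        print("Flag: approval-rate gap exceeds the four-fifths threshold.")
```

[The point of the sketch is the division of labor: an auditor can run tests like this behind a confidentiality wall and publish only the protocol and the aggregate findings, which is what makes selective confidentiality compatible with public reason giving.]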

So that’s the sort of, you know, private IRB or independent IRB type of approach. Then there are some other potential governance mechanisms that are also worth thinking about, that go outside of the IRB context and that could also be borrowed from various institutional structures. There are industry-level regulatory bodies that could be created, though these are always subject to the skepticism that they’re just like the Motion Picture Association, you know, the MPAA: controlled largely by their members. But it’s possible to create more robust, industry-wide regulatory actors which, again, through transparency, independent funding, and real independence from the corporations that constitute them, could engage in regulation of a kind that is analogous to what a governmental regulatory agency might do, but could do it more efficiently than a government regulatory agency. And they could potentially also maintain certain kinds of confidentiality to a greater degree than a government institution might be able to do. So there you have a full range of different regulatory mechanisms – you know, European governance is different from Chinese governance is different from US governance – and one can pick and choose in the institutional design process to obtain the best features, or the most appropriate features, here. And so there’s a full-scale set of options for what I would call private, collective regulatory governance that, again, looks familiar in the context of state regulation, but avoids some of the problems that scientists and corporate actors alike inevitably fear when they start thinking about government regulation: among them, external political influence on that regulation, and a tendency to always be as conservative as possible to avoid criticism – to, you know, cover yourself against the worst-case scenario, danger, or risk. So that’s yet another set of techniques that can be borrowed from the public sector, suitably adapted and tweaked.

I could go on about, you know, other possible directions. I won’t, because I want to leave as much time as possible for conversation. So I’m going to pause there and just say, in conclusion, that I’m eager to talk about the particularities of how the Facebook model is working, but I’m also really eager to speak about other potential directions and options that might be more suitable in some of these AI contexts than the full-on, Constitutional Court-like review board. Those may be IRB-style, they may be regulatory-style, and there may be other techniques too – I have some other ideas for other things; they’re not as well developed, but maybe you can get me to throw them out there in conversation. What these potential directions all have in common is the willingness to say that we can learn from institutional governance solutions from different contexts and try to adapt and adopt them. We should never say “that will never work here because it comes from this other context.” Rather, we should say: in these other contexts, these things have these benefits and these costs; how might we try to adapt them to our needs, such that we will capture some of the benefits while reducing some of the costs? So thank you all for listening, and I’m looking forward to our conversation.

Allan Dafoe  35:17

Thank you, Noah. That was fantastic. And I’m sure if we were live, there would be a very enthusiastic round of applause from the 90-plus people in this room. I have lots of thoughts and questions, and I was very stimulated by this. But it’s now Gillian’s turn, and our honour to hear her share her thoughts.

Gillian Hadfield  35:36

Great. Thanks, Allan. And thanks, Noah, that was terrific, really. It’s such an important thing to be discussing, and I really like the way you wound up there: we need to be looking for alternative regulatory models, we should look at and draw on other models out there, and then think creatively about what the demands of regulating in the AI context are, and how we meet those demands. Lots and lots for us to discuss; I want to try and keep focused on a couple of points. First, I think it’s fantastic that somebody with your background in thinking about the origins of democracy and the development of the constitutional system is bringing that context here, because I do think we are at – maybe Allan is the one who used the phrase – a constitutional moment. I do think we are at a point in time where we are seeing kind of the same question as, you know, Magna Carta in 1215, where we have entities – they’re not monarchs, but they’re private corporations – that have become the dominant aggregations of wealth and power, and they’re defining so much about the way our lives work. So exploring how we democratize that process is absolutely critical. I think it does raise the question: is that the right way to do this? Is it both feasible and desirable to democratize private technology companies? You’re exactly right to frame it up as – and I’m pretty sure I agree with you – the people who are running these corporations hating this: look, I don’t want to be the person deciding which of Trump’s tweets can be left up or what postings can be there. But as you pointed out, this is a global platform with two and a half billion people on it. It’s just something we’ve never seen before. And so I think the question of democratizing that – both is it feasible, and is it desirable – [garbled], so I think that’s exactly the way we have to be thinking about this moment in time. Then I want to say a little bit about the appeal to existing institutions. We’re talking about the Facebook Supreme Court particularly, but your comments sit in a broader context of thinking more generally about that: IRBs, and so on. One of the things I think we’re also at a point of recognizing is that the set of institutions we created over the last couple of hundred years – in the context originally of the commercial revolution, then the Industrial Revolution, the mass manufacturing economy, the nation-state-based economy and society – those institutions in many ways worked fabulously well for significant periods. But there are lots of ways in which they are no longer working very well. So we’re talking here – you’re sort of using a model of the high-level Constitutional Court – but a lot of the issues we’re facing are like: you know, I’m user 5262, if I got in there early, I guess, and I’ve got a picture from my, you know, party that I want to post, or I have a political statement I want to make. And those numbers are in the millions. I was trying to get the figure – something like 8 billion pieces of content were removed from Facebook in 2019. These are just massive, massive numbers. And one of the things we know about our existing institutions, which are heavily process-based and phenomenally expensive, is that the vast majority of people have zero access to them.
Now, certainly, that’s true if we go up to the level of the Supreme Court, and you’re not proposing that we create something that is going to be responsive to every individual who has a complaint. So I totally get why you focused on, you know, jumping up to the Supreme Court rather than saying: you know what, let’s start with our trial courts. But I think that’s actually a really critical thing for us to be thinking about: these processes are incredibly expensive, and they end up being like little pinholes of light into this big, big area. What can we be doing to, in fact, bring many, many more people into this process of expressing and communicating and constituting the norms of what we consider to be okay and not okay on our platforms – just to focus on that framing in the context of, you know, free speech and the other values that are lined up against free speech? What can we be doing to incorporate that? Now, I think that’s where we just have to come to grips with this massive mismatch between a huge volume and the cost of the process, and – I’ll go back to that language – the democratization of that process. I think we will not develop methods that are responsive that don’t ultimately involve AI. You’ve mentioned some of those. And actually, your concept of privately recruiting an IRB to review, under confidentiality provisions, you know, what’s under the hood in a model, and so on, I think is great.

I’ve been thinking about comparable models – Allan mentioned at the outset some of this work on regulatory markets – and I think we do need to be figuring out how we are going to simultaneously get investment in methods of, in this instance, content moderation that are still responsive and legitimate. But I think we’re going to have to figure out ways to incorporate a lot more people. So I’m particularly interested in thinking through the technical as well as the legitimacy challenge of how you can get many more people involved in that process. And I actually think that’s really important not just from the point of view of thinking about equality, or thinking about equal participation, but also because it’s fundamentally critical for the constituting of social order that our norms are deeply rooted in ordinary people, their ordinary lives, and their communities. And of course, once you start talking global, that’s where it becomes tremendously difficult. I worry a little bit that, you know, the model of the Supreme Court, with an elite group and so on, is actually going to make it pretty difficult to make that progress. One of the things I also want us to think about – to go back to this point about democratizing the private technology company – is whether we can recruit the market here, in the sense that part of the global issue is that Facebook is such a massive platform and dominates the space so incredibly. Is there a role for communities to be working here? You know, can we develop groups within our platforms, multiple platforms, where people can basically – in the [garbled] phrase from political scientists, voting with your feet – vote with their browsers for which platform, and which values and which norms, they want to follow? I think there are a lot of challenges to think about. I think this is the great challenge: how do we figure out how to respond to this massive scale, and respond to the global nature of these platforms, without taking decision making further and further and further away from ordinary people, their experiences, and their experience of being a member of a community who is seen and recognized in these environments? I’ll stop there. Thanks.

Allan Dafoe  43:14

Thanks, Gillian, that was great. I’m thinking, actually, perhaps Noah, we can give you some time to reflect and respond. And I’ll use the fact that I have the mic right now to add to Gillian’s points the specific question of whether to have a global court, and sort of a global moderation policy, versus culturally specific policies. You discussed at one point having regions or countries, but yeah, that’s just an interesting question.

Noah Feldman  43:43

Thank you. Thank you, Gillian. Those are really very rich and hugely important issues that you’re raising. So let me just say a few words about them. If I could summarize, and maybe slightly oversimplify, the argument that you’re making, it’s that we need greater democratization, greater public access, more people involved – I think you said – in order to aim at constituting social order. And you expressed a concern, which is completely correct, that the Constitutional Court model – and this I think would also be true of an IRB model – tends to rely on a smaller, elite group of people to make the relevant decisions. And I think that’s a correct analysis of what’s going on in both of those contexts. So I want to start by just acknowledging how incredibly challenging this problem is in democracies – not in, you know, platforms, not in AI, not on social media, but just in democracy. How does one get genuine public participation in decision making?

It remains the central problem in most developed democracies: some have great turnout, where lots of people show up to vote, but many have relatively weak turnout, where not that many people show up to vote. Voting, as political science has repeatedly demonstrated, is subject to all kinds of strange problems of principal-agent control, and doesn’t always give ordinary people all the options that they would like to see represented. And there are tweaks for that – proportional representation tweaks – which have their own consequences, like the production of a parliament with so many parties in it that it becomes very difficult for anything to get done, even though a greater set of points of view are represented. So there’s a set of complex trade-offs that arise there as well. You know, the strongest critics of contemporary liberal democracy would probably say that one of the worst things about it is that it purports to give the public opportunities to participate and doesn’t actually do so, or gives them some simulacrum of participation. So that’s just to deepen the problem that you’re describing: even if we could borrow some of the features that come from democracy, that might not solve our problems, because democracy itself is struggling. Making it much harder is the problem that I like to sum up with an example that I’m sure most or all of you know about – the example that arose a few years ago, when Great Britain decided to use an online voting process to name a new research ship. As you will recall, the eventual winner was not Intrepid, or Valor, or Harry and Kate, but Boaty McBoatface. And sometimes in our conversations at Facebook around the difficulties of democratization, we just summed up this problem, which I’ll say a word about, in that phrase: Boaty McBoatface. To formalize the Boaty McBoatface problem: so far, it seems that when voting techniques are used online, the fact that the end user is very far from internalizing the costs of his or her vote makes it appealing – or at least not unappealing, maybe more importantly – to cast votes that are silly, or frivolous, or humorous. And, you know, Facebook had actually experimented, more than a decade before I got involved with them, with a regulatory democratization approach, which is famous only in the small circles of people who care about online regulatory democratization. It was an utter disaster. They said, basically, that they wouldn’t make certain kinds of major changes on the platform without getting a certain number of votes from a certain percentage of users. They couldn’t get participation comparable to what they needed to get anything done. And it was also very subject to capture – again, a political science concept that’s very familiar here – by small groups of concentrated people who had an interest and could generate votes. And, I mean, sometimes I wonder: how did it happen that, you know, a random constitutional law person made a suggestion and Facebook decided to do it? Of course, the reason was that when I came to Sheryl with this idea, and then she brought it to Mark, Mark had actually been thinking for years, for at least a decade, about potential ways to devolve power. But the problem that he and the very, very smart people around Facebook kept bumping into was that if you devolve power, you want to democratize it.
And if you democratize it, you run into cycling problems, and capture problems, and Boaty McBoatface problems. Just to finish the thought – and then by all means, Gillian, jump in – when I look at it from the outside, I think: no wonder they liked this solution. Because it was about devolution without democratization. It was devolution into an institutional structure like a court, which is not, technically, you know, a democratizing structure. So this is all by way of acknowledgement. You should speak, Gillian, and then I’ll say a word about what I think might be scary.

Gillian Hadfield  48:58

Yeah, I just want to jump in there and say: I think the challenge of developing a regulatory model here that is democratically responsive is as big a challenge as building AI. And it’s also why I’m so focused on regulatory markets models, because I think we need to attract investment into this problem in the same way that we attract investment into building AI. So when I think about voting: I think that’s inevitably going to be poor. That was a technology that worked at various times, but I don’t think it is going to work here – and you’ve said that’s been tried. But for reading the normative environment, we now have tremendous tools at our disposal – like, what is the reaction to different kinds of content? I think we could be building machine learning models that read the rich, dense, massive volume of responses, and I think we should be figuring out how to do that and how to make it more legible. But I don’t think it’s only voting. We just sort of say we want the idea of voting, but as you say, that’s kind of broken in our current offline worlds, and I’m not surprised it doesn’t carry over. Anyway, I just wanted to jump in there with that thought as well.

Noah Feldman  50:15

A couple of thoughts on that. First, I actually think it’s harder than AI, because we’re still at an early stage of AI, and yet the problem that you and I are talking about now – giving the public legitimate access to democratic participation – was posed explicitly by Plato and Aristotle. In about 2,500 years of smart people thinking about it, nobody’s really solved it. You could say that the most intense period, of trying to mobilize a mass democratic public to make decisions effectively, probably goes back to the French Revolution. So let’s just say it’s been the last 200-plus years that people have been trying to do it, and a lot of really smart people have focused on it and haven’t really solved it. So I think it’s even harder. I also think it’s interesting when you say maybe we could use AI in order to solve it. And there will be hundreds of people out there – if there are hundreds of people listening; I’ve got a 222-people mark at the bottom of my screen, but I don’t know if that’s the number of people listening. But if there are, then there are 221 people better than I am at answering the technical question of whether current techniques of aggregation are promising for doing what I would call normative political theory: you know, substantive analysis of what people are saying out there, so as to glean a direction, maybe, but also so as to glean a set of arguments about legitimacy. That’s a hard problem. I don’t claim that it’s an insoluble problem, just that it’s a genuinely hard problem. And if we were over in the seminar room where we talk about administrative and regulatory law, and Gillian were to say, you know, we should improve our legitimacy by using machine learning tools to get a sense of what all the comments out there are, I would say: interesting, doubtful, tell me more, I guess, is what I would have said. And maybe it would work.

Just a last thought on this. I have a kind of approach to the problem that Gillian is talking about. And the approach is to say that we actually have a series of legitimating techniques that we use when mass voting doesn’t work very well. Those include transparent reason giving and subjection to intense public criticism. When a regulatory body is silent, operates behind closed doors, and is not easily transparent for analysis, it tends to lose legitimacy. And, you know, those of you who are in the UK and lived through the Brexit process probably know, wherever you were on that issue, that the perception – I’m not speaking of realities now, but the perception – that European regulation was insufficiently transparent, and therefore could not be subject to detailed criticism, played a crucial role, I would argue, in the delegitimation within the UK of the project of European regulation. I mean, it’s not a coincidence that one of the most powerful Leave arguments was, rhetorically, a claim of illegitimate regulation: illegitimate because non-democratic, and non-democratic because non-transparent. So transparency can play an important role, because then we have other institutions – institutions like advocacy groups, institutions like the press – that can engage in criticism of what are perceived as bad regulatory outcomes. So to me, in the absence of a magic-bullet solution, I am interested in finding ways to use existing mechanisms of legitimation – what I would call democratic legitimation in the absence of mass voting – to improve participation and to improve access. Not that these are perfect solutions at all; they’re very far from perfect. But they’re definitely starts in that direction, and they’re identifiable and concrete. You can point to them and say: this regulatory process is good because people know what’s happening, they know the reasons, and it can be criticized and discussed; this one is bad because they don’t.

Allan Dafoe  54:26

Thanks. Sophie, over to you.

Sophie-Charlotte Fischer  54:37

Okay, can you hear me now? Perfect. Okay, so we’re already in the middle of this really interesting discussion. I just want to take a couple of steps back and talk about how we actually got to the point where we’re now talking about the Facebook oversight board, before offering some reflections on the limitations, but then also the strengths, of this approach, and maybe some of the lessons we can learn for other regulatory models, perhaps for the case of AI. Now, we all know that Facebook has for a long time made important decisions about what kind of content it removes or leaves up on its platform, decisions that affect its 2.7 billion users around the world. And we’ve just heard from Noah that within Facebook, too, there had been a lot of earlier thinking about how to make this process of content moderation more participatory. But I think what has really changed outside of Facebook over the last couple of years is that we have seen new challenges, brought about by, for example, the interference in the 2016 US elections, in which Facebook played a prominent role, or even the persecution of targeted populations, most notably the Rohingya minority in Myanmar. And I think these cases have really shown that the stakes inherent in handling the kind of content that we see on a platform like Facebook have changed. These incidents have not only emphasized the difficulty of balancing freedom of expression against removing harmful content from the platform in different national and cultural contexts, but – and I think this is important to stress again – they also created tangible economic costs for Facebook, due to a notable loss of consumer trust, which threatened Facebook’s business model and future growth.

So I think these developments really have emphasized, again, the need for new and participatory measures to evaluate content in a fair and transparent manner, in order to maintain the trust of Facebook users in the long term. And what we’re looking at now is the Facebook oversight board, which is certainly one of the most ambitious private governance experiments to date: a transnational platform’s mechanism to govern something which is vital to the public and an essential human right – speech. Now, the board hasn’t even started operations yet, and we’re still at a very early stage. But different facets of its design have already been criticized widely, for example by journalists, but also by nonprofit organizations, and I very briefly want to get into one of the most fundamental criticisms, and that is the at present very limited mandate of the board. The limited mandate of this board implies that, most probably, the board won’t be in a position to really solve some of the most critical issues related to the content on these platforms that does the most harm. So, for example, it probably won’t tackle the selection and amplification of certain content made visible to users by Facebook’s algorithms, including disinformation; it won’t necessarily minimize coordinated attacks on democracies around the world; and although there is an expedited procedure to bring issues more quickly to the attention of the board, the board won’t be able to offer a quick reaction to, and prevent the spread of, harmful content such as the live streaming of the Christchurch shooting a while ago. Now, some of these limitations are probably inherent in the function of a court-like body such as the board, which exerts influence by making clear how the law applies to cases. But the problem is that many of the most contentious incidents – and I’ve named a couple before – that [garbled] the past few years, and that have shown that the stakes in handling this content have changed, won’t be tackled by the organization that was established at least partially in response to them, to begin to regain user trust and to safeguard Facebook’s future growth. So I would argue that there’s a risk that the board could distract regulators from addressing some of the fundamental and most harmful activities on the platform and by the company, activities that will remain. That said, I would also argue that when we judge the board based on its mandate and its court-like function, from what we know about it today, its design is very thoughtful and also very promising. Not only is it a clear improvement on the existing system that we currently have in place, but I would also argue that we can learn different lessons from the way the board was set up, especially with regard to one of the key challenges of industry self-governance: how to structure a private governance mechanism and establish legitimacy already in the institution-building process, given that it originates with the organization that it is supposed to check. And by legitimacy I mean here how to ensure meaningful transparency, impartiality, and accountability.

And I briefly want to reflect on five of these lessons that I think we have learned from the process of how the board was established. The first is probably a very banal one, and that is power sharing. So first of all, we need to reach a situation, when we look at these tech firms, where they're actually willing to share power. And I think Facebook is a really extreme case here, because due to its dual-class stock structure, the exclusive power over contentious decisions lay for a very long time with its CEO, Mark Zuckerberg, and now the board has the power to actually overrule Zuckerberg on contentious decisions, and also previous decisions made by content moderators. The second aspect is public outreach. So what I found very fascinating about the way in which this board was set up is that there was actually a months-long consultation process all around the world, with users and stakeholders in different countries, and also that this feedback was actually published afterwards. And you can see that it has flowed into the design of the board. So I think developing a public process that incorporates listening to outside users and stakeholders, and showing as a company that you take this feedback seriously, is a really important thing to keep in mind. The third aspect is diversity. Facebook's community standards have looked very American for a long time. And I think they've shifted towards more of a European approach, but input from the global south has largely been absent. And while the composition of the board as it looks now is definitely not perfect, I think it reflects much better the diversity of its user base in the very broadest sense, representing different cultural backgrounds, professional experiences, languages, et cetera. The fourth aspect is independent judgment, a really fundamental one. I think if a private governance initiative is to be perceived as legitimate, it is of course important that the people working on these kinds of boards or outside organizations should not be working for the company. And of course, there's a chicken-and-egg problem that Facebook has also faced: how to select the first members of this kind of institution, who will then select the other members. But I think the solution of using a non-charitable purpose trust to pay the members, and setting up a limited liability company to run the operations of the board, is actually quite an elegant solution that we can learn from. And the last aspect is transparency. I think here, too, Facebook did quite a good job of making all the steps and key decisions taken on the design of the board transparent. And there are plans to make the decisions of the board transparent, including how they are being implemented, and to explain to the public how the policy recommendations issued by the board are being implemented, or, if they are not, why not. And I think being transparent all along the way also really increases the cost for Facebook of just dropping the board or threatening its independence.

So these were basically the five lessons where I think we can really learn from this process. And to conclude: the oversight board, as it stands, is certainly no silver bullet to reform Facebook, and it shouldn't distract regulators from tackling some of the remaining, probably most harmful, activities that are happening on the platform and that are, to a certain extent, also promoted by the platform. However, within the scope of what an outside body with a limited mandate like the board can do, it is certainly a really important step towards more transparency, and also towards empowering users by providing them with a potential lever for accountability and a mechanism for due process. I also want to stress at the end that I think it is way too early to really say how meaningful and effective the board will eventually be, and whether its operations will be independent, before it has even started operating. And there are many other important unknowns outside the realm of the board and Facebook, including how exactly national governments will react to the board, how national courts will react to it, and how other platforms will perceive it. So to close, I think for now we can just impatiently wait for the board to finally start its work and see how things unfold. Thank you very much.

Allan Dafoe  1:03:11

Thanks, Sophie. And Noah's muted. There we go.

Noah Feldman  1:03:14

Let me make just a few responses. And in the process, I think I'll also try to answer Allan's question, which I didn't answer before, about the global versus the regional. I agree with, you know, 95% of what Sophie said. And it's important to note that experiments need to evolve in the real world, and that evolutionary experimentalism and incrementalism are sometimes the right thing. When you're trying something radical, a radical experiment, you don't necessarily want to roll it out giving it all of the power to do everything that it could possibly do, because it might not work well. Instead, a little incrementalism is appropriate. And in fact, every constitutional court in the world has only gradually and incrementally increased its power. You also have to realize that in the process of institutional design, the oversight board faced two opposite criticisms from within Facebook. One was: it will be much too powerful, it's going to take over the core decision making that goes to our business function and shut us down, we can't have this. The other was: this will be a big waste of time and money, it will be purely symbolic, it will have no impact, it won't help us at all. And my response to both was to say, you're both completely correct that these are risks, but they can't both be correct. You know, either it will turn out to be so powerful that it threatens Facebook's business model, or it will turn out to be purely symbolic. The history of constitutional courts is a history of gradually expanding powers, sometimes having to pull back after they've gotten too much power. But you also couldn't possibly have convinced the board of directors of a major company, or the management of the company, or, you know, the leading shareholder in the case of Mark, to do something that you thought was going to destroy the company. And in fact, that wouldn't be responsible on his part. So I think we will see about the limited mandate. First of all, that mandate is described already in the documents as intended to expand. Second of all, there are many things that the board can do to expand its mandate right out of the box. They can say to Facebook: we don't like your rules, write new ones in light of these values. And they have the capacity to do that written into their mandate, which is a very, very great power. In the first instance, they're supposed to decide if Facebook is following its own rules, and if those accord with its values; in the second instance, they can say, your rules don't fit your values, write new rules. So I'm agreeing with Sophie that we're at the beginning of the experiment, and we'll see how it goes. And I hope that we remain patient rather than impatient, because it will take time for this experiment to play out. It's not going to solve all of the problems at Facebook, and it's not going to solve them all right away.

With respect to the global versus the local: that was a really interesting and important design question, Allan, from the beginning. It may be relevant in the AI context as well; it was very relevant with respect to content moderation, because reasonable cultures, let's say, could have different solutions to the question. I mean, there are real cultural value differences on the platform. So, you know, what is culturally appropriate to wear to the beach in San Jose is different from what is culturally appropriate to wear on Main Street in Jeddah at prayer time. I like being in both of those places, but they have very different cultural norms for what dress is appropriate. And I mention that because nudity policy is, you know, one of the most basic policies that a social media platform has to cope with. I mean, in all of the consultation that Facebook did, I didn't encounter anybody who said Facebook should have such radical free expression that it's open to pornography; I didn't hear anybody say that. But there is such a view out there, one could imagine that view, and there has been a real fight on Instagram about the extent to which sex workers' accounts should be constrained or limited, with organized sex workers in some places in Northern Europe arguing for a greater range of expression in order to facilitate their businesses. So this is a kind of everyday, day-in day-out difficult thing to deal with. I think the difficulty of going down the every-culture-on-its-own route is basically a line-drawing one. You know, where do you draw the line? What do you say is the definitive view within a given culture? You know, some women in Saudi Arabia really don't want to wear the hijab, and some consider the hijab to be liberating and say so. Who's right? That's a very difficult social question, which couldn't be answered without some independent base of [garbled]. As Sophie says, community standards have traditionally been very American in their orientation. Opening that up is risky, because it may lead to, you know... I was often asked in Facebook internal deliberations: well, what are the things that you imagine could happen in terms of interest group politics? And I said, well, if you're going to break groups down by interests, the single largest group of Facebook users is Muslims.

Noah Feldman  1:07:59

Right. And so, you know, not all Muslims agree on all things; many Muslims disagree on a wide range of things. But imagine that there were agreement among Muslims on some set of issues. You know, would one then want the views held by Muslims to govern the platform? What about the views of Christians? What about the views of... So, you know, these are hard and genuine questions. And I think Facebook in the end decided that, hard as it is to have standards that fit the whole platform, it would be harder to divide the world up in a kind of quasi-map-making way, to create different Facebooks for different contexts and places. And I think that's where they came down, coupled with Facebook's ongoing vision of wanting to be a global community. And we could have a fascinating conversation about what a global community is. Can there be a global community with two and a half billion people? What does the word community even mean in that context? But that is also part of the aspirational picture. So, you know, there is much more to be said about all these topics, but I think our time is coming to its end, if I'm not mistaken. So I just want to thank all of you for great questions and comments. And if we have more time, I'm happy to keep talking; I'm leaving that up to you.

Allan Dafoe  1:09:06

Great. Well, I'm torn, because formally we said it would end in one minute, but of course I would love to keep talking. Why don't we see if there are any burning last thoughts from our discussants? And maybe I'll say something, and then, Noah, you can reflect again, and then we'll close. Gillian, Sophie, do you have anything last you want to share?

Gillian Hadfield  1:09:24

So I think this question about the global and the local is really quite critical. And I think that is the challenge: how do you have a global platform that yet allows smaller subgroups to have different values, and to have, as somebody in the chat has picked up, competition between those different subgroups? You know, the challenge of harmonizing standards globally is one we've been struggling with in many, many domains for decades. And I don't think it's reasonable to think we'll get there, Allan; I've had a lot of conversations along these lines over time. So I think the real challenge is: how can you have a global community where people nonetheless feel that there are smaller communities to which they belong, and in which they feel reflected and respected?

Sophie Fischer  1:10:17

I agree. And I think it's also going to be very interesting to see what the support staff of the board will be able to contribute in terms of acquiring the local knowledge that may be necessary to really get into the cultural context of these individual cases. So it's not only about the diversity of the board members as such, but really also about the support staff and what they can contribute.

Allan Dafoe  1:10:40

Maybe I'll just add to this. I find this decision fascinating politically, and I can completely believe that global is just the most viable solution, because, as you say, are you going to make them national? Are you going to start defining the sort of cultural social networks? Me, I'm imagining maybe there's some clever social network clustering algorithm that could allow subgroups to self-identify and self-select. And maybe this actually gets to a broader governance question about Facebook, which is the ability of users to define the mechanisms of their interaction. You know, maybe different users would like different weightings of what kind of media they're provided with: news versus family updates versus political inputs. Maybe I'll say one last thing, which is: I think your argument is right, it makes sense that in many ways you want to start with the lowest-hanging fruit. If we think this kind of governance initiative is promising, you want to start with something that ideally will succeed, right? Something that ideally is good for Facebook, and good for Facebook shareholders, and good for users, and good for the public. And then you can grow from there. I can imagine that speech moderation is in many ways the easiest of the governance issues facing a company like Facebook, because there are not as many trade-offs between Facebook's profit and the decisions that are being made, versus other decisions, like how to personalize advertising, or just anything around advertising, or perhaps, say, the addictiveness of the device, you know, to what extent you use various notification techniques or other techniques to keep people engaged. So maybe a worry is that it's going to be much more difficult to have these sorts of solutions for domains where there is more of a trade-off between the profit motive and the legitimate decision. I'll conclude there. So over to you, Noah, if you have last thoughts.

Noah Feldman  1:12:40

Just briefly, again thanking everybody for great comments: I think it's just worth noting that the problems we're talking about are the problems of human societies. They're problems that we face at the local level, at the sub-state level, and they're problems we face at the global level. One interesting thing about the social media platforms is that they're both not state problems, because this is a private corporation, not a state: Facebook doesn't have an army, it can be shut down by states, it's weaker in many ways than most states. But at the same time, they're also super-state problems, because they're about crossing borders and users globally. And these are problems that in international affairs, international relations, and international law we also haven't solved. You know, we have the Universal Declaration of Human Rights, whose rights are defined at such a high level of generality that lots of countries can adopt them; but many of those countries don't follow those principles, because that generality was the only way you could get the consensus. So you have both sub-state-level problems and super-state problems. And I think that carries through to the AI context as well, insofar as AI is deployed by platforms that have this kind of reach, and insofar as it is, to a certain degree, shaped and controlled at the highest end by corporations that are multinational and present in many different contexts. And I guess I would end just with a plea to people who are listening in to remember that, in order for us to make good decisions about governance, whether in AI or other tech contexts, we need to be deeply aware of the body of social conflict, and the body of thought and debate, that exists around the deepest governance problems that we face as human beings. I mean, in the end, you know, when Aristotle said that humans were political animals, he didn't just mean that we do politics. He meant that we live in a polis, and that we make a politeia, which is a constitution. You know, humans have the capacity, uniquely, not just to live socially, lots of animals are social, but to have a consciously thought-through set of publicly articulated values and norms by which we try to live together. And that, to me, is the challenge of governance. And I'm all for doing that across the disciplines; the less we hive ourselves off, the better we'll do. And we also have to have modesty in knowing that, unlike some problems in science, and unlike some problems in AI, which may actually be soluble by better work and faster processors and more sophisticated algorithmic design, some of the problems that we're talking about here don't admit of definitive solutions. If they did, we would have converged on one system of government sometime in the last 3,000, or say 10,000, years since we started making constitutions. But we haven't converged, because there are a range of different possibilities, a range of different viewpoints, again, about which reasonable people can disagree.
So some degree of epistemological modesty is called for. I mean, it's always good in life to have epistemological modesty, and I'm not the one to tell anybody who works in the scientific domain to be epistemologically modest. What I can say is that in the domain of governance, that kind of modesty is very much called for. And for people like me, and like you, who want to contribute to doing better governance, it behooves us to be modest, and incremental, and cautious, and experimental. So thanks to all of you for a great conversation, and thanks to those who listened in for listening in.

Allan Dafoe  1:16:29

Fantastic, what a great conclusion. So yes, thank you again to our wonderful discussants, and to Noah for this great conversation.

Gillian Hadfield  1:16:41

All right. Thanks, everybody. Bye bye.




21May

Ben Jones & Chad Jones on Economic Growth in the Long Run: Artificial Intelligence Explosion or an Empty Planet?


Benjamin Jones is the Gordon and Llura Gund Family Professor of Entrepreneurship at the Kellogg School of Management at Northwestern. He studies the sources of economic growth in advanced economies, with an emphasis on innovation, entrepreneurship, and scientific progress. He also studies global economic development, including the roles of education, climate, and national leadership in explaining the wealth and poverty of nations. His research has appeared in journals such as Science, the Quarterly Journal of Economics and the American Economic Review, and has been profiled in media outlets such as the Wall Street Journal, the Economist, and The New Yorker.

Chad Jones is the STANCO 25 Professor of Economics at the Stanford Graduate School of Business. He is noted for his research on long-run economic growth. In particular, he has examined theoretically and empirically the fundamental sources of growth in incomes over time and the reasons underlying the enormous differences in standards of living across countries. In recent years, he has used his expertise in macroeconomic methods to study the economic causes behind the rise in health spending and top income inequality. He is the author of one of the most popular textbooks of Macroeconomics, and his research has been published in the top journals of economics.

The session was moderated by Anton Korinek (UVA) and featured Rachael Ngai (LSE) and Phil Trammell (Oxford) as discussants.

You can watch a recording of the event here or read the transcript below:

Anton Korinek  00:06

Welcome to our webinar on the governance and economics of AI. I’m glad that so many of you from all corners of the earth are joining us today. I’m Anton Korinek. I’m an economist at the University of Virginia. And the topic of our webinar today is economic growth in the long run, whether it be an artificial intelligence explosion, or an empty planet. And we have two eminent speakers, Ben Jones and Chad Jones, as well as two distinguished discussants, Rachael Ngai and Phil Trammell. I will introduce each one of them when they are taking the stage.

We’re excited to have this discussion today, because the field of economic growth theory has gone through a really interesting resurgence in recent years. At the risk of oversimplifying, a lot of growth theory in the past has focused on describing or explaining the steady state growth experience that much of the advanced world has experienced in the post-war period, that was captured in what economists call the “Kaldor facts.” But in recent years, a chorus of technologists, especially in the field of AI, have emphasized that there is no natural law that growth in the future has to continue on the same trajectory as it has in the past, and they have spoken of the possibility of an artificial intelligence explosion, or even a singularity in economic growth. Our two speakers, Ben Jones and Chad Jones, have been at the forefront of this literature in a paper that is published in an NBER volume on the economics of AI. And Ben will tell us a bit about this today. And since an explosion in economic growth is by no means guaranteed, Chad will then remind us that the range of possible outcomes for economic growth is indeed vast. And we cannot rule out that growth may, in fact, go the other direction.

Our webinar today is co-organized by the Center for the Governance of AI at Oxford’s Future of Humanity Institute and by the University of Virginia’s Human and Machine Intelligence group, both of which I’m glad to be a member of. It is also sponsored by the UVA Darden School of Business. And before I yield to our speakers, let me thank everyone who has worked hard to put this event together: Anne le Roux, Markus Anderljung at the Center for the Governance of AI and Paul Humphreys at the UVA Human Machine Intelligence Group, as well as Azmi Yousef at Darden.

So let me now introduce Ben Jones more formally. Ben is the Gordon and Llura Gund Family Professor of Entrepreneurship at the Kellogg School of Management at Northwestern. He studies the sources of economic growth in advanced economies, with an emphasis on innovation, entrepreneurship, and scientific progress. He also studies global economic development, including the roles of education, climate, and national leadership in explaining the wealth and poverty of nations. His research has appeared in journals such as Science, the Quarterly Journal of Economics and the American Economic Review, and has been profiled in media outlets such as the Wall Street Journal, the Economist, and The New Yorker. Ben, the virtual floor is yours.

Ben Jones  03:49

Okay, thank you very much, Anton, for that introduction. And let me share my screen here. It's great to be with you to talk about these issues. And thanks again to Anton and the organizers for putting this together and for inviting me to participate. So the first paper that I'm going to talk about is actually joint with Chad, your second speaker, who's going to appear in both papers, and also with Philippe Aghion. The idea in this paper was, rather than writing a typical economics paper, where you go super deep into one model and do all the details, really to step back and look at the breadth of growth models that we have, and then ask: how would you insert artificial intelligence into these more standard understandings of growth, and where would that lead us? So we actually have a series of toy models here, exploring the variety of directions this can lead us, and seeing what you have to believe in order for those various outcomes to occur. That's the idea behind this paper. I'm going to do this in an almost non-mathematical way, not a completely math-free way, but I know that this is a seminar with a group of people with diverse disciplinary backgrounds, and I don't want to presume people are steeped in endogenous growth models. So I'm going to try to really emphasize the intuition as I go through, the best that I can. I will have to show a little bit of math a couple of times, but not too much.

The idea in this paper is: how would we think about AI? You might think that AI helps us make goods and services, things that go into GDP and that we consume. It also might help us be more creative. Okay, so we're going to distinguish between AI entering the ordinary production function for goods and services in the economy, and AI entering the so-called knowledge production function, into R&D, where it might help us succeed better in revealing new insights and breakthroughs about the world and the economy. And then the kinds of implications we want to look at are two very high-level ones. First, what will happen to long-run growth under various assumptions; what do you have to believe to get different outcomes in terms of the rate at which standards of living are improving? But also inequality: GDP per capita might go up, but what share of that is going to go to labor, to particular workers? There's obviously a lot of fear that AI would displace workers, and that maybe more and more of the fruits of income will go to the owners of the capital, or the owners of the AI. And then, of course, there's this other idea, almost more from science fiction it seems, but taken seriously by some in the computer science community, that we might actually experience radical accelerations in growth, even to the point of some singularity. Anton referenced how growth has been very steady since the Industrial Revolution, but maybe we're going to see an actual structural break, where things will really take off. And of course, as Chad's paper will show later, it may be going the other way. We'll explore that as well.

So how are we going to think about AI? You might think AI is this radically new thing, and in some ways it is. But one way to think about it is that we are furthering automation, right? What are we doing? We're taking a task that is performed by labor, maybe reading a radiology result in a medical setting, and we're going to have a machine or algorithm do that for us. Think of image search on Google: categorizing which image is a cat used to require a person, and now Google can just have an AI that tells us which image is a cat. And if you think about it in terms of automation, that can be very useful, because then we can think about AI in more standard terms that we're used to, to some extent, in economics. So if you think about the past, the Industrial Revolution was largely about replacing labor with certain kinds of capital equipment, maybe textile looms and steam engines for power. AI is sort of a continuation of that process in this view, in things like driverless cars and pathology and other applications. So that's one main theme in the work: how we want to introduce AI into our thinking about growth and see where it takes us.

The second main theme that really comes out in this paper, which we developed writing it, is that we want to be very careful to think about not just what we get good at, but what we're not so good at. The idea is that growth might be determined more by bottlenecks: that growth may be constrained not by what we get really good at, but by what is actually really important, what is essential, and yet hard to improve. And I'll make sense of that intuitively as we go.

I have a picture here: these guys are sugar beet farmers, pulling sugar beets out of the ground by hand, harvesting them. And that was how it was done. Next to it is a combine-harvester-type machine that automates that, pulling sugar beets out of the ground with a machine. So that's kind of like 20th-century automation. And then in the lower picture, I'm trying to think about AI as automation. On the left, if you've seen the movie Hidden Figures, these are the computers. I always think it's very interesting: "computer" was actually the job description. These women were computers at NASA, involved in spaceflight, and they were actually doing computational calculations by hand. And then on the right, I have one of the massive supercomputers that have basically replaced that job description entirely. So we see a lot of laborers being replaced by capital, raising productivity, but also displacing workers. And so how do we think about those forces?

Okay, so one way to think about this is to start with a Zeira model, which is the following. Imagine there are just n different things we do in the economy, n different tasks, and each task represents the same constant share of GDP, of total output. To an economist, that sounds like Cobb-Douglas, right? So we have the Cobb-Douglas model; but if you're not an economist, ignore that, and just imagine every task has an equivalent share of GDP, for simplicity. And when we think about automation, what we're saying is that a task was done by labor, but now it might be done by capital equipment instead. For AI, that would be a computer and an algorithm; a combine harvester would be a piece of farming-equipment automation. And if a fraction beta of the tasks are automated, then the capital share of total GDP is beta, which means labor gets one minus beta, and the expenditure on capital equipment is a beta share of GDP. Okay? So that's a very simple model, very elegant in a way. And it would say that if we keep automating, if we increase beta, if we keep taking tasks that were done by labor and replacing them with machines or AI, what will happen? Well, the capital share of income will increase and the labor share of income will decrease. So that sounds like inequality, in the sense that labor will get less income. That might sound very natural; maybe that's what's happening today. We have seen the capital share going up in a lot of advanced economies, like the US, and it seems like there's a lot of automation going on, from robots and these new AI-type things. Of course, those are just two trends, so maybe they just happen to be correlated; but if we think that AI is causing that rise in the capital share, well, this would be a model in which that could be true.
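To make that concrete, here is a minimal sketch in symbols, assuming the Cobb-Douglas aggregate just described, with a fraction beta of tasks done by capital and the rest by labor:

$$ Y = A\,K^{\beta}L^{1-\beta} $$

Under competitive factor pricing, capital's share of income is beta and labor's share is 1 - beta, so automation, an increase in beta, mechanically raises the capital share in this simple version.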

The problem with that model, though, is that if you look backwards, we've seen tons of automation, like sugar beets or so many other things, robots in automobile manufacturing, and we didn't see the capital share go up; in the 20th century it was very, very steady. So that suggests this just wouldn't really fit our normal understanding of automation, and it's not clear that this is quite the right model. So how can we repair it? A simple way, one idea that we developed in the paper, is to introduce the so-called Baumol's cost disease, which is that the better you get at a task, the less you spend on it. So as you automate more tasks, maybe the capital share wants to go up, but something else also happens. If I automate a task like collecting sugar beets, what can I do? I can start throwing a lot more capital at that task; I can keep getting more and more machines doing sugar beets. And moreover, the capital I put at the task might get better and better: I first use a pretty rudimentary type of capital, and eventually very fancy machines; or computers are introduced, and then computers get faster. If you throw more capital, or better capital, at a task, what's going to happen? Well, you're going to get more productive at getting sugar beets, or doing computation at NASA, and so the cost of doing the task is going to drop. But if the cost drops, and things are kind of competitive in the market, the price should drop too. So what's going on? You can do the task at greater quantity, but the price of the task you're performing will fall. So what's its share in GDP? Well, the quantity is going up, but the price is falling. If the price is falling fast enough, the share in GDP will actually go down, even though you do more of it. So you get more sugar beets, but the price of sugar beets plummets, and so sugar beets as a share of GDP is actually declining. And then what happens is that the non-automated, bottleneck tasks, the ones you're not very good at, actually come to dominate more and more.
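One way to see the mechanism in symbols, assuming tasks combine with an elasticity of substitution sigma below one (the usual formalization of this idea, not spelled out in the talk): under competition, the price of a task is inversely proportional to its productivity, and with a CES aggregate the expenditure share of task i satisfies

$$ p_i \propto \frac{1}{A_i}, \qquad s_i \propto p_i^{\,1-\sigma}, \qquad \sigma < 1, $$

so as automation drives a task's price down, its share of GDP falls even though its quantity rises. That is the Baumol effect just described.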

If you think back over 20th-century history, or back to the Industrial Revolution, we see that agriculture and manufacturing have had rapid productivity growth and lots of apparent automation, like sugar beets, and yet agriculture is a steadily dwindling share of GDP, and manufacturing's GDP share also seems to be going down. So what you get good at, what you automate, actually, interestingly, becomes less important with time, as it starts to disappear in the overall economy. We're left with things like health services, education services, government services. This is Baumol's cost disease point: the things that we find hard to improve, the hard stuff, actually come to take on a larger and larger share of GDP. And if we can't improve that, then our chances for growth dwindle, because it matters so much to GDP.

So in this view, the capital share is a balance between automating more tasks, which tends to make the capital share go up, and the expenditure share of each automated task declining, which tends to make it go down. So one model we offer in this paper is what we can call "more of the same": maybe that's what AI is, maybe AI is just more balanced growth, and we keep automating more and more tasks, but they keep becoming a dwindling share of the economy, and we never automate everything. And you can actually show, as we do in the paper, a model where, even though a greater and greater share of the set of tasks is being done by capital equipment and artificial intelligence, and a tinier and tinier share is being done by labor, labor still actually gets, say, two thirds of GDP; it still gets the same historical number. And again, why is that? It's because all the capital stuff is doing more tasks, but its price is plummeting, because we're so good at it. And you're left with just a small set of tasks being done by labor, which pays enormously for them. And that may be what's going on in the economy. It's certainly consistent with what's been going on in the 20th century, to a first order, without overstating the case; it's broadly consistent with the stylized facts of growth. But that would suggest AI is, again, just more of the same: we just keep automating.

Here's a simulation from our paper. This is steady-state growth. If you look on the x-axis, we're looking over five centuries: you get steady-state growth even as automation proceeds. The green line shows what's happening with automation: you're ultimately automating almost everything, just sort of slowly, and you never quite get to the end. And you just get constant growth, and you can get a constant capital share, not a rising capital share. This is actually an idea that I've been developing in a new paper, which is almost done, seeing how far we can go along this line.

Okay, but let's take a different tack, because a lot of people who observe artificial intelligence are excited by the possibility that maybe it will accelerate growth, and many futurists claim that we could even get some massive acceleration, something like a singularity. So we explore that in this paper as well: what would you have to believe for this to happen? We consider two different types of growth explosion. What we call a type one growth explosion is where the growth rate departs from this steady-state, early-21st-century experience, and we see a slow acceleration in growth, maybe to very, very high levels. The other is a type two, where we mean a literal, mathematical singularity: productivity and income go to infinity at some finite point in time in the future. Surprisingly, using sort of standard growth reasoning and automation, you can get either of those outcomes. The first is a simple case; there are more, but one example of the first is when you do achieve complete automation. So not just automating at a rate and never quite finishing; now we're going to fully automate. Here's my first equation: Y is GDP, and K is capital, the automation capital, all the combine harvesters and the supercomputers and the AI. And A is the quality of the capital, the productivity of one unit of capital. So this is fully automated: there's no labor there, no L; labor is now irrelevant to the production of GDP, we can do the whole thing just with machines. That's what that's saying: output just depends on K and the quality of the capital, which we call A. If you look at that, the growth rate of Y is going to be the growth rate of A, the technology level, plus the growth rate of capital. Now, the thing about capital, which is really interesting and different from labor, which Chad's going to be going over in his paper, is that you can keep making more and more of it, right? Because of how you make capital: you invest in it, you build it, and that comes out of GDP. So think about this equation: if I push up capital, I get more output, and then with more output, I can invest more. And more importantly, if I push up the level of technology, I get more and more for every unit of capital; that increases GDP, and I can invest more and keep building more capital. So the growth rate actually turns out to be what's below. I'm ignoring depreciation, but basically, you can see that as long as you can keep pushing up the level of technology, so you keep improving the AI, you keep improving computers, the growth rate is going to track with A: it's going to keep going up and up and up. This is a type one growth explosion, and it's why this is called an AK model, a standard early model in endogenous growth theory. If we can automate everything, this suggests that we can have a very sharp effect on the growth rate. That's one strong view of what AI might do.
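In symbols, the fully automated case just described is the familiar AK model; a minimal sketch, ignoring depreciation as Ben does, with a constant saving rate s:

$$ Y = AK, \qquad \dot{K} = sY \;\Rightarrow\; g_Y = g_A + g_K = g_A + sA, $$

so as long as the technology level A keeps rising, the growth rate itself keeps rising with it. That is the type one explosion.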

Interestingly, another place to put AI, as I alluded to at the very beginning, is in creativity and innovation itself. And if you do that, things can really take off. So this is a knowledge production function: A-dot is the rate of change of the level of technology, the quality of the capital. And if I fully automate how we produce that, again there's no labor in this equation; it just depends on capital, and on the state of technology itself, A. And that's going to act a lot like the second equation, which says that the growth in A depends on the level of A raised to some parameter phi. And that's positive feedback: I push up A, growth in A goes up, which causes growth in Y to go up, and it keeps going like this, okay? And if you solve the differential equation, it actually does produce a true mathematical singularity: there is some point in time, t-star, which is definable, at which we achieve infinite productivity. Now, maybe that sounds like a fantasy. And it may be a fantasy, because there are certain obstacles that can arise. I'll go very quickly through a couple.
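For the record, the blow-up can be written compactly; a sketch of the simplest case, setting the capital term aside as in the verbal description:

$$ \dot{A} = A^{1+\phi}, \quad \phi > 0 \;\Rightarrow\; A(t) = \left(A_0^{-\phi} - \phi t\right)^{-1/\phi}, $$

which diverges at the finite time t* = A_0^{-phi} / phi: a literal mathematical singularity.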

One obstacle is that you simply can't automate everything, right? Both of those models assume you can get to a lot of automation. Maybe automation is actually very hard: maybe it was easy to automate sugar beets, but there are certain cognitive tasks, for example with regard to AI, that are going to be very, very hard to automate. If we never get to full automation, we can still get growth to go up, but we're never going to get these kinds of singularities, at least in the simplest form of these models. So if you think there are some bottleneck tasks that we can't automate, then we're not going to get these labor-free, full-automation singularities. You have to believe, to some extent, that we can truly automate all these things, and that's an open question with AI: how far it can go in goods and services production, and in creative, innovative activity.

The second constraint, and in some sense the latter two constraints come from the universe itself, concerns the differential equation at the top. If that parameter phi is greater than zero, it will give you a singularity: fully automate idea production, and you will get one in finite time. But the question then is really whether we believe that parameter phi is actually larger than zero. What does that say? If phi is greater than zero, then when I increase A, when I increase the level of technology in the economy, I make future growth faster. But if phi is less than zero, then when I raise the level of existing technology, I make future growth slower; it takes away that positive feedback loop, and then you don't get a singularity. And there are good reasons to think that phi might be less than zero. We don't know, but there are reasons to think so, because there are only so many good ideas in the universe: we came up with calculus, we came up with the good ones early, and the remaining ones are hard to discover, or there just aren't that many good ones left. So if you think we're kind of fishing out the pond: think of AI as changing the fishermen, giving us better fishermen on the edge of the pond. But if the pond itself is running out of fish, and the big fish for us are new ideas, it doesn't matter how good your fishermen are; there's nothing left in the pond to catch. I have another version of this called the burden of knowledge. But regardless, there are ideas in the existing economic growth literature about science and innovation that suggest phi may be less than zero, and that would just turn off that singularity.
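A quick numerical illustration of the role of phi (a toy sketch, not the paper's code): integrating the simplified knowledge production function dA/dt = A^(1+phi) forward shows finite-time blow-up when phi is positive, and ever-slowing growth when phi is negative.

```python
# Toy sketch: dA/dt = A**(1 + phi).
# phi > 0 gives a finite-time singularity; phi < 0 gives ever-slowing growth.

def integrate(phi, a0=1.0, dt=1e-4, t_max=5.0):
    """Euler-integrate dA/dt = A**(1 + phi); stop early if A explodes."""
    a, t = a0, 0.0
    while t < t_max:
        a += dt * a ** (1.0 + phi)
        t += dt
        if a > 1e12:  # treat this as "reached the singularity"
            break
    return t, a

for phi in (0.5, -0.5):
    t, a = integrate(phi)
    print(f"phi = {phi:+.1f}: stopped at t = {t:.2f} with A = {a:.3g}")

# With phi = +0.5 and A0 = 1, the analytical blow-up time is
# A0**(-phi) / phi = 2.0, so the loop halts near t = 2.
# With phi = -0.5, A grows like (1 + t/2)**2: no singularity.
```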

And then the third obstacle, which is somewhat related, is that there just might be bottleneck tasks. This comes back to Baumol's cost disease reasoning, but more at a task level. So, for example, let's say that GDP is actually a combination of our output on all these tasks, and in the simplest form, let's say it's the minimum. This is a real bottleneck: you're only as good as your weakest link. It's a simple version of Baumol's cost disease. If it's the min function, it doesn't matter how good you get at every task; the only thing that matters is how good you are at your worst task. In other words, we might be really, really good at agriculture, but at the end of the day, we're really bad at something else, and that's what's holding us back.
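In the simplest form mentioned, with x_1 through x_n denoting output on the n tasks:

$$ Y = \min\{x_1, x_2, \ldots, x_n\}, $$

so output, and hence growth, is pinned down by the weakest task, no matter how productive the others become.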

I think this is actually quite instructive, because think about Moore's Law. People get so excited about Moore's Law in computing, and a lot of people who believe in singularities are staring at the Moore's Law curve. And it is incredibly dramatic: an exponential, rapid, rapid increase in productivity, which is mind-boggling in a way. At the same time, Moore's Law has been going on for a long time, and if you look at economic growth, we don't see an acceleration; if anything, we probably see a slowdown. And that suggests that no matter how good you get at computers, there are other things holding us back. It still takes as long to get from one point on a map to another given available transportation technologies; that's not really changing. So I go back to the Baumol theme: if things really depend on what is essential but hard to improve, we could take our computing productivity to infinity, literally, and it just wouldn't matter. It would help, it would make us richer, it's good. But it won't fundamentally change our growth prospects unless we can go after the hard problems, the ones that are hard to solve.

To conclude, these are a whole series of models; obviously we do this at much greater length in the paper, if you'd like to read it. You can put AI in the production of goods and services. If you can't fully automate, you just kind of slowly automate, and it looks like more of the same; that's sort of the natural way to go. But if you can get to full automation, where you don't need labor anymore, you can get a rapid acceleration in growth, through what we call a type one singularity. When you put AI in the ideas production function, in the creation of new knowledge, you can get even stronger growth effects, and that, in fact, could even lead to one of these true mathematical singularities, as in science fiction. But there are a bunch of reasons, in both cases, to think that we might be limited: because of automation limits; because of search limits in that creative process, with regard to the knowledge production function; or, more generally, in either setting, because of natural laws. I didn't say a lot about it, but the second law of thermodynamics, for example, seems like a big constraint on energy efficiency, one that we're actually pretty close to with current technology. And if energy matters, then that's going to be a bottleneck, even if we can get other things to skyrocket in terms of productivity. So a theme that Chad and I certainly came to in writing this paper is the interesting idea that ultimately growth seems determined, potentially, not by what you are good at, but by what is essential yet hard to improve. And that is important to keep in mind when we get excited about areas where we are advancing quickly, then go back to the aggregate numbers and don't see much progress. This is a pretty useful way, potentially, to frame that and begin to think about it: maybe we should be doing a lot of thinking about what we're bad at improving, and why that is, if we really want to understand future growth. Okay, I went pretty quickly, but hopefully I didn't spill over too much beyond my time. I look forward to the discussions from Rachael and Phil, thanks to them in advance, and I look forward to Chad's comments as well. Thank you.

Anton Korinek  24:27

Thank you, Ben. The timing was perfect. And to all our participants, let me invite you to submit questions through the Q&A field at the bottom of the screen. After all the presentations, we're going to continue the event with discussions of the points that you are raising. And, incidentally, to the speakers: if there are some questions, clarification questions for example, where you can type a quick response, feel free to respond directly in the Q&A box.

Let me now turn it over to Chad. Chad is the STANCO 25 Professor of Economics at the Stanford Graduate School of Business. He is noted for his research on long-run economic growth. In particular, he has examined theoretically and empirically the fundamental sources of growth in incomes over time and the reasons underlying the enormous differences in standards of living across countries. In recent years, he has used his expertise in macroeconomic methods to study the economic causes behind the rise in health spending and top income inequality. He is the author of one of the most popular textbooks of Macroeconomics, and his research has been published in the top journals of economics. Chad, the floor is yours.

Chad Jones  25:50

Wonderful, thanks very much Anton. It’s really a pleasure to be here. I think Anton did a great job of introducing this session and pairing these two papers together. As he said, a lot of growth theory historically looked back and tried to understand how constant exponential growth can be possible for 100 years. The first paper that Ben presented kind of looked at automation, artificial intelligence and possibilities for growth rates to rise and even explode. This paper is going to look at the opposite possibility. And ask, could there be an end of economic growth? And I think all these ideas are worth exploring. And I guess my general perspective is part of the role of economic theory is to zoom in on particular forces and study them closely. And then at the end of the day, we can come back and ask, Well, how do these different forces play against each other? So that’s kind of the spirit of this paper.

So a large number of growth models work this way: basically, people produce ideas, and those ideas are the engine of economic growth. The original papers by Paul Romer, by Aghion and Howitt, and by Grossman and Helpman work this way, and so do the sort of semi-endogenous growth models that I've worked on, and that Sam Kortum and Paul Segerstrom have worked on. Basically, all idea-driven growth models have people producing ideas and ideas driving growth. Now, these models typically assume that population is either constant or growing exponentially, and for historical purposes, that seems like a good assumption. An interesting question to think about, though, is what the future holds. From this perspective, I would say that before I started this paper, my view of the future of global population, which I think is the conventional view, was that it was likely to stabilize at 8 or 10 billion people a hundred years from now, or something like that. Interestingly, there was a book published last year by Bricker and Ibbitson called Empty Planet, and this book made a point that, after you see it, is very compelling and interesting. They claim that maybe the future is actually not one where world population stabilizes; maybe the future is one where world population declines, maybe the future is negative population growth. And the evidence for that is remarkably strong, I would say, in that high-income countries already have fertility rates that are below replacement. The total fertility rate is sort of a cross-sectional measure of how many kids women are having on average. And obviously two is a special number here: if women are having more than two kids on average, then populations tend to rise; if women are having fewer than two kids on average, then the population will decline. Maybe it's 2.1 to take into account mortality, but you get the idea. The interesting fact highlighted by Bricker and Ibbitson, and well known to demographers, is that fertility rates in many, many countries, especially advanced countries, are already below replacement. The fertility rate in the US is about 1.8; in high-income countries as a whole, 1.7; China, 1.7; Germany, 1.6; Japan, Italy and Spain even lower, 1.3 or 1.4. So in many advanced countries, fertility rates are already well below replacement. And then if we look historically, we kind of all know this graph qualitatively: fertility rates have been declining. Take India, for example: in the 1950s and 60s, the total fertility rate in India was something like six, women had six kids on average, and then it fell to five, and then to four, and then to three, and the latest numbers in India, I think, are 2.5 or 2.4. And the perspective you get from this kind of graph is: well, if we wait another decade or two, even India may have fertility below replacement. Fertility rates have been falling all over the world, and maybe they're going to end up below two.

So, the question in this paper is: what happens to economic growth if the future of population growth is negative, rather than zero or positive? And the way the paper is structured, it considers this possibility from two perspectives. First, let's just feed in exogenous population growth: assume population growth is negative half a percent per year forever, feed that into the standard models, and see what happens. And the really surprising thing is that you get a result that I call, in honour of the book, the Empty Planet result. That is, not only does the population vanish with negative population growth, but while that happens, living standards stagnate. So this is quite a negative result: living standards stagnate for a vanishing number of people. And it contrasts with the standard result that all the growth models I mentioned earlier have, which I'm now going to call an Expanding Cosmos result: basically, you get exponential growth in living standards at the same time as the population grows exponentially. So on the one hand, you have this sort of traditional expanding-cosmos view of the world. And what this paper identifies is: hey, if these patterns in fertility continue, we may have a completely different kind of result, where instead of living standards growing for a population that is itself growing, living standards stagnate for a population that disappears.

Then the second half of the paper, and I'll only have a chance to allude to how this works, asks: well, what if you endogenize fertility, endogenize population growth? Do you learn anything else? You can get an equilibrium that features negative population growth; that's good, we can get something that looks like the world. And the surprising result that comes out of that model concerns the social planner: ask what the best is that you can do in this world, choosing the allocation that maximizes the utility of everyone in the economy. (And with negative population growth, the question of who "everyone" is, is itself in question.) The result is that a planner who prefers the expanding-cosmos outcome can actually get trapped in the empty-planet outcome. And that's a surprising kind of result; it might seem like it doesn't make any sense at all, but I'll try to highlight how it can happen.

I'm going to skip the literature review in the interest of time; I've already kind of told you how I'm going to proceed. Basically, what I want to do is look at this negative population growth in the sort of classic Romer framework, and then in a semi-endogenous growth framework, and then go to the fertility results.

Let me start off by illustrating this Empty Planet result in a set of traditional models. So, make one change in traditional models: instead of having positive population growth or zero population growth, have negative population growth, and see what happens. That's the name of the game for the first half of the paper. To do that, let me just remind you what the traditional results are, in a really simplified version of the Romer model. I'm sure you all know, but the paper this model is based on, by Romer, won the Nobel Prize in Economics a couple of years ago, so this is a very well-respected, important model in the growth literature. The insight that got Romer the Nobel Prize was the notion that ideas are nonrival. Ideas don't suffer the same kind of inherent scarcity as goods. If there's an apple on the table, you can eat it or I can eat it: apples are scarce, bottles of olive oil are scarce, coal is scarce, a surgeon's time is scarce. Everything we traditionally study in economics is a scarce factor of production, and economics is the study of how you allocate those scarce factors. But ideas are different. Once we've got the fundamental theorem of calculus, one person can use it, a million people can use it, a billion people can use it, and you don't run out of the fundamental theorem of calculus the way you'd run out of apples or computers.

And so that means that production is characterized by increasing returns to scale: there are constant returns to objects, here just people, and increasing returns to objects and ideas taken together. The parameter sigma being positive measures the degree of increasing returns to scale. Then where do ideas come from? In the Romer model, there's a basic assumption that each person can produce a constant proportional improvement in productivity, so the growth rate of knowledge is proportional to the number of people. And then the Romer model just assumes that population is constant; this is the assumption I'm going to come back and relax in just a second. If you solve this model, income per person, lowercase y, which is just GDP divided by the number of people, is proportional to the stock of ideas, the amount of knowledge: each improvement in knowledge raises everyone's income because of non-rivalry, and that's the deep Romer point. And the growth rate of income per person depends on the growth rate of knowledge, which is proportional to population. So this is a model where you can get constant exponential growth in living standards with a constant population. And if you look at this equation, you realize: well, if there's population growth in this model, that gives us exploding growth in living standards. We don't see exploding growth in living standards historically, and we do see population growth, so there's some tension there. That's what the semi-endogenous growth models are designed to fix, and I'll come back to them in a second.
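
Since the slides aren't reproduced in the transcript, here is one way to write down the simplified Romer setup just described; the notation (N for population, α for research productivity, σ for the degree of increasing returns) is my reconstruction, not a quote from the slides:

Y_t = A_t^{\sigma} N_t, \qquad \frac{\dot{A}_t}{A_t} = \alpha N_t, \qquad y_t \equiv \frac{Y_t}{N_t} = A_t^{\sigma} \quad\Longrightarrow\quad \frac{\dot{y}_t}{y_t} = \sigma\,\frac{\dot{A}_t}{A_t} = \sigma \alpha N_t .

With N constant, living standards grow at the constant exponential rate σαN; if N itself were growing, the growth rate of y would rise without bound, which is the tension with the historical record just mentioned.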

In the meantime, what I want to do is change the assumption that population is constant and replace it with an assumption that the population itself is declining at a constant exponential rate. So let eta (η) denote this rate of population decline: think of eta as 1% per year or half a percent per year, the population falling half a percent per year. Then what happens in this model? Well, if you combine the second and third equations, you get this law of motion for knowledge. And this differential equation is easy to integrate: it says the growth rate of knowledge is itself falling at a constant exponential rate. Not surprisingly, if the growth rate is falling exponentially, then the level is bounded. That's what happens when you integrate this differential equation: you get the result that the stock of knowledge converges to some finite upper bound A*. And since knowledge converges to a finite upper bound, income per person does as well. You can calculate these as functions of the parameter values, and it's interesting to do that; I do a little of it in the paper. But let me leave it for now by just saying: just by changing the assumption that population was constant, making population growth negative, you get this empty planet result. Living standards asymptote: they stagnate at some value y* as the population vanishes. That's the empty planet.
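
Concretely, the integration being alluded to runs as follows (again my reconstruction of the slide, in the same notation as above):

N_t = N_0 e^{-\eta t} \ \Longrightarrow\ \frac{\dot{A}_t}{A_t} = \alpha N_0 e^{-\eta t} \ \Longrightarrow\ \ln A_t = \ln A_0 + \frac{\alpha N_0}{\eta}\bigl(1 - e^{-\eta t}\bigr) \ \Longrightarrow\ A_t \to A^{*} = A_0\, e^{\alpha N_0/\eta}, \quad y_t \to y^{*} = (A^{*})^{\sigma}.

The faster the population declines (the larger η), the lower the plateau A* at which knowledge, and hence living standards, stagnate.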

Now let me look at the other class of models, the semi-endogenous growth models. What's interesting is that in the original framework, with positive population growth, the Romer-style models and the semi-endogenous growth models yield very different outcomes, whereas with negative population growth they yield very similar outcomes. So again, let me go through it in the same order as before: present the traditional result with positive population growth, then change that assumption and show you what happens when population growth is negative. Same goods production function; we're taking advantage of Romer's non-rivalry here. And I'm making basically one change (if you want, set lambda equal to one, it doesn't really matter): I'm introducing what Ben described in the earlier paper as the "ideas are getting harder to find" force, the fishing-out force. Beta measures the rate at which ideas are getting harder to find: the growth rate of knowledge is proportional to the population, but the more ideas you discover, the harder it is to find the next one. So think of beta as some positive number, and then let's put in population growth at some positive, exogenous rate. Same equation: income per person is proportional to the stock of ideas raised to some power, and the stock of ideas is itself proportional to the number of people. And there's an interesting finding here: the more people you have, the more ideas you produce, the larger the total stock of knowledge, and therefore the richer the economy is. People make the economy rich in the long run by producing lots of ideas; they don't make it grow rapidly, in contrast with the earlier models. Then, if you take this equation and take logs and derivatives, it says that the growth rate of income per person depends on the growth rate of knowledge, which in turn depends on the growth rate of people. The growth rate of income per person is proportional to the rate of population growth, where the factor of proportionality is essentially the degree of increasing returns to scale in the economy. So in this model, positive population growth is consistent with constant exponential growth in living standards. This is the expanding cosmos result: we get exponential growth in living standards for a population that itself grows exponentially. Maybe it fills the earth, maybe it fills the solar system, maybe it fills the cosmos; that's the taken-to-the-implausible-extreme result of this model.
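
In the same reconstructed notation, the semi-endogenous version changes only the idea production function, and with population growing at some rate n > 0 the balanced-growth rates follow from requiring the growth rate of knowledge to be constant:

\frac{\dot{A}_t}{A_t} = \alpha\,\frac{N_t^{\lambda}}{A_t^{\beta}} \ \Longrightarrow\ g_A = \frac{\lambda n}{\beta}, \qquad g_y = \sigma\, g_A = \frac{\sigma \lambda n}{\beta}.

So the growth rate of living standards is proportional to the population growth rate, with the degree of increasing returns (σλ/β) as the factor of proportionality.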

Let’s do the same thing: change the assumption that population growth is positive to one of population growth being negative, which, again, rather remarkably, looks like the future of the world we live in, based on the evidence I presented earlier. So once again we've got this differential equation. You substitute in the negative population growth, and you see that the growth rate of knowledge not only declines exponentially because of this term, it falls even faster than exponentially. So of course the stock of knowledge is still going to be bounded. This is another differential equation that's really easy to integrate, and you get that, once again, the stock of knowledge is bounded. You can play around with the parameter values and do some calculations; in the interest of time, let me not do that.
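
To make the contrast concrete, here is a minimal numerical sketch of this semi-endogenous model under positive versus negative population growth. It is a rough Euler integration with illustrative parameter values of my own choosing, not the paper's calibration:

import math

def simulate(n, alpha=0.02, beta=0.5, lam=1.0, sigma=0.5,
             A0=1.0, N0=1.0, dt=0.1, steps=5000):
    """Euler-integrate dA/dt = alpha * N^lam * A^(-beta), with N_t = N0 * exp(n*t),
    and return the path of income per person y_t = A_t^sigma."""
    A = A0
    path = []
    for k in range(steps):
        N = N0 * math.exp(n * k * dt)          # population grows or shrinks at rate n
        A += dt * alpha * N**lam * A**(-beta)  # more people help; old ideas make new ones harder
        path.append(A**sigma)                  # living standards
    return path

expanding = simulate(n=+0.005)  # population growing half a percent a year
empty = simulate(n=-0.005)      # population shrinking half a percent a year

for label, y in [("n > 0", expanding), ("n < 0", empty)]:
    print(label, "y at t = 100, 300, 500:", [round(y[i], 2) for i in (999, 2999, 4999)])

With n < 0 the printed series flattens out near its asymptote y*, while with n > 0 it keeps rising: the empty planet versus expanding cosmos contrast in miniature.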

Let me instead just summarize what we see. First, as a historical statement, fertility has been trending downward: we went from five kids to four kids to three kids to two kids, and now even fewer in rich countries. An interesting thing about that is, from the microeconomic perspective, from the perspective of the individual family, there's nothing at all special about having more than two kids or fewer than two kids. It's an individual family's decision, and some families decide on three, some decide on two, one, zero, whatever. There's nothing magic about above two versus below two from an individual family's perspective. But the macroeconomics of the problem makes this distinction absolutely critical, because obviously, if on average women choose to have slightly more than two kids, we get positive population growth, whereas if women decide to have slightly fewer than two kids, we get negative population growth. And what I've shown you on the previous four or five slides is that that difference makes all the difference in the world to how we think about growth and living standards in the future. If there's negative population growth, that could condemn us to this empty planet result, where living standards stagnate as the population disappears, instead of the world we thought we lived in, where living standards were going to keep growing exponentially along with the population. So this relatively small difference matters enormously when you project growth forward. The fascinating thing is that, as an empirical matter, we seem much closer to the below-two view of the world than to the above-two view. So maybe this empty planet result is something we should take seriously. That, I would say, is the most important finding of the paper.
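
The knife-edge at two kids is easy to see with some stylized arithmetic (this ignores mortality and the timing of births, and assumes half of children are girls). With f kids per woman and a generation length of T years, the population growth rate is roughly

n \approx \frac{\ln(f/2)}{T},

so with T = 30, f = 2.2 gives n ≈ +0.3% per year while f = 1.8 gives n ≈ −0.35% per year: a small difference in family size flips the sign of long-run population growth.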

Let me go to the second half of the paper, just very briefly; I won't go through the model in detail. It's admittedly subtle and complicated, and took me a long time to understand fully, but I do want to give you the intuition for what's going on. I write down a model where people choose how many kids to have, and in the equilibrium of this model, the idea value of kids is an externality. We have kids because we love them, and in my simple model, people ignore the fact that their kids might be the next Einstein or Marie Curie, or Jennifer Doudna, I guess, now with the Nobel Prize for CRISPR, and that they might create ideas that benefit everyone in the world. Individual families ignore the fact that their kids might be Isaac Newton. The planner, maximizing social welfare, does recognize that having kids creates ideas, and so the planner wants you to have more kids than you and I want to have; there's an externality in the simple model along those lines. Admittedly, this is a modeling choice: people have been writing down these kinds of fertility models for a while, there are lots of other forces, and you can get different results. I don't want to claim this as a general result; rather, I see it as illustrating an important possibility. As I mentioned, the key insight you get out of studying this endogenous fertility model is that the social planner can get trapped in the empty planet, even a social planner who wants the expanding cosmos, if they're not careful. I'll try to say what I mean by "if they're not careful". So how to understand that?

In this model, population growth depends on the state variable x, which you can think of as knowledge per person: it's A to some power divided by N to some power, but just call it knowledge per person. We can parameterize the model so that in equilibrium women have fewer than two kids, and so population growth is negative. If population growth is negative, look at what happens to x: I've already told you that A converges to some constant, and N is declining, so x is going off to infinity. In the equilibrium, x is rising forever. What about in the optimal allocation, the allocation that maximizes some social welfare function? Well, the planner is going to want us to have kids not only because we love them, but because they produce ideas that raise everyone's income. The key subtlety is this: suppose we start out in the equilibrium allocation, where x is rising and population growth is negative, and ask, when do we adopt the good policies that raise fertility? The planner wants us to have more kids, but do we adopt the policies that raise fertility immediately? Do we wait a decade? Fifty years? A hundred years? That's the "if you're not sufficiently careful". The point is, if society waits too long to switch to the optimal rate of fertility, then x is going to keep rising, and the idea value of kids gets small as x rises. Remember, x is knowledge per person: as x rises, we have tons of knowledge for every person in the economy, so the marginal benefit of another piece of knowledge gets smaller and smaller, and the idea value of kids gets smaller and smaller. And because we've already said that the loving-your-kids force on its own leads to negative population growth, even if you add a positive idea value of kids, the planner might still want negative population growth if you wait too long. If you wait for the idea value of kids to shrink sufficiently low, then even the planner who, ex ante, preferred the expanding cosmos gets trapped by the empty planet. So what this says is that it's not enough to worry about fertility policy; we have to worry about it sooner rather than later. And here's just a diagram.
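
In symbols (my paraphrase of the mechanism, not the paper's exact expressions), the state variable is something like

x_t \equiv \frac{A_t^{a}}{N_t^{b}}, \qquad a, b > 0,

and along the laissez-faire path A_t → A* while N_t → 0, so x_t → ∞. The marginal idea value of an extra child is decreasing in x, so if the switch to pro-fertility policy comes only after x has grown large enough, even the planner's optimal fertility is below replacement, and the empty planet becomes absorbing.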

I think I’m almost out of time, so let me just conclude. What I take away from this paper is that fertility considerations are likely to be much more important than we thought. This distinction between slightly above two and slightly below two, which from an individual family's standpoint barely seems to matter, is a big deal from an aggregate, macroeconomic standpoint: it's the difference between the expanding cosmos and the empty planet. As I mentioned when I started, this is not a prediction; it's a study of one force. But I think it's much more likely than I would have thought before I started this project. And there are other possibilities, of course. We've talked about one, with AI producing ideas so that people aren't necessary: important in my production function is that people are a necessary input, you don't get ideas without having people, and maybe AI can change that. That's something we should discuss in the open period. There are other forces: technology may affect fertility and mortality; maybe we end up reducing the mortality rate to zero, so that even one kid per person is enough to keep the population growing, for example. Maybe evolutionary forces favor groups that have high fertility for some reason, maybe selection favors those genes, and so maybe this below-replacement world we look like we're living in is not going to persist in the long run. But anyway, I think I'm out of time, so let me go ahead and stop there.

Anton Korinek  48:33

Thank you very much, Chad. And let me remind everybody of the Q&A again. Our first discussant of these ideas is Rachael Ngai. Rachael is a professor of economics at the London School of Economics, a research associate at the Centre for Economic Performance, and a research affiliate at the Centre for Economic Policy Research. Her interests include macroeconomic topics such as growth and development, structural transformation, as well as labor markets and housing markets. Rachael, the floor is yours.

Rachael Ngai  49:11

Thank you, Anton. And thank you very much for having me discuss these two very interesting papers. There's a lot of interesting content in both, but because of time, I will focus on the aspects related to the future of economic growth and the roles played by artificial intelligence and declining population growth. Now, when we talk about artificial intelligence, there are many aspects, political and philosophical, which I will not have time to talk about. Today I will focus purely on the implications for the future of economic growth.

Okay, so economic growth is about the improvement in living standards. When we think about the fundamental source of growth, as both Ben and Chad point out, it's technological progress. Technological progress can happen through R&D, or through experience: when we are doing something, we get better at doing it. But the key thing for technological progress is that it requires brain input. For the last 2,000 years or so, the main brain input has been the human brain, and there are examples we have already mentioned of how research output has improved living standards for mankind over that period. Now, Chad's paper is very interesting, and it brings up something that is really important. Here is the figure that repeats what Chad has shown us from the United Nations, about how many children women have: as you have seen, in high-income countries it has already fallen below the replacement ratio, which is about two, and for the world as a whole it is also falling. In fact, the United Nations predicts that in 80 years population growth will reach zero, and that means that going forward we will see negative population growth. What Chad has convincingly shown is that when that happens, we might get the empty planet result, which is that living standards will stagnate and the human race will start to disappear.

And this is really an alarming result. The reason for it is that the private incentive for having children (we love children) does not take into account that children produce ideas that are useful for technological progress. So clearly there's a role for policy here, which Chad mentioned earlier as well: we could try to introduce policies that stimulate people to have more children. And the problem is, if we wait too long, then the empty planet result cannot be avoided. So that is something really, really worrying.

Then there is Ben's paper, which gives you an alternative scenario. Suppose we think of the human brain as basically a machine, so that artificial intelligence can replicate the human brain. In fact, in Chinese, the word for computer translates as "electric brain". So the question is really whether the electric brain can replace the human brain. If it can, then we can avoid the stagnation of the empty planet result. Even more, we might be able to move to a technological singularity, where artificial intelligence can self-improve and growth can explode.

Now, I think we are all more or less convinced that the singularity result seems quite implausible, because one simple thing one can say is that many essential activities cannot be done by AI, and because of that, what is sometimes called the Baumol effect, you will not get the situation where growth explodes. So let me focus on whether AI can solve the problem that Chad mentioned, the stagnation result. How plausible is it, really, that AI can completely replace humans in generating technological progress, meaning that in the R&D production function we do not need humans anymore, we can just have AI? How is that possible?

Here's a brief timeline of the development of artificial intelligence, which is quite remarkable; it started around 1950. Over the last 70 years a lot of progress has been made and there have been many great discoveries. But is it enough, and what should we look for in the future? There's a report by Stanford University called the Artificial Intelligence Index Report, and there are a few points from it I want to highlight. One is that the human brain itself is still needed to improve AI. Over the last 10 years, 2010 to 2019, published papers about artificial intelligence increased by 300%, and papers posted online before publication increased by 2,000%. So there's a huge increase in researchers trying to improve AI, and at the same time we see a lot of students choosing to go to university to study AI. So it looks like we still need quite a lot of human brains poured into making artificial intelligence capable of replacing the human brain. Progress is being made in many areas, but there are a lot of questions here. AI is good at finding patterns in observed data; that is basically how artificial intelligence works with big data. But can it really work like the human brain on intuition and imagination?

Now, on the right-hand side here, I took one example from this annual report, where a video is shown to the machine and the machine is asked to recognize what is going on in it. When you show a video of some high-activity task, for example Zumba dancing, the precision rate is very high; the machine picks up the activity very easily. But if you look at other activities, the hardest one shown is drinking coffee. Presumably, when people enjoy their coffee, they do not make many special movements, so there's no distinctive characteristic for the machine to pick up easily; the precision rate is less than 10%, and there has been very little progress over the last 10 years. So my take is that it will still be quite a long time before artificial intelligence completely replaces the human brain. And the timing really matters: if the world is going to have stagnant population in 80 years, do we have enough time to make artificial intelligence replace the human brain? So when you think about future growth, here's the question: which is less costly and more likely, producing human brains, or producing human-like artificial intelligence? Can we humans, with the help of artificial intelligence, actually create an Einstein-like artificial intelligence? To me, I don't know, it seems quite difficult. But on the other hand, if we go back to Chad Jones's paper, it says we need policy, policy to increase fertility, and that's not easy on its own. Women today face a trade-off between career concerns and having children. Childcare subsidies and maternity leave are costly policies, and much of the time they might not work. So when we think about fertility, of course, there are lots of theories; here I'm just going to focus on a few things.

What is behind this? If you look historically at how we could have very high fertility in the past, something like five children per woman, a big role was played by family farms. On the right-hand side here is some data showing how the fraction of women working on family farms has been declining over time. Family farms are very special: they create demand for children, because children can help on the farm, and they also allow a woman to combine home production and work. But the processes of urbanization and structural transformation have come along with the disappearance of family farms. In the modern day, when a woman goes to work, it really means leaving home, making it hard to combine home production and work. So look at home production. Here I show you a picture of home production time per day and market production time per day, for women and for men: the first bar is women, the second bar is men, and these two bars represent the world. What we see is something really striking: women's home production and childcare time is triple men's. For every hour of home production men do, women do three. That kind of picture may give young women in particular pause when choosing whether to get married and have children, at a time when women's education is rising and there is growing concern for gender equality.

So let me just conclude with this, on the future of fertility. I hope I have sort of convinced you that artificial intelligence will take some time, and if we don't change anything, in 80 years population growth will go negative. We really need to think about what we can do about fertility. Childcare subsidies and maternity leave will not be enough. One possibility that might help women choose to have more children is greater scope for outsourcing home production to the market, but that depends on the development of the service economy. Of course, social norms are important as well: the social norm around the role of the mother can play a crucial role in a woman's decision to become a mother. But social norms themselves change over time, and they will respond to technology and policy. So there is some hope: if these things all work, perhaps we can reverse the fertility trend and bring it above the replacement level before, or together with, artificial intelligence. And that would be the hope for the future of growth. Thank you very much.

Anton Korinek  60:59

Thank you very much, Rachael. Our next discussant is Philip Trammell. Phil is an economist at the Global Priorities Institute at the University of Oxford. His research interests lie at the intersection of economic theory and moral philosophy, with a specific focus on the long term. As part of this focus, he is also an expert on long-run growth issues. And incidentally, he has written a recent paper on growth and transformative AI together with me, in which we synthesize the literature related to the theme of today's webinar. Phil, the floor is yours.

Phil Trammell  62:32

Thank you, Chad, Ben and Rachael. And thank you, Anton, for giving me this chance to see if I can keep up with the Joneses. Some of what I say will overlap with what's already been said, but hopefully I have something new to say. As Anton said at the beginning, when thinking about growth, economists are typically content to observe, as Kaldor first famously did, that growth has been roughly exponential at 2 to 3% a year since the Industrial Revolution, and so they'll assume that this will continue, at least over the timescales they care about. Sometimes they do this bluntly, by just stipulating an exogenous growth process going on in the background and then studying something else. But even when constructing endogenous or semi-endogenous growth models, that is, ones that model the inputs to growth explicitly (research and so on), a primary concern of these models is usually to match this stylized description of growth over the past few centuries. For example, the Aghion, Jones and Jones paper that Ben presented is unusually sympathetic to the possibility of a growth regime shift and acceleration. But even so, it focuses less on scenarios in which capital becomes highly substitutable for labor in tech production, ones that overcome the Baumol effect, on the grounds that as long as that phi parameter Ben mentioned is positive, which I think the authors believed at the time, capital accumulation is enough to generate explosive growth, which is not what we've historically observed. Restrictions along these lines appear throughout the growth literature. As a result, alternative growth regimes currently seem to be off most people's radar. For example, environmental economists have to think about longer timescales than most economists, but they typically just assume exponential growth, or a growth rate that falls to zero over the next few centuries. A recent survey of economists and environmental scientists just asked, "when will growth end?", as if that roughly characterized the uncertainty. Of those with an opinion, about half said within this century and about half said never; no one seems to have filled in a comment saying they thought it would accelerate. And when asked why growth might end, insufficient fertility wasn't explicitly listed as a reason, and no one seems to have commented on its absence.

But on a longer timeframe, accelerating growth wouldn't be ahistorical: the growth rate was far lower before the Industrial Revolution, and before the agricultural revolution it was lower still. Some forecasts on the basis of these longer-run trends have predicted continuing acceleration, sometimes in the near future; if the growth rate were multiplied by a factor of 20 again, it might be 40% a year or something. Furthermore, radically faster growth doesn't seem deeply theoretically impossible, I don't think. Lots of systems do grow very quickly: if you put mold in a petri dish, it'll multiply a lot faster than 2% a year.

So, more formally, the Ben paper finds that you can get permanent acceleration under this innocent-seeming pair of conditions. First, you need capital that can start doing research without human input, or can substitute well enough to overcome that Baumol effect. And second, you need phi at least zero, that is, the fishing-out effect not too strong. Just to recap what phi at least zero means: when you have advanced tech, on the one hand it gets easier to advance further, because you have the aid of all the tech you've already developed; on the other hand, it gets harder, because you've already picked all the low-hanging fruit. Phi less than zero means the second effect wins out. So as you can see, these two conditions are basically a way of formalizing the idea of recursively self-improving AI leading to a singularity, and then translating that into the language of economics.
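
A heuristic way to write this pair of conditions (a compressed sketch, not the exact system in the paper being discussed) is via the idea production function

\dot{A}_t = A_t^{\varphi} X_t,

where X_t is research input. Condition one says X_t can be supplied by accumulable capital rather than by people; condition two is φ ≥ 0. If both hold, the loop "more ideas → more output → more research capital X → more ideas" is self-reinforcing and growth keeps accelerating, whereas with φ < 0 the fishing-out drag eventually wins, which is why the negative estimates of φ mentioned next rule out a true singularity while still permitting a one-time growth-rate increase.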

That formalization is a great contribution in its own right, but the really nice thing about it is that it lets us test these requirements of the singularitarian scenario. So, as Ben noted, a recent paper estimates phi to be substantially negative, or using Chad's notation, beta to be positive, implying that even reproducing and self-improving robot researchers couldn't bring about a real singularity, a type one or type two. But they could still bring about a one-time growth rate increase, as long as they can perform all the tasks involved in research.

In any event, this is just one model; there are plenty of others. Anders Sandberg here put together a summary back in 2013 of what people had come up with at the time, and Anton and I did the same more recently to cover the past decade of economists' engagement with AI. But I think the most significant contribution on this front is just the paper that Ben presented. It solidifies my own belief, for whatever little it's worth, that an AI growth explosion of one kind or another, even just a growth rate increase rather than a singularity, is not inevitable, but not implausible. It's at least a scenario we should have on our radar.

This is all very valuable for those of us interested in thinking about the range of possibilities for long-run growth. For those of us also interested in trying to shape how the long-run future might go, though, what we especially want to keep an eye out for are opportunities for very long-run path dependence, not just forecasting. In fact, I think almost a general principle for those interested in maximizing their long-term impact would be to look for systems with multiple stable equilibria that have very different levels of social welfare, where we're not yet locked into one, and then to look for opportunities to steer toward a good stable equilibrium. So we have to ask ourselves: does the development of AI offer us any opportunities like this? If so, I don't think the economics literature has yet identified them, actually. As Ben Garfinkel here has pointed out, a philanthropist who saw electric power coming decades in advance might not have found that insight decision-relevant; it just doesn't really help you do good. There could be long-term consequences of the social disruption AI could wreak, or of who first develops AI and, say, takes over the world. And most dramatically, if we do something to prevent AI from wiping out the human species, that would certainly be a case of avoiding a very bad and very stable equilibrium. But scenarios like these aren't really represented in the economics literature on AI.

By contrast, path dependency is a really clear implication of Chad's paper. We may have this once-and-forever opportunity to steer civilization from the empty planet equilibrium to the expanding cosmos equilibrium, by lobbying for policies that maintain positive population growth and thus maintain a positive incentive to fund research and fertility. To my mind, this is a really important and novel insight, and it would be worth a lot more papers to trace out more fully just under what conditions it holds. But I think it's pretty robust. The key ingredient is just that if there's too much tech per person, the social planner can stop finding it worthwhile to pay for further research. For the reasons Chad explained, fertility has proportional consumption costs (to bring about a proportional population increase, people have to give up a certain fraction of their time to have the children), but it would no longer produce proportional research increases, because there's this mountain of ideas you can hardly add much to in proportional terms. So as long as that dynamic holds, you'll get that pair of equilibria.

So, for example, in the model, people's utility takes this quirky form you see here, where c is average consumption at a time and n is how many descendants people have alive at a time. But you might wonder: what if people are more utilitarian, what if they're, say, number-dampened time-separable utilitarians like this? Well, if their utility function takes this form, as Chad points out in the paper, we actually get the same results; the utility functions are basically just monotonic transformations of one another, so they represent the same preference ordering, and that's how you can see it. Likewise, in the model, people generate innovation just by living. This is equivalent to exogenously stipulating that a constant fraction of the population has to work as researchers full time. But what if research has to be funded by the social planner, at the cost of having fewer people working on final good output and thus lower consumption? Well then, at least if my own scratch work is right, we still have our two stable equilibria, and in fact, in this case, the bad one stagnates even more fully: research can zero out even before everyone has died off, because it's just not worth allocating any of the population to research as opposed to final good production.

Finally, sort of like Rachael was saying, I think there's an important interaction between the models. If we're headed for the empty planet equilibrium, the technology level plateaus. But the plateau level can depend on policy decisions at the margin, like research funding or just a little more fertility, even if they don't break us out of the equilibrium. And the empty planet result doesn't hold if capital can accumulate costlessly and do the research for us. So maybe all that matters is just making sure we make it over the AI threshold and letting the AI take it from there.

Well, to wrap up: if we care about the long run, we should consider a wider spectrum of ways long-run growth might unfold, not just those matching the Kaldor facts of the last few centuries. If we care about influencing the long run, we should also look for those rare pivotal opportunities to change which scenario plays out. To simplify a lot, the Ben paper helps us with the former, showing how a growth singularity via AI may or may not be compatible with reasonable economic modeling. And the Chad paper helps us with the latter, showing a counterintuitive channel through which we could get locked into a low-growth equilibrium, sort of ironically via excessive tech per person, and a policy channel that could avert it. He focuses on fertility subsidies; destroying technological ideas would do the trick too, because it would shrink the number of ideas per person, but hopefully the future of civilization doesn't ultimately depend on longtermists taking to book burning. And hopefully all this paves the way for future research on how we can reach an expanding cosmos. Thank you.

Anton Korinek  76:19

Thank you, Phil, and thank you all for your contributions, and to everyone who has posted so many interesting questions in our Q&A. Luckily, many of them have already been answered in writing, because we are at the end of our allocated time. So let me just give both of our speakers 30 seconds for a quick reaction to the discussion. Ben, would you like to go first?

Ben Jones  76:51

Sure, I will. Thanks, everyone, for all the great questions in the Q&A, and thanks, Rachael and Phil, for very interesting discussions of this pair of papers. I think the distinction of whether you can automate the ideas production function or not, and what we believe about that, determines which very different trajectory we end up on, and it's a super interesting question for research. I guess one last comment. The singularity-type people tell a story something like: you get one algorithm as good as or better than a human, and because you can then have huge increasing returns to scale from that invention, that AI, you can keep repeating it over and over again as instantiations on computing equipment, so you get essentially unbounded, or at least very fast-growing, input into the idea production function. That's where you get the really, really strong singularity; I think that's a more micro statement of what's going on. But the point Chad and I are making, another way to think about it, is that you're not going to replicate the human all at once. It's sort of like: we had a slide rule, then we had a computer; we have centrifuges, we've got automated pipetting. Research, just like production, is a whole set of different tasks, and probably what's going to happen is that we slowly continue to automate some of those tasks. The more you automate, the more you leverage the people who are left, and you can throw capital at the automated tasks. That still doesn't necessarily get you to singularities, but it's potentially the way past the point Chad is making. And I think this work collectively helps us think about where the rubber hits the road, in terms of what we have to believe and where the action will be for the long-run outcomes.

Anton Korinek  78:36

Thank you Ben. Chad?

Chad Jones  78:38

Yeah, so let me thank Phil and Rachael for excellent discussions; those were really informative. The one thing I took away from your discussion and from pairing these two papers together is the point you both identified, so I'll just repeat it, because I think it's important. An interesting question is: does the AI revolution come soon enough to avoid the empty planet? When you put these papers together, that's the thing that jumps out at you the most. As Phil mentioned, and Ben was just referring to, small improvements can help you get there, so maybe it's possible to leverage our way into that, but it's by no means obvious. As has been pointed out, if you've got this fixed pool of ideas, then the AI improves the fishers but doesn't change the pool. So I think a lot of these questions deserve a lot more research. Anton, thanks for putting this session together; it was really great and very helpful.

Anton Korinek  79:34

Thank you, everyone, for joining us today and I hope to see you again soon at one of our future webinars on the governance and economics of AI. Bye.




21May

GovAI Annual Report 2019 | GovAI Blog


The governance of AI is in my view the most important global issue of the coming decades. 2019 saw many developments in AI governance. It is heartening to see how rapidly this field is growing, and exciting to be part of that growth.

This report provides a summary of our activities in 2019.

We now have a core team of 7 researchers and a network of 16 research affiliates and collaborators. This year we published a major report, nine academic publications, four op-eds, and our first DPhil (read Oxfordese for PhD) thesis and graduate! Our work covered many topics:

  • US public opinion about AI
  • The offense defense balance of AI and scientific publishing
  • Export controls
  • AI standards
  • The technology life-cycle of AI and domestic politics
  • A proposal for how to distribute the long-term benefits from AI for the common good
  • The social implications of increased data efficiency
  • And others…

This, however, just scratches the surface of the problem, and we are excited to grow our team and our ambitions so as to make better progress. We are fortunate in this respect to have received financial support from, among others, the Future of Life Institute, the Ethics and Governance of AI Initiative, and especially from the Open Philanthropy Project. We are also fortunate to be a part of the Future of Humanity Institute, which is dense with good ideas, brilliant people, and a truly long-term perspective. The University of Oxford similarly has been a rich intellectual environment, with increasingly productive connections with the Department of Politics and International Relations, the Department of Computer Science, and the new AI Ethics Institute.

As part of our growth ambitions for the field and GovAI, we are always looking to help new talent get into the field of AI governance, be that through our Governance of AI Fellowship, hiring researchers, finding collaborators, or hosting senior visitors. If you’re interested, visit www.governance.ai for updates on our latest opportunities, or consider reaching out to Markus Anderljung (ma***************@ph********.uk).

We look forward to seeing what we can all achieve in 2020.

Allan Dafoe
Director, Centre for the Governance of AI
Associate Professor and Senior Research Fellow
Future of Humanity Institute, University of Oxford

Research

Research from previous years available here.

Major Reports and Academic Publications
  • US Public Opinion on Artificial Intelligence by Baobao Zhang and Allan Dafoe. In the report, we present the results from an extensive look at the American public’s attitudes toward AI and AI governance. We surveyed 2,000 Americans with the help of YouGov. As the study of public opinion toward AI is relatively new, we aimed for breadth over depth, with our questions touching on: workplace automation; attitudes regarding international cooperation; the public’s trust in various actors to develop and regulate AI; views about the importance and likely impact of different AI governance challenges; and historical and cross-national trends in public opinion regarding AI. Featured in Bloomberg, Vox, Axios and the MIT Technology Review.
  • How Does the Offense-Defense Balance Scale? in Journal of Strategic Studies by Ben Garfinkel and Allan Dafoe. The article asks how the offense-defense balance scales, meaning how it changes as investments into a conflict increase. Simple models of ground invasions and cyberattacks that exploit software vulnerabilities suggest that, in both cases, growth in investments will favor offense when investment levels are sufficiently low and favor defense when they are sufficiently high. We refer to this phenomenon as offensive-then-defensive scaling or OD-scaling. Such scaling effects may help us understand the security implications of applications of artificial intelligence that in essence scale up existing capabilities.
  • The Interests behind China’s Artificial Intelligence Dream by Jeffrey Ding in the edited volume “Artificial Intelligence, China, Russia and the Global Order”, published by Air University Press. This high-level overview of China’s AI dream places China’s AI strategy in the context of its past science and technology plans, outlines how AI development intersects with multiple areas of China’s national interests, and discusses the main barriers to China realizing its AI dream.
  • Jade Leung completed her DPhil thesis Who Will Govern Artificial Intelligence? Learning from the history of strategic politics in emerging technologies, which looks at how the control over previous strategic general purpose technologies – aerospace technology, biotechnology, and cryptography – changed over the technology’s lifecycle, and what this might teach us about how the control over AI will shift over time.
  • The Vulnerable World Hypothesis in Global Policy by Nick Bostrom. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the ‘semi‐anarchic default condition’. It was originally published as a working paper in 2018.

A number of our papers were accepted to the AAAI AIES conference (which in the discipline of computer science is a standard form of publishing), taking place in February 2020:

  • The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse of the Technology? by Toby Shevlane and Allan Dafoe. The existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this article argues that the same cannot be assumed for AI research. It provides a theoretical framework for thinking about the offense-defence balance of scientific knowledge.
  • The Windfall Clause: Distributing the Benefits of AI for the Common Good by Cullen O’Keefe, Peter Cihon, Carrick Flynn, Ben Garfinkel, Jade Leung and Allan Dafoe. The windfall clause is a policy proposal to devise a mechanism for AI developers to make ex-ante commitments to distribute a substantial part of profits back to the global commons if they were to capture an extremely large part of the global economy via developing transformative AI.
  • U.S. Public Opinion on the Governance of Artificial Intelligence by Baobao Zhang and Allan Dafoe. In the report, we present the results from an extensive survey into 2,000 Americans’ attitudes toward AI and AI governance. The results are available in full here.
  • Near term versus long term AI risk framings by Carina Prunkl and Jess Whittlestone (CSER/CFI). This article considers the extent to which there is a tension between focusing on the near and long term AI risks.
  • Should Artificial Intelligence Governance be Centralised? Design Lessons from History by Peter Cihon, Matthijs Maas and Luke Kemp (CSER). There is a need for urgent debate over how the international governance of artificial intelligence should be organised. Can it remain fragmented, or is there a need for a central international organisation? This paper draws on the history of other international regimes to identify advantages and disadvantages involved in centralising AI governance.
  • Social and Governance Implications of Improved Data Efficiency by Aaron Tucker, Markus Anderljung, and Allan Dafoe. Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the social-economic impact of increased data efficiency on e.g. market concentration, malicious use, privacy, and robustness.
Op-Eds & other Public Work
  • Artificial Intelligence, Foresight, and the Offense-Defense Balance, War on the Rocks, by Ben Garfinkel and Allan Dafoe. AI may cause significant changes to the offense-defense balance in warfare. Changes that essentially scale up existing capabilities are likely to be much easier to analyze than changes that introduce fundamentally new capabilities. Substantial insight into the impacts of AI can be achieved by focusing on this kind of quantitative change. The article summarises the work of How Does the Offense-Defense Balance Scale? in the Journal of Strategic Studies by the same authors.
  • Thinking about Risks from Artificial Intelligence: Accidents, Misuse and Structure, Lawfare by Remco Zwetsloot and Allan Dafoe. Dividing AI risks into misuse risks and accident risks has become a prevailing approach in the AI safety field. This piece argues that a third, perhaps more important, source of risk should be considered: structural risks. AI could shift political, social and economic structures in a direction that puts pressure on decision-makers — even well-intentioned and competent ones — to make costly or risky choices. Conversely, existing political, social and economic structures are important causes of risks from AI, including risks that might look initially like straightforward cases of accidents or misuse.
  • Public Opinion Lessons for AI Regulation Brookings Report by Baobao Zhang. An overwhelming majority of the American public believes that artificial intelligence (AI) should be carefully managed. Nevertheless, the public does not agree on the proper regulation of AI applications, as illustrated by the three case studies in this report: facial recognition technology used by law enforcement, algorithms used by social media platforms, and lethal autonomous weapons.  
  • Export Controls in the Age of AI in War on the Rocks by Jade Leung, Allan Dafoe, and Sophie-Charlotte Fischer. Some US policy makers have expressed interest in using export controls as a way to maintain a US lead in AI development. History, this piece argues, suggests that export controls, if not wielded carefully, are a poor tool for today’s emerging dual-use technologies such as AI. At best, they are one tool in the policymakers’ toolbox, and a niche one at that.
  • GovAI (primarily Peter Cihon) led on a joint submission with the Center for Long Term Cybersecurity (UC Berkeley), the Future of Life Institute, and the Leverhulme Centre for the Future of Intelligence (Cambridge) in response to the US government’s RFI Federal Engagement in Artificial Intelligence Standards.
  • A Politically Neutral Hub for Basic AI Research by Sophie-Charlotte Fischer. This piece argues that a politically neutral hub for basic AI research, committed to the responsible, inclusive, and peaceful development and use of new technologies, should be set up.
  • Ben Garfinkel has been doing research on AI risk arguments, exemplified in his Reinterpreting AI and Compute, a number of internal documents (many of which are shared with OPP), his EAG London talk, and an upcoming interview on the 80,000Hours Podcast.
  • ChinAI Newsletter. Jeff Ding continues to produce the ChinAI newsletter, which now has over 6,000 subscribers.
Technical Reports Published on our Website
  • Stable Agreements in Turbulent Times: A Legal Toolkit for Constrained Temporal Decision Transmission by Cullen O’Keefe. Much of AI governance research focuses on the question of how we can make agreements or commitments now that have a positive impact during or after a transition to a world of advanced or transformative artificial intelligence. However, such a transition may produce significant turbulence, potentially rendering the pre-transition agreement ineffectual or even harmful. This Technical Report proposes some tools from legal theory to design agreements where such turbulence is expected.
  • Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development by Peter Cihon. AI standards work is ongoing at ISO and IEEE, two leading standards bodies. But these ongoing standards efforts risk not addressing policy objectives, such as a culture of responsible deployment and use of safety specifications in fundamental research. Furthermore, leading AI research organizations that share concerns about such policy objectives are conspicuously absent from ongoing standardization efforts. This Technical Report summarises ongoing efforts in producing standards for AI, what their effects might be, and makes recommendations for the AI Governance / Strategy community.
Select publications by our Research Affiliates

Public engagement

Many more of our public appearances (e.g. talks, podcasts, interviews) can be found here. Below is a subset:

Team and Growth

The team has grown substantially. In 2019, we welcomed Toby Shevlane as a Researcher, Ben Garfinkel and Remco Zwetsloot as DPhil scholars, Hiski Haukkala as a policy expert, Ulrike Franke and Brian Tse as Policy Affiliates, in addition to Carina Prunkl, Max Daniel, and Andrew Trask as Research Affiliates. 2019 also saw the launch of our GovAI Fellowship, which received over 250 applications and welcomed 5 Fellows in the summer. We will continue to run the Fellowship in 2020, with a Spring and a Summer cohort.

We continue to receive a lot of applications and expressions of interest from researchers across the world who are eager to join our team. In 2020, we plan to continue our GovAI Fellowship programme, engaging with PhD researchers particularly in Oxford, and hiring additional researchers.




21May

Carles Boix and Sir Tim Besley on Democratic Capitalism at the Crossroads


Carles Boix is the Robert Garrett Professor of Politics and Public Affairs in the Department of Politics and the Woodrow Wilson School of Public and International Affairs at Princeton University. In 2015, he published Political Order and Inequality, followed by Democratic Capitalism at the Crossroads: Technological Change and the Future of Politics, the subject of our webinar, in 2019.

Sir Tim Besley is School Professor of Economics and Political Science and W. Arthur Lewis Professor of Development Economics in the Department of Economics at LSE. He is also a member of the National Infrastructure Commission and was President of the Econometric Society in 2018. He is a Fellow of the Econometric Society and of the British Academy, and a Foreign Honorary Member of the American Economic Association and the American Academy of Arts and Sciences. In 2016 he published Contemporary Issues in Development Economics.

You can watch a recording of the event here or read the transcript below:

Allan Dafoe:

Okay, welcome everyone. I hope you can all hear and see us. Today we have the privilege of having Carles Boix and Sir Tim Besley talk to us about insights from Carles’s recent book, Democratic Capitalism at the Crossroads: Technological Change and the Future of Politics. Carles is the Robert Garrett Professor of Politics and Public Affairs in the Department of Politics and the Woodrow Wilson School of Public and International Affairs at Princeton University. He has published important work in political economy and comparative politics, particularly around the role of institutions in shaping economic growth and inequality. Carles’s work has been especially impactful on my career choice, I’m happy to share. In 2003, when I was considering what fields to move into (economics, political science, sociology), I read Carles’s book Democracy and Redistribution, and I was at the time, and remain, deeply impressed by the sweep and importance of its argument, but also by its theoretical parsimony and breadth of empirical support; I highly recommend it. This book in fact led me to see political science as a discipline compatible with asking big, important questions such as, as will be discussed today, the impact of advances in AI on political institutions and inequality.

Following Carles’s talk, Sir Tim Besley will offer some reflections on the theme. Tim is School Professor of Economics and Political Science and W. Arthur Lewis Professor of Development Economics in the Department of Economics at LSE. He is also a member of the National Infrastructure Commission and was President of the Econometric Society in 2018. Tim has published in top journals in economics and, I’m also happy to see, in top journals in political science, on topics spanning political violence, state capacity, economic development, and their interactions.

Following this, we will have a 30-minute discussion. At the bottom of the screen you can see “Ask a question”; please do click on that and articulate your questions and comments, and also vote up the questions you most think worth discussing. We may not get to many, or any, of them, but I will certainly try to look at them and incorporate them into the conversation. Okay, so with that out of the way, it’s again our pleasure to have Carles introduce some of the themes and insights from his book.

Carles Boix:

Okay, so, thank you so much for inviting me and for this very kind introduction, and thank you to Tim for being willing to discuss the book, and of course to all those attending. I think it's going to be a bit difficult, like talking to a dark room; normally we are used to seeing what the audience thinks, at least when we give seminars or talks. So I'll try my best. I hope that the future is not like this, that this is just a short shock to all of us and we can soon reconvene in a normal way. So I'm going to use some slides; let me look for them.

So basically, this is a book that I published last year, and it's driven by a double motivation. The first one is what I think is a long-term intellectual question, which is the compatibility of democracy and capitalism. This is a long-running question that in the 19th century was answered in a very pessimistic way, both by the right and the left. Marx thought democracy and capitalism were incompatible: having one person, one vote could not go together with the protection of what he called the interests of the bourgeoisie. But it was also answered in a pessimistic way by conservatives, and in fact by liberals like John Stuart Mill, who made the point that the two would only be compatible provided the population was educated to a high level. Then, after these periods of pessimism, what we see in the 20th century, at least in its second half, is a period of optimism: the coining of the term democratic capitalism, the possibility of having representative democracy, markets, or regulated but still free markets, and a kind of strong welfare state. And today, what we see is a moment of questioning of the compatibility of these two things. That's where the book comes in, and this links the long-term intellectual question with today's politics. That's the second motivation, which I think is perhaps what has driven most of you to attend this talk. The politics of today, at least in the advanced world, is one where we see a lot of mistrust towards politicians, growing abstention, polarisation, and the rise of what many call populist parties.

This ideational change that I'm talking about, across the 19th century, the 20th century, and today, also runs parallel to economic change. So what I'm showing here is, over time, for a few countries for which we have some data (the US, Britain, and Japan), the evolution of their level of income inequality measured by the Gini index, defined below. What we see is high levels of inequality in the 19th century, a sharp decline by the middle, or the end of the first third, of the 20th century, and then, approximately around 1970 or 1980, growing inequality in those countries; so, three periods. My answer in the book is that this is in a way driven by technological change, which then has consequences in the labour market and in politics. These technological changes, generated under industrial capitalism by the search for more productivity and greater efficiency, lead to a process of capital-labour substitution that results in what I would call different production models, especially as regards what kind of labour is complementary to capital. So in the book I distinguish and discuss three moments, which are not precise but roughly dominate each century. First, what I call the Manchester model, where what we see is a process of mechanisation, at least of some industries, the paradigm being the textile industry: the use of mostly unskilled labour, the decline of demand for artisans, and an increasing use of unskilled labour, which then coincides with, or leads to (there is discussion on that in the literature), declining wages and a more unequal distribution of income.
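For reference, a standard definition of the Gini index used in such plots (not necessarily the exact variant behind the book's data): for incomes $x_1, \dots, x_n$ with mean $\bar{x}$, it is the mean absolute difference between all pairs of incomes, normalised by twice the mean,

$$G = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} \lvert x_i - x_j \rvert}{2 n^2 \bar{x}},$$

so it runs from 0 under perfect equality towards 1 when all income is held by a single person.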

By the beginning of the 20th century, or the end of the 19th, a set of technological changes, such as electricity, interchangeable parts that can be used in many different industrial processes, and production systems with a sequential layout (literally, the assembly line in Detroit, or batch-production machines, for example tobacco machines), results in the decline of very unskilled labour, with semi-skilled labour becoming complementary. These transformations in the labour market, I claim in the book, are associated with growing wages for everyone, with the distribution of income equalising, and with the formation of a class that in the British sociological literature was labelled the affluent worker: an affluent working class. Then, by the 1970s and 80s, a bunch of changes, mostly increasing computational capacity and a decline in computing costs, lead to a new transformation that I centre on Silicon Valley.

So everything in the book moves west: from Manchester to Detroit, from Detroit to Silicon Valley. All these changes in computation lead to the automation of basically routine jobs, and in a way also foster a new globalisation, a real globalisation, that brings a set of countries that were in the periphery of capitalism (China, the East Asian countries) into the world economy, with important effects on wages in Europe and North America. The complementary labour at this stage is high-skilled labour. And, as scholars such as David Autor have shown, there is a hollowing out of the labour market, a polarisation of the labour market, a decline in what were considered middle-class jobs, good jobs, in the past. So, growing inequality.

These changes have an impact on politics. In the 19th century, the incompatibility of democracy and capitalism led to very restrictive suffrage regimes. In the 20th century, after a period of strife, the two World Wars, and the crisis of the 1930s, what we see is a process of democratic consolidation, what sociologists and political scientists called the end-of-ideology period, and a centrist, pivotal politics, with parties competing for the median voter. This has in a way been replaced today by conflictual politics, by polarisation, and by what some call a possible real crisis of democracy and democratic backsliding. I'm going to focus the rest of the talk on this political side of the story, which occupies the last two chapters of the book.

Let me say that when we look at democracy, at least in advanced countries, which are where I mostly pay attention in the book, with some references to developing countries at the end of the book and at the end of this talk, what we observe is a process of, as I said before, depolarisation. Here I show where centre-left and centre-right parties, on average, were located in ideological left-right terms, using party manifestos, for which we have information available since at least the end of World War Two, the beginning of the Cold War period. What we see is a process, as I said, of depolarisation: both lines converge towards more moderate positions, the centre-right parties here in blue and the social democratic parties in red, and they become very similar, or relatively similar, on average by the late 1980s. But what is also very interesting about this graph is that, after that process of convergence, they remain basically stable and close to each other.

This comes in contrast with a process of political disaffection at the mass level. Here what I show is the proportion of people who respond that politicians care about what people like them, like the respondent, think. What we see is a decline of trust in politicians. The longest series is the US series, represented in black, for which we have data going back to the mid-1960s. At that time, about 60% of Americans thought that politicians cared about what people like them thought. There was a decline that coincided with the Vietnam War, a slight increase in the 1980s, and then a big fall, with a very short spike related to the Twin Towers attacks at the beginning of the 2000s. Today, only 20% of Americans think that politicians care about what they think. The same process happens in Germany (in blue), in France (in red), and in Britain, which was already at low levels in the 1970s; now only 10% of British voters think that politicians care about them, or at least that was the figure for 2014. It's not the same everywhere. Here I show a case where the trend goes in just the opposite direction: Finland, where trust is at about 40%, much higher than in any of the big countries. There are other small countries, like the Netherlands, that had similar levels of trust, at least as of 2015.

This process of political disaffection is correlated with growing abstention. Here I show, in green, the levels of abstention in the US excluding the South, which was a different case: abstention there was extremely high as a result of the exclusion of African Americans. Western Europe is in black. What we see is that abstention in Western Europe until the early 1970s was at about 15%, extremely low. At that time the study of abstention in the scholarly literature was a non-issue; it's very hard to find anyone who was interested in the topic. But today abstention in Western Europe stands at about 33%, not that different from the US on the East Coast and in the West.

Abstention is concentrated in particular sectors. Here I show three countries: Finland, for which we have real data, because they have a registration system that allows us to track all individuals, and survey data from France and the United Kingdom. I show the rate of abstention divided by cohort (young, middle, and senior) and then by income quintile: the top quintile, the middle quintile, and the bottom quintile. What we see is an extremely high level of abstention among young people, but also among those at low income levels. So the groups that have experienced economic shocks, and the young, who in some economies are more excluded from labour markets, are turning out much less.

When we put together the stability of party platforms I was showing before and this growing alienation of the population, the question that arises is a puzzle: why, given the degree of dissatisfaction that had been building up among voters for a long time, did European mainstream parties, the centre-right and centre-left parties in favour of the democratic capitalism consensus, not react? In part they did, but not as much as one might expect. And it's surprising, because when we look at support among the electorate, what we see is that, starting in the 1970s, the percentage of support for centre-right and centre-left parties, which had been at about 70% since the end of World War Two, started to decline. Today they together have the support of about 45% of the electorate, so it's a big decline. It corresponds with increasing abstention, of course. But the question is: why didn't these parties respond?

I think there are several explanations. It could be that fiscal constraints, such as a growing population of pensioners, and globalisation constrained mainstream parties from acting more strongly in response to this decline in trust. But I think part of it was also a question of electoral incentives. When we look at the proportion of votes that mainstream parties got in elections, basically until the crisis of 2007-2008 their level of support was very stable, at around 80%, and it has only been in the last 10 to 15 years that, as a proportion of voters, they have started to bleed votes. This has led to a changed political landscape.

So here, in a very stylised way, I represent how we may want to think about electoral politics during what I would call Detroit capitalism, or the golden age of capitalism, or, if you will, the post-war period. The graph has two dimensions. One is about compensation, so basically taxes, from low to high. The vertical dimension is globalism (trade, immigration), from being very much against globalisation, at zero, to being very much in favour of it. Then I represent the location of voters: in blue, more or less the location of middle-class voters, and in red, the location of working-class voters. These circles are not indifference curves; they just represent, more or less, where the bulk of the middle classes and the working classes were at that time. And then two parties, right and left, located close to the, let's say, median voter. Here the preferences of voters are only different, or heterogeneous, along the compensation dimension. Globalisation at that time was not that important; again, I'm talking about advanced democracies. It was not important because there was globalisation in the sense of high integration among advanced countries, but developing countries were not really competing with industrial workers in advanced countries. This has changed progressively. There has been a process whereby the middle classes have become more heterogeneous, with some of those voters moving towards embracing globalisation even more, and some less, and the working class, affected by all the transformations of automation and globalisation, moving towards a position that is protectionist, if you will. Here what I'm talking about is preferences; in a way these preferences have been framed, or structured, by narratives that have been constructed by politicians themselves.
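In symbols, a minimal sketch of the spatial framework being described here (my own reconstruction with standard quadratic loss, not the book's notation): each voter $i$ has an ideal point $x_i = (t_i, g_i)$ in the compensation-globalisation plane, evaluates a party platform $p$ by

$$u_i(p) = -\lVert x_i - p \rVert^2,$$

and votes for the nearer party. When ideal points differ only in the tax coordinate $t_i$, both parties converge towards the median voter's position on that single dimension; once the $g_i$ coordinates spread out, as just described, a second axis of competition opens up and convergence is no longer guaranteed.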

So in this new world, what we see is the growth of an alternative: instead of the old right and left of the past, the growth of this thing called populism, which is a term that I refuse to use without quote marks, because it's a difficult term to define. I would rather talk about these as anti-globalisation or nationalist movements. And in response to this movement that insists on the second dimension, trade and immigration, the response, I think, at least from the left, in order to get many of its voters back, seems to be a more polarised position. That would be a movement from L to L-prime; if we think about the US, Trump versus Sanders, or something like that. So that's how these economic transformations have transformed the politics of the advanced world. When we look at some data on votes for populist parties, here I show the proportion of people voting for populist parties in the mid-2000s, divided by income quintile. In the bottom quintiles, the proportion of voters voting for populist parties more than doubles, sometimes triples, the vote for populist parties in the top income quintile. Of course, this is mediated by electoral institutions: in proportional representation systems, populist parties have had a much easier life in terms of getting votes and becoming viable parties that the electorate will vote for. In countries like the UK, with a majoritarian system, parties like UKIP didn't do well in elections, but of course everything exploded around the Brexit referendum.

So let me now finish by talking a bit about what's next, which is the discussion that is all over the place. Here we find a division between techno-optimists and what we may want to call techno-pessimists. This is not a new discussion; we can go back to the 19th and 20th centuries and find both positions there. Keynes, as is well known from his essay "Economic Possibilities for our Grandchildren", has an optimistic position about the effects of technology: working three hours a day, we will probably have enough, and we will be able to do many other things, such as what Marx thought communism would bring, you know, hunting in the morning and fishing in the afternoon, or the other way around. We also have the pessimistic Marx, at least about capitalism, who thought that at the end of the day technological change would lead to monopolistic capital, to the immiseration of the working class, and to columns of workers going down into Manhattan and Silicon Valley, I suppose, to burn everything.

So what is my position? Well, really, I think we do not know the future. Here I'm showing a graph that I find, if not funny, at least entertaining and revealing. This is data from a paper by two authors, Armstrong and Sotala, who looked at papers and books with predictions about when artificial intelligence may replace human activity. What I plot here, in fact using their plot, is on the horizontal axis the year each paper was published, and on the vertical axis the year it predicted AI would replace human activity. And what we see is predictions all over the place. They go from, at the earliest, 2020, so papers published in 2010 thought that we would already have been replaced by now, to some that put the date at the end of this century. In fact, there are some authors, like the father of the singularity movement, Kurzweil, who have made predictions for different years over time. So if we do not know what may happen, the only thing I think we can do is to consider different scenarios, as defined by a set of parameters.

That's what I do at the end of the book. I consider four things that I think will determine where we are going to go. Three of them are mostly economic, if you will, and one is political. Those parameters are labour demand, the supply of labour, how concentrated capital may be, and the political responses to all of this, in the North but also in the South.

I will spend my last few minutes on political responses. My assumption throughout the discussion concerns what has changed since the 19th century. If you remember, the first graph I showed was about the evolution of inequality, from high, to declining, and now growing again. What differs today is that in the 19th century even the most advanced, industrial countries were poor in comparison with where we are now. A country like the US had a per capita income in 1870 of around $2,500 in 1990 constant dollars; today, per capita income is about $30,000. This change has to have an important impact on the room for manoeuvre we have to do many things that we could not have done two centuries, or 150 years, ago.

So what may be the impact of these technological changes? As I said at the beginning, there has been a lot of talk about the crisis of democracy and democratic backsliding. When we look at the data on the number of democratic breakdowns from 1800 until now, there were basically no democratic breakdowns in the 19th century, because there were no democracies. Then we see a lot of democratic breakdowns in the 1920s and 30s, and then again later in the 20th century. But when we look at today, and given that there are many more democracies, what we see, at least until 2015, which is the last year for which I have reliable data, is that democratic breakdowns have not gone up. That's an optimistic piece of information. What about the probability of democratic breakdown? Here I show the probability that any democracy breaks down at different levels of income; the horizontal axis is in thousands of dollars, so 5,000, 10,000, 15,000, 20,000, and the probability of democratic breakdown is calculated as the number of democratic breakdowns over the number of democracies, for different levels of income, using all the data we have for the last 200 years.

For poor countries, the probability of democratic breakdown in any given year is about 6%. For very rich countries, it's basically zero. So, in principle, when we look at economic development, more developed countries are more likely to be democratic, or at least, for those that are democratic, not to break down.
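In symbols, a minimal reconstruction of the estimator being described, assuming the underlying data are country-year observations grouped into income bins $k$ (the binning is an assumption; the talk only says "breakdowns over democracies"):

$$\hat{p}_k = \frac{\#\{\text{country-years in income bin } k \text{ in which a democracy breaks down}\}}{\#\{\text{country-years in income bin } k \text{ in which the country is a democracy}\}}$$

Plotting $\hat{p}_k$ against the midpoint of each income bin gives the downward-sloping curve described here, from roughly 0.06 at low incomes to roughly zero at high incomes.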

Many different explanations have been given for this. It could be that economic development, as a result of the declining marginal utility of income, makes the rich indifferent to the distributional mechanisms that come with democratic voting. It could be that economic development is correlated with a set of ideational commitments, such as toleration, that then lead to more willingness to have democracy. Or, and this is the explanation I prefer, without excluding the others: development has led to a different structure of the economy, which, at least until now, has produced more equality; that equality has then dampened social conflict and allowed democracy to flourish.

So basically, the argument, in quick form, runs like this. In the old times, before industrialization, most countries were governed by monarchy and nobility, what some would also refer to as stationary bandits. As a result of the way they governed, the distribution of wealth was unequal, and in that context democracy was impossible: those elites would block the introduction of representational mechanisms. Then, with industrialization, a new system appears, what we may want to call open economies, with free markets and with technologies that equalised growth: 20th-century capitalism. Here we look at the distribution of [inaudible] country-years, for those cases for which we have information, with the vertical axis being the Gini index and the horizontal axis being income.

In the 20th century, what we see is a mass of countries where, among poor countries, there is a lot of heterogeneity in levels of inequality, but at high levels of development, much lower levels of inequality on average. That, I think, is the story of the 19th century and especially of late 20th-century capitalism. And so of course the question, which I think is in the back of our minds, is: what if these open economies, industrial capitalism as we know it, free markets, generate increasingly unequal economies? What are the chances of democracy in a context of, yes, much higher levels of prosperity, but also much more inequality? Now, I think it's difficult to know what will happen. It may well be that this inequality leads to elite capture, to democratic backsliding, and to mass resentment, basically undermining the foundations of democracy. On the other hand, and here I want to be optimistic, or at least sound optimistic: provided the allocation of returns is thought to be fair, so that markets function in a way that does not give an advantage to those that already have more, and given our already high income levels, it may result in democratic stability.

What do I mean by that? Well, we know that in lab settings participants generally allocate resources in equal shares, but we also know that they distribute them unequally when, in these lab settings, people are engaged in tasks that require effort. We also know from observational survey data that people find inequality unacceptable when it is at high levels, but that some inequality is acceptable. So if we maintain a system that is fair, democracy, even if there is inequality, may well survive. I think that in the past, meaning before the 19th or the 20th century, democracy was impossible because, besides most of the population being poor, the distribution of assets was extremely unfair. The second part of the argument is that because we have high incomes, this should allow us to develop institutions to respond to shocks, to compensate losers, and to make sure people are helped to adjust to these new technologies. Now, for this to happen, for democracy to be of quality and not to disappear, we need institutional reforms, in my opinion, to prevent an excessive concentration of economic and political power and to avoid crony capitalism. Here I think that some democratic accountability mechanisms, such as, of course, campaign finance reform in the US, but also the introduction of ideas such as the electoral voucher, should help. I also think that breaking up large companies, and I don't want to get into the Zuckerberg-Musk dispute, but making sure that those that have become very advantaged, or have benefited from these technological changes, are regulated and perhaps divided into several companies, should help as well.

A third thing, which I think is not discussed that much, is that electoral participation matters. The fact that a lot of people are not turning out, and that these abstainers are basically low-income and young people, changes the median voter, changes the median representative, and has an impact on policy. So changes in participation are fundamental; mobilisation may be fundamental to reversing the things that are not good for democracy.

And finally, a fourth thing, which is country size. I think that small countries, perhaps because of the tight relationship between representatives and voters, are better positioned to respond to the excessive concentration of economic and political power in the capitalism of today. This is something that I just suggest; I'm not completely sure, but I think it should be investigated a bit more by all of us.

So let me finish by saying that what I challenge in the book is the standard story. The standard story says that modernization led to the end of history. Instead, I suggest that we should think about capitalism as encompassing different technologies, with different effects on employment and wages, and that these may explain the kinds of political honeymoons and political moments of conflict that we have seen in the last 200 years. So thank you so much.





21May

Michael C. Horowitz on When Speed Kills: Autonomous Weapon Systems, Deterrence, and Stability


In this talk, Michael draws on classic research in security studies and examples from military history to assess how autonomous weapon systems (AWS) could influence two outcome areas: the development and deployment of systems, including arms races, and the stability of deterrence, including strategic stability, the risk of crisis instability, and wartime escalation. He focuses on these questions through the lens of two characteristics of AWS: the potential for increased operational speed and the potential for decreased human control over battlefield choices.

You can watch the full talk here




