We were joined by Sam Altman and William G. Gale for the first GovAI seminar of 2022. Sam and William discussed Sam’s blog post ‘Moore’s Law for Everything’ and tax policy options for a world of advanced AI.
You can watch a recording of the event here and read the transcript below.
Anton Korinek 0:00
Hello, I’m Anton Korinek. I’m the economics lead of the Centre for the Governance of AI, which is organizing this event, and I’m also a Rubenstein Fellow at Brookings and a Professor of Economics and Business Administration at the University of Virginia.
Welcome to our first seminar of 2022! This event is part of a series put on by the Centre for the Governance of AI dedicated to understanding the long-term risks and opportunities posed by AI. Future seminars in the series will feature Paul Scharre on the long-term security implications of AI and Holden Karnofsky on the possibility that we are living in the most important century.
It’s an honour and a true pleasure to welcome our two distinguished guests for today’s event, Sam Altman of OpenAI and Bill Gale of Brookings. Our topic for today is [ensuring] shared prosperity in an age of transformative advances in AI. This is a question of very general interest. I personally view it as the most important economic question of our century. What made us particularly excited to invite Sam for this is a blog post that he published last year, entitled “Moore’s Law for Everything,” to which we have linked from the event page. In his post, Sam describes the economic and social challenges that we will face if advanced AI pushes the price of many kinds of labour towards zero.
The goal of today’s webinar is to have a conversation between experts on technology and on public policy, represented here by Sam and by Bill, because the two fields are often too far apart, and we believe this is a really important conversation. Sometimes technologists and public policy experts even speak two different languages. For example, we will be talking about AGI in today’s webinar, and that has two very different meanings in the two fields. In public policy, specifically in US tax policy, AGI is “Adjusted Gross Income” and is used to calculate federal income taxes. In technology circles, AGI means “Artificial General Intelligence.” To be honest, I have mixed feelings about both forms of AGI.
I want to start our event today with Sam to hear about the transformative potential of AI. But let me first introduce Sam. Sam is the CEO of OpenAI, which he co-founded in 2015, and which is one of the leading AI companies focused on the development of AGI that benefits all of humanity. Sam is also a former president of Y Combinator.
Sam, OpenAI has made its name by defining the cutting edge of large language models. To give our conversation a technical grounding, can you tell us about your vision for how we will get from where we are now to something that people will recognize as human-level AI or AGI? And are you perhaps willing to speculate on a timeline?
Sam Altman 3:18
First of all, thanks for having me. I am excited to get to talk about the economic and policy implications of [AI], as we spend most of our time really thinking hard about technology and the very long-term future. The short and medium-term challenges to society are going to be immense. It’s nice to have a forum of smart people to talk about that. We’re looking for as many good ideas here as we can find.
There are people who think that if you just continue to scale up large language models, you will get AGI (not the tax version of “AGI”!). We don’t think that is the most likely path to get there. But certainly, I think we will get closer to AGI as we create models that can do more: models that can work with different modalities, learn, operate over long time horizons, accomplish complex goals, pick the data they need to train on to do the things that a human would do, read books about a specific area of interest, experiment, or call a smart friend. I think that’s going to bring us closer to something that feels like an AGI.
I don’t think it will be an all-at-once moment; I don’t think we’re gonna have this one day of [AGI] takeoff or one week of takeoff. But I do expect it to be an accelerating process. At OpenAI we believe continuous [AGI] deployment and a roughly constant rate of change is better for the world and the best way to steward AGI’s [emergence]. People should not wake up [one morning] and say, “Well, I had no idea this was coming, but now there’s an AGI.” [Rather], we would like [AGI development] to be a continuous arc where society, institutions, policy, economics, and people have time to adapt. And importantly, we can learn along the way what the problems are, and how to align [AI] systems, well in advance of having something that would be recognized as an AGI.
I don’t think our current techniques will scale without new ideas. But I think there will be new research, [work at a] larger scale, and complex systems-integration engineering, and there will be this ongoing feedback loop with society. Societal inputs and the infrastructure that creates and trains these models, all of that together will at some point become recognizable as an AGI. I’m not sure of a timeline, but I do think this will be the most important century.
Anton Korinek 6:03
Thank you. Let’s turn towards the economic implications. What do you view as the implications of technological advances towards AGI for our economy?
Sam Altman 6:18
It’s always hard to predict the details here. But at the highest level, I expect the price of intelligence—how much one pays for the [completion of] a task which requires a lot of thinking or intellectual labour—to come way down. That affects a lot of other things throughout the system. At present, there’s a level [of the complexity of tasks] that no one person—or group of people that can coordinate well—is smart enough to [perform], and there are a whole lot of things that [currently] don’t happen. And as these [tasks can be performed using AI], it will have a ton of positive implications for people.
It will also have enormous implications for the wages for cognitive labour.
Anton Korinek 7:10
You titled your blog post on this topic, “Moore’s Law for Everything.” Could you perhaps expand a little bit on what “Moore’s Law for Everything” means to you?
Sam Altman 7:31
The cost of intelligence will fall by half every two years—or 18 months, [depending on which version] of Moore’s law you consult. I think that’s a good thing. Compound growth is an incredibly powerful force in the universe that almost all of us underestimate, even those of us who think we understand how important it is. [This intelligence curve] is equally powerful: the idea that we can have twice as much of the things we value every two years. This will [allow for not just] quantitative jumps but also qualitative ones, [which are] things that just weren’t possible before. I think that’s great.
If we look at the last several decades in the US, [we can] think about what wonderful things have been accomplished by the original [version of] Moore’s law. Think about how happy we are to have that. I was just thinking today about what the pandemic would have been like if we all didn’t have these massively powerful computers and phones that so much of the world can really depend on. That’s just one little example. Contrast that with industries that have had runaway cost disease and how we feel about those.
We must embrace this idea that AI can deliver a technological miracle, a revolution on the order of the biggest technological revolutions we’ve ever had. [This revolution is] how society gets much better. [However], the challenges society faces at this moment feel quite huge. People are understandably quite unhappy about a lot of things. I think lack of growth underlies a lot of those, and if we can restore that [growth, through] AI cutting the cost of cognitive labour, we can all have a great rate of progress, and a lot of the challenges in the world can get much better.
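To make the compounding Sam describes concrete, here is a minimal sketch of the halving curve. The $100 starting cost and 20-year horizon are illustrative assumptions, not figures from the essay:

```python
# Illustrative only: compound halving of a hypothetical "cost of intelligence".
# The $100 starting cost and 20-year horizon are assumptions for illustration.
initial_cost = 100.0   # dollars per unit of cognitive work today (hypothetical)
halving_years = 2      # the "every two years" version of the curve

for year in range(0, 21, 2):
    cost = initial_cost * 0.5 ** (year / halving_years)
    print(f"year {year:2d}: ${cost:9.4f} per unit")
# After 20 years of halving every two years, the cost is 1/1024 of today's --
# the same compounding that took computing from mainframes to phones.
```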
Anton Korinek 9:30
This is really fascinating. Cutting the cost of everything by 50% every two years, or doubling the size of the economy every two years, no matter which way we put it, is a radical change from the growth rates that we face today.
Sam Altman 9:48
Society doesn’t do a good job with radical ideas anymore. We don’t think about them. We no longer kind of seem to believe they’re possible. But sometimes, technology is just powerful [enough] and we get [radical change] anyway.
Anton Korinek 10:00
Some people assert that once we have systems that reach the level of AGI, there will be no jobs left whatsoever, because AI systems will be able to do everything better: they will be better academics, better policy experts, and even better CEOs. What do you think about this view? Do you think there will be any jobs left? If so, what kinds of jobs would they be?
Sam Altman 10:25
I think there will be new kinds of jobs. There will be a big class of [areas] where people want the connection to another human. And I think that will happen. We’re seeing the things that people do when they have way more money than they need or can spend, and they still want to buy status. I believe the human desire for status is unlimited. NFTs are a fascinating case study, and we can see more things [headed] in that direction. [However], it’s hard to sit here and predict the jobs on the other side of the revolution. But I think we can [observe] some things about human nature that will help us [predict] what they might be.
It’s always been a bad [prediction], always wrong, to say that after a technological revolution there will be no jobs. Jobs just look very different on the other side. [However], in this case, I expect jobs to look the most different of any of the technological revolutions we’ve seen so far. Our cognitive capabilities are such a big part of what makes us human—they are the most remarkable of all of our capabilities—and if [cognitive labour] gets done by technology, then it is different. [But], I think, we’ll find new jobs which will feel really important to the people in the future (and will seem quite silly and frivolous to us, in some cases). But there’s a big universe out there, and we or our descendants are going to hopefully go off and explore that, and there are going to be a lot of new things in that process.
Anton Korinek 11:58
That’s very interesting.
I’ll move to the realm of public policy now. One of the fundamental principles of economics is that technology determines how much we can produce, but that our institutions determine how this is distributed. You wrote that a stable economic system requires growth and inclusivity. I imagine growth will emerge naturally if your technological predictions materialize. But what policies do you advocate to make that growth inclusive?
Sam Altman 12:31
Make everybody an owner. I am not a believer in paternalistic institutions deciding what people need. I think [these systems] end up being wasteful and bureaucratic [along with being] mostly wrong about how to allocate [gains]. I also do not believe [we can maintain a] long-term, successful, capitalist society in which most people don’t own part of the upside.
[However], I am not an economist and even less a public policy expert, so I think the part you should take seriously about the Moore’s Law essay is the technological predictions, which I think [may be] better than average, while my economic and policy predictions are probably bad. I meant [these predictions to serve] as a starting point for the conversation and as a best guess at predicting where things will go. But I’m well out of my depth.
I feel confident that we need a society where everyone feels like an owner. The forces of technology are naturally going to push against that, which I think is already happening. In the US, something like half of the country owns no equities (or land). I think that is really bad. A version of the policy I would like is that, rather than having increasingly sclerotic institutions that I think have a harder time keeping up—given the rate of change and complexity in society, [because] they say we’ll have one program and [will then change it to another and then another]—we must find a way to say, “Here’s how we’re going to redistribute some amount of ownership in the things that matter, so everyone can participate in the updraft in society.”
Anton Korinek 14:34
That’s very thought-provoking, to redistribute ownership, as opposed to just redistributing the output itself. Now, before we turn over the discussion to Bill, let me ask you one more question: what do you think about the political feasibility of proposals like redistributing ownership? Let me make it more concrete: what could we do now to make a solution like what you are describing politically feasible?
Sam Altman 15:06
I feel so deeply out of my depth there that I hesitate to even hazard a guess. But it seems to me the Overton Window is expanding and could expand a lot more. I think people are ready for a real change. Things are not working that well for a lot of people. I certainly don’t remember people being this unhappy in the US in my lifetime, but maybe I’m just getting old and bitter.
Anton Korinek 15:40
That’s a very honest thing to say. I’m afraid none of us is a real expert on all these changes, because as you say, they are so radical that they are hard to conceive of and [it is] hard to imagine what they will lead to.
Thank you for this fascinating initial conversation, Sam.
Now I’ll turn it over to Bill. Bill is the Miller Chair in Federal Economic Policy and a Senior Fellow at Brookings. He is an expert on tax policy and fiscal policy and a co-director of the Urban-Brookings Tax Policy Center. Bill has also been my colleague at Brookings for the past half year, and I’ve had the pleasure to discuss some of these themes with him.
Bill, Sam has predicted that Moore’s law will hold for everything if AGI is developed, and just to be clear, I mean Artificial General Intelligence. Now, economists have long emphasized that there is a second force that runs counter to Moore’s law, one which has slowed down overall productivity increases even though we have had all these fabulous technological advances in so many areas since the onset of the Industrial Revolution. That second force is Baumol’s cost disease. Can you explain a little bit more about this second force? And what would it take to neutralize it, so that Moore’s law can truly apply to everything?
William Gale 17:12
Thank you. It’s a pleasure to be here and to read the very stimulating proposal that Sam put forward. I am not an AI expert but I’ve read a few pieces about it in the last week. I think there’s huge potential for tax issues here so I’m very excited about being part of this discussion.
Let me answer your question in three parts. The first part is that generally, economists think technological change is a good thing: it makes labour more productive. There are adjustment costs, and if we got things right we would compensate people [for these], though we normally don’t. In the long run, technological change has been not just a good but a fantastic thing. We’ve had 250 years of technological change. Over that time, the economy has steadily grown and people have steadily been employed.
Then AI comes along. What’s different about AI relative to other technology? The answer, I think, is the proposed, or expected, speed of the adjustment, in scale and in scope. For example, if [the shift to] driverless cars took place instantly, and all the people that were involved—the Uber and Lyft drivers—lost their jobs overnight, that would be very economically disruptive. [However], if that shift happens gradually, over the course of many years, then those people cycle out of those jobs. They [can] look for new jobs, and they’re employed in new sectors. The speed with which AI is being discussed, or the speed of the effects that AI might have, is actually a concern. From a societal perspective, I think it’s possible that technological change could go too fast, even though generally we want to increase the rate of technological change. We should be careful what we wish for.
Baumol’s cost disease applies to things like—just to give an extreme example—the technology for giving haircuts, which probably hasn’t changed in the last 300 years: it still takes the same amount of time, and [barbers] use the same [tools]. It’s hard to see the productivity of that doubling every couple of years. Of course, that’s a silly example. But in industries like healthcare and education, which have a lot of labour input and a lot of human contact, you might think that Moore’s law wouldn’t come into effect as fast as it would in the computer industry, for example. I’m not reading Sam’s paper literally—that the productivity of everything is going to double in two years—but if the productivity of 50 percent of the economy doubled every four years, I think [Sam] would claim a victory, [because] that would be a massive amount of change. There are forces pulling against the speed of technological change. And to me, as an economist, the question is: how different is AI from all the other technological changes we’ve had since the Industrial Revolution?
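Bill’s hypothetical can be put in numbers. A minimal two-sector sketch, assuming his 50/50 split, with half the economy doubling its productivity every four years and the other half stagnant (all figures illustrative):

```python
# Toy two-sector Baumol illustration: half the economy doubles its
# productivity every four years, the other half does not improve at all.
tech_output, stagnant_output = 0.5, 0.5   # equal shares of output today
tech_growth = 2 ** (1 / 4) - 1            # ~18.9%/year, a doubling in 4 years

for year in range(1, 5):
    tech_output *= 1 + tech_growth
    total = tech_output + stagnant_output
    print(f"year {year}: total output index = {total:.3f}")
# After 4 years the tech half has doubled (0.5 -> 1.0), but the aggregate
# has grown only ~50% -- the stagnant sector drags down the average, which
# is why economy-wide Moore's law is much harder than chip-level Moore's law.
```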
Anton Korinek 20:24
Thank you, Bill. Now, let’s turn to public policy responses if AGI is developed. If the price of intelligence—as Sam is saying—and therefore the price of many, maybe most, forms of labor converges towards zero, what is our arsenal of policy responses to this type of AGI? And let’s look at this question in both the realm of fiscal spending and in the realm of taxation.
William Gale 20:55
If labour income goes to zero, I don’t know. That’s just a totally different world and we would need to rethink a lot. But if we’re simply moving in that direction faster, then there are a couple of things to be said on the tax and the spending side. I want to highlight something Sam wrote about the income tax: that it would be a bad instrument to load up on in this case. [Sam] makes the point, in his paper, that the income tax has been moving toward labour and away from capital. [Along with the] payroll tax, loading up on the income tax means more tax on labour. It’s a subtle point that is not even well understood in the tax world, but I think [Sam is] exactly right: loading new revenues onto the income tax means loading them onto the labour portion of income.
On the spending side, there’s a variety of things that people suggest, some to accelerate the change, like giving people new education and training, and some to cushion the change, like a universal basic income, as Sam wrote in his paper. I actually am much more sympathetic to UBI than most economists are, and not as a replacement for existing subsidies, but as a supplement, even before considering the potential downside of the AI revolution.
The other thing we could do—which sounds good in theory but is harder to implement [in practice]—is to have a job guarantee. With a federal job guarantee, the whole thing depends on what wage we guarantee people. If you guarantee a job at $7.25 an hour, that’s a very different proposal than guaranteeing a job at $15 per hour. Lastly, the classic economic solution to this is wage subsidies. Think of the earned income tax credit, and scale it up massively. For example, somebody making $10 an hour would get a supplement from the government [taking them to] two times that. This is a little loose, but it’s essentially a UBI for people that are working. Our welfare system tends to focus on benefits for the working poor; it does not provide good benefits for the non-working poor. So some combination of UBI and wage subsidy could cushion a lot of people and give incentives to work.
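Since Bill flags his example as loose, any sketch of it is loose too. One possible reading, assuming a 100% wage match at the bottom that phases out at higher wages (all thresholds hypothetical):

```python
# Hypothetical wage subsidy in the spirit of a massively scaled-up EITC.
# Assumption: a 100% match on low wages ($10/hr becomes $20/hr), shrinking
# linearly to zero by $25/hr so higher earners keep an incentive to work.
MATCH_RATE = 1.0
PHASE_OUT_START = 10.0   # full match up to this hourly wage (assumed)
PHASE_OUT_END = 25.0     # subsidy fully gone by this wage (assumed)

def subsidy(wage: float) -> float:
    if wage <= PHASE_OUT_START:
        return wage * MATCH_RATE
    if wage >= PHASE_OUT_END:
        return 0.0
    remaining = (PHASE_OUT_END - wage) / (PHASE_OUT_END - PHASE_OUT_START)
    return PHASE_OUT_START * MATCH_RATE * remaining

for wage in (7.25, 10.0, 15.0, 20.0, 25.0):
    print(f"market wage ${wage:5.2f}/hr -> take-home ${wage + subsidy(wage):5.2f}/hr")
```

Take-home pay rises with the market wage at every point, which is the “incentives to work” property Bill contrasts with an unconditional benefit.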
Anton Korinek 23:29
Let me focus on the tax side. Can you tell us more about what the menu of tax instruments is that we would be left with if we don’t want to tax labour? And how would they compare?
William Gale 23:43
Sure. The obvious candidate is the wealth tax. Sam has proposed a variant of a wealth tax, which is, literally, where the money is. Wealth taxes have administrability issues. Sometimes a good substitute for a wealth tax is a consumption tax, which comes in different forms. Presumably, if people are generating wealth, it’s because they want to consume the money. If they just want to save and create a dynasty, a consumption tax doesn’t [tax] that. You can design [taxes], especially in combination with a universal basic income, that on net hit high-income households very hard and actually subsidize low-income households. A paper that I wrote a couple of years ago showed that a value-added tax and a UBI can [produce] results that are more progressive than the income tax itself.
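The arithmetic behind that pairing is simple enough to sketch. A toy example, with an assumed 10% VAT and a $5,000 UBI (the rates and household figures are made up, not taken from Bill’s paper):

```python
# Toy illustration of why a flat VAT plus a flat UBI is net progressive.
# The 10% rate, $5,000 UBI, and household consumption levels are all assumed.
VAT_RATE = 0.10
UBI = 5_000

households = {              # annual consumption in dollars (hypothetical)
    "low income": 20_000,
    "middle income": 60_000,
    "high income": 200_000,
}

for name, consumption in households.items():
    net_tax = VAT_RATE * consumption - UBI
    print(f"{name:13s}: net tax ${net_tax:8,.0f} "
          f"({net_tax / consumption:+.1%} of consumption)")
# The low-income household receives a net subsidy (-15.0%), while the
# high-income household pays a positive net rate (+7.5%), even though the
# VAT rate itself is flat.
```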
Anton Korinek 24:44
Now, let’s turn to a few specific proposals. For example, Bill Gates has advocated a robot tax. Sam has proposed the American Equity Fund and a substantial land tax in his blog post “Moore’s Law for Everything.” What is your assessment of these proposals, especially proposals in the realm of capital taxation? What would you propose if AGI were developed and you were tasked with reforming our tax and spending policies?
William Gale 25:17
Both the Gates proposal and the Altman proposal are motivated by good thoughts. The difference is that I think the Gates proposal is a really bad idea. I don’t understand what a robot is. If it’s any labour-saving technology, then your washing machine is a robot, your PC is a robot, and your operating system—your Microsoft Windows—is a robot. I don’t think we want to tax labour-saving technology in particular, because most of our policies go toward subsidizing investment. Turning around and then taxing [this same investment] is counterproductive and would create complicated incentives, so I don’t think it’s a good idea.
I love the spirit behind Sam’s wealth tax proposal. I love the idea of making everybody an owner. The issue with wealth taxes is the ability to administer them. For example, if you taxed public corporations, private businesses, and land, [then] people [would] move their money into bank accounts and into gold, art, and yachts. This is the wealth tax debate in a nutshell. Some people say, “You should tax all wealth.” Then [other] people [say], “Well, you can’t tax all wealth, because how are you going to come up with the value of things not exchanged in markets? How are you going to do that every year for every person?” The “throw-your-hands-up-in-the-air” answer is: “Well, we’re just not going to tax it,” and that’s a wealth tax where you just erode some of the base.
Sam is arguing there are certain components of wealth that we can tax, [namely] corporate market value and land, which are two good targets. My nerdy, wonky tax concerns are in the weeds about the administrability of the tax and the amount of avoidance and tax shifting it would cause. [However], I really like the general idea of saying, “Here are these changes. They’re going to displace some people and greatly benefit other people.” Let’s use—as you’ve said, Anton—the institutions that we have to offset some of these changes and share the wealth, so that everyone can be better off from AI rather than AI causing the immiseration of a substantial share of the population.
Sam Altman 27:51
A point that I cut from the essay, but that I meant to make more clearly, is that a wealth tax on people would be great but is too difficult to administer. A very nice thing about a wealth tax on companies, instead of on people, is that [companies have] a share price. Everyone is aligned, [because] everyone wants [the share price] to go up. It’s easy to take some [of it]. I think [this tax] would be powerful and great.
[Though] I think it didn’t come through, part of my hope with the proposal [was to emphasize] that there are two classes of things that are going to matter more than anything else. Sure, people can buy yachts and art, but it’s not clear to me why stashing away a billion dollars and not spending [that money] matters; [it just means] you took [money] out of the economy and made everybody else’s dollar worth more. If you want to buy a boat, that’s fine, but [the boat] you buy won’t go up in value, it won’t compound, and it won’t create a runaway effect. The design goal was administrability and [a focus] on where big wealth generation [will occur] in the future, which I think is [in] these two areas while trading off perfect fairness in taxing [goods like] art and boats.
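The mechanics of a tax paid in shares can be sketched directly. A minimal sketch, assuming the roughly 2.5% annual rate on market value that one reading of the essay suggests (the rate and horizon are assumptions, not quotes):

```python
# Minimal sketch of a company-level market-value tax paid in newly issued
# shares, in the spirit of the American Equity Fund. The 2.5% annual rate
# follows one reading of "Moore's Law for Everything"; treat it as assumed.
TAX_RATE = 0.025
YEARS = 10

fund_share = 0.0   # fraction of the company held by the public fund
for year in range(1, YEARS + 1):
    # issuing new shares worth 2.5% of market value dilutes every existing
    # holder (including the fund's earlier shares) by 1 / (1 + TAX_RATE)
    fund_share = (fund_share + TAX_RATE) / (1 + TAX_RATE)
    print(f"year {year:2d}: public fund holds {fund_share:.1%}")
# After 10 years the fund holds about 22% of the company. Because payment
# is in shares rather than cash, citizens share the upside whenever the
# share price rises, which is the alignment Sam describes.
```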
Anton Korinek 29:04
It seems we have alignment on this notion that some sort of wealth taxes are very desirable, [but] that there are difficulties for certain classes of wealth. Bill, would you like to add anything more to this?
William Gale 29:17
Oh, no. Let’s have a discussion. I think there are a lot of interesting issues raised and I’d be happy to respond to or clarify anything that I said earlier.
Anton Korinek 29:28
Great. Let’s continue with the Q&A. Please raise your virtual hand and get ready to unmute yourself if you have questions that you would like to pose to Sam or Bill. Also feel free to add your question into the regular chat if you prefer that I read it out.
I see we have a question from Robert.
Robert Long 29:57
My question is [on] the administration side, [and I ask it], as an outsider, to both of you. In his piece, Sam writes, “There will obviously be an incentive for companies to escape the American Equity Fund Tax by offshoring themselves. But a simple test involving a percentage of revenue derived from America could address this concern.” As someone who doesn’t know about taxes, this section confused me. It wasn’t clear to me how that would be simple or how that would work. I am looking for more detail from either of you about how Sam’s proposal could work. For example, what percentage of Alphabet’s revenue is derived from America, and how do we calculate that? Thanks.
Sam Altman 30:40
I think companies have to report how much of their revenue [comes] from different geographies. The hope is that eventually, the whole world realizes this [system] is a good idea and we agree on a global number, so that every tax is at the same rate and there’s no reason to move around. But [this vision] is probably a pipe dream, and there will be at least one jurisdiction [that] says, “Come here.”
William Gale 31:54
I wrote a 280-page book on tax and fiscal policy and presented it to a bunch of tax economists. Every question I got was on the administrability of the estate tax reforms I proposed. [Though tax economists] can drill down into the details, I don’t want to do that here. [Instead], I want to [focus on] the big picture. The [key] idea is that the tax is essentially on market value; it could be paid in shares or in [cash]. The international aspects of it—I want to emphasize—are solvable. In the US’s current corporate tax, we tax foreign corporations that do business in the US on their US income.
Someone just [wrote] in [with] a comment about formulary apportionment. Again, this has the feature—like this proposal—that it doesn’t let the perfect be the enemy of the good. [Much of] the time in tax policy, people shoot for the perfect policy, [but] they never get it, and as a result they end up with not even a good policy.
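Robert asked how a revenue-based test would actually be computed. A minimal sketch of formulary apportionment by revenue, using a hypothetical company (these are made-up figures, not Alphabet’s):

```python
# Hedged sketch of formulary apportionment by revenue: the US collects only
# the share of the liability matching the company's US revenue share.
# The company, its revenue split, and its market value are all hypothetical.
revenue_by_region = {        # annual revenue in $ billions (made up)
    "United States": 120.0,
    "Europe": 60.0,
    "Rest of world": 70.0,
}
market_value = 1_500.0       # $ billions (made up)
TAX_RATE = 0.025             # assumed annual rate on market value

us_share = revenue_by_region["United States"] / sum(revenue_by_region.values())
total_liability = market_value * TAX_RATE
print(f"US revenue share: {us_share:.1%}")
print(f"US-apportioned liability: ${total_liability * us_share:.1f}B "
      f"of ${total_liability:.1f}B")
# Because the apportionment key is where revenue is earned, moving the
# headquarters offshore would not change the US liability.
```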
Anton Korinek 34:09
Thank you. Let’s continue with Daniel followed by Markus and Jenny.
Daniel Susskind 34:20
Terrific. Thank you, Anton. A real pleasure to be with everyone this evening. You have spoken about the distribution problem: how to best share the prosperity created by new technologies. I’d be really interested in your thoughts on a different problem, which is what I call the “contribution problem.” It seems to me that today, social solidarity comes from a feeling that everybody is pulling their economic weight [through] the work that they do and the taxes that they pay. And if people aren’t working, there’s an expectation that they ought to go actively look for work, if they’re willing and able to do so. One of my worries about universal basic income or universal basic assets—where everybody might have a stake in the productive assets in the economy—is that it undermines that sense of social solidarity, [because] some people might not be paying into the collective pot through the work that they do. I’m interested to [hear] your reflections on that contribution problem. It seems to me that economists spend a lot of time thinking about distributive justice—about what a fair way to share our income in society is. But we don’t spend enough time thinking about contributive justice, about how we provide everyone with an opportunity to contribute in society and to be seen to be contributing. [Mechanisms] like universal basic income and universal basic assets don’t engage with the contribution problem.
Sam Altman 35:07
First, I strongly agree with the problem framing. I think that universal basic income is only half the solution and “universal basic meaning” or “[universal basic] fulfillment” is equally—or almost more—important. My hope is that the tools of AI will empower people [to create] incredible new art, immersive experiences, games, and useful things for others. People will love that. It just might not look like work in the way that we think of it.
To get the AGI we want—and [to] make governance decisions about that—we’ll want mass participation. We’ll want tons of people helping to train the AI and thinking about expressing the values we want the AI to incorporate. I hope that this idea—that we are racing toward [AGI] and a set of decisions to make in its creation—will be one of the most exciting projects that humanity ever gets to do and that it will significantly impact the course of the universe.
This is the most important century in that there’s a mission for humanity that can unite us, that all of us can to some degree participate in, and that people really want to figure out how to do. I am hopeful that [this mission] will be a kind of Grand Challenge for humanity and a great uniting force. But I also think [people will engage in] more basic [activities] where they create and make value for their communities.
William Gale 37:41
Questions about the value of work and the conditionality of work have long dominated American politics. We have a long history of supporting the working poor as much as other countries, but [we help] the non-working poor less. Having said that, there’s a great concern that UBI would cause people to drop out, relax all day, and listen to music. The studies don’t suggest that would happen. There may be a small reduction in the labour supply. My interpretation of these studies is that the humane aspects of a UBI—like making sure people have enough to eat and that they have shelter—totally dominate the—what appear to be—modest disincentive effects on the labour supply of the recipients.
If we were really concerned [about this disincentive], we could combine a UBI with a job guarantee. This would be a little more Orwellian, because a job requirement [restricts the benefit from being] universal. Alternatively, we could provide a wage subsidy, which is like a UBI for working people. Ultimately this is not black and white, and we’ll find some balance between those two. I feel we do a bad job [supporting] the extremely disadvantaged, which is why I favour a UBI, but coupling it with a wage subsidy would help pull people into the labour force.
Anton Korinek 38:27
Thank you. Markus?
Markus Anderljung 38:29
I had a question for Sam. Sam, you said something that didn’t fit with what I expected you to believe. You said, “Even in a world where we have AGI, people will still have jobs.” Do you mean that we’ll have jobs in the sense that we’ll still have things that we do, even though we won’t contribute economic value, because there’ll be an AI system that could just do whatever the human did? Or is it the case that humans will be able to provide things that the AI system cannot provide, for example, because people want to engage with human service providers [rather than with AI ones]?
Sam Altman 39:18
I meant the former: [that future jobs won’t be] recognisable as jobs of today and will seem frivolous. But I also think that many things we do today wouldn’t have seemed like important jobs to people from hundreds of years ago. I also think there will still be a huge value placed on human-made [products]; [even today] we prefer handmade to machine-made [goods]. There’s something we like about that. Why do we value a classic car or a piece of art when there could be a perfect replica made? It’s because it’s real and someone made it.
In the very long-term future, when we have, [for example], created self-aware superintelligence that is exploring the entire universe at some significant fraction of the speed of light, all bets are off. It’s really hard to make super confident predictions about what jobs look like at that point. But in worlds of even extremely powerful AI, where the world looks somewhat like it does today, I think we’re going to find many things to do that people value, but they won’t be what we would consider economically necessary from our limited vantage point of today.
Anton Korinek 41:15
Thank you, Sam. Jenny?
Jenny Xiao 41:19
I have a question [about] the global implications of AGI and redistribution. The focus you have is on the US and developed countries. If the US or another developed country develops AGI, what would be the economic consequences for other countries that are less technologically capable? Would there be redistribution from the US and Western Europe to Africa or Latin America? And in this scenario, would it be possible for [less developed countries] to catch up? [Many] of the developing countries today caught up through cheap labour. But [according to] Moore’s Law for intelligence, developed countries will no longer need [cheap labour].
Sam Altman 42:08
I think we will eventually need a global solution. There’s no way around that. We can [perhaps] prototype [policies] in the US, [though I] feel strange [telling] other countries [which policies to adopt]. I think we’ll need a very global solution here.
Anton Korinek 42:33
Bill, would you like to add anything to that?
William Gale 42:38
In the interim, as we’re approaching a global solution, if the US or Europe is ahead of the game, that’s going to result in resource transfers to, rather than away from, the US and Europe. Accelerating technological change means that once [a country advances] farther down the curve than everyone else, [they’re] going to continue down that curve and [more rapidly] increase the gap over time. Anton’s question about that is particularly relevant in an international context.
Anton Korinek 43:19
Thank you. Katya?
Katya Klinova 43:22
Thank you. I’m Katya. I run the Shared Prosperity Initiative at the Partnership on AI. Thank you all for hosting this frank and open conversation. My question builds on Jenny’s: in today’s world, in which a global solution for redistributing ownership doesn’t exist and might not exist for a long [time], what can AI companies meaningfully do to soften the economic [blow] from AI and tackle the medium-term challenge that Sam acknowledged [at the beginning]?
Sam Altman 43:57
While this is low confidence, my first instinct is that countries with low wages today will actually benefit from AI the most. What we don’t like about this vision of AI is not that things will get much cheaper; [what] we don’t like is that wages will fall. I think AI should be naturally beneficial to the poor parts of the world. But I haven’t thought about that in depth and I could be totally wrong.
Anton Korinek 45:03
Thank you. We have another question from Phil. And let me invite anybody else who has questions to raise their hand.
Phil Trammell 45:06
This discussion about AI and AGI has treated the technology in the abstract, as if it’s the same wherever it comes from. Looking back, it seems as though if someone besides Marconi had invented the radio a month sooner, nothing would have been different. But if the Nazis had invented atomic weapons sooner, history would have been very different. And I’m wondering which category you think AI falls into. If [AI] is something [in] the atomic bomb category, why does this feature less in discussions—at least in economic discussions—about the implications of AI?
Sam Altman 45:46
Most of the conversations I have throughout most of the day are about that. I think it matters hugely how [AI] is developed and who develops [AI] first. I spend an order of magnitude more time thinking about that [issue] than economic questions. But I also think the economic questions are super important. Given that I believe [AI] is going to be continually deployed in the world, I think we also need to figure out the economic and policy issues that will lead to a good solution. So it’s not for a lack of belief in the extreme importance of who develops AI—that’s really why OpenAI exists. But [our current] conversation is focused on the economic and policy lens.
Phil Trammell 46:34
Even for [AI’s] economic implications, it seems who develops [AI] first might matter. [But there appears to be a] segmentation: in strategic conversations, people care about who develops [AI] first, but in [economic conversations, for example when we’re discussing] implications for wage distribution, [we think of AI] as an abstract technology.
Sam Altman 46:53
What I would say is that in the short-term issues, it matters less who develops AI. For long-term issues—if we [develop] AGI—[who develops this AGI] will matter much more. I think it’s a timeframe issue; [that’s] how I would frame it.
Anton Korinek 47:13
We have one question that Zoe Cremer put in the chat which is directed at Sam: How does your proposal, Sam, compare and contrast against Landemore and Tang’s radical proposal for distributed decision mechanisms? She writes, both ownership reform and real democracy result, in some ways, in the same thing. People who carry the risks of intervention actually have real control over those interventions. Is there a reason to prefer moving the world toward land taxation and distributed ownership over distributed decision making and citizens controlling public policies?
Sam Altman 47:59
I think we need to do both of them. I think that they are both really important. One of the most important questions that will come if we develop advanced AI is, “How do we [represent the] world’s preferences?” Let’s say we can solve the technical alignment problem. How do we decide whose values to align the AGI to? How do we get the world’s preferences included and taken into account? Of course, I don’t think we should literally make every decision about how AGI gets used decided in a heat-of-the-moment, passions-running-hot vote, by everybody on earth for each decision. But when it comes to framing the constitution of AI and distributing power within that to people, [we should] set guardrails we’re going to follow.
Anton Korinek 48:55
Bill, any thoughts on this question of how much of the distributional challenge we want to solve through private ownership versus through public distribution and public decisions?
William Gale 49:07
The problem is that the private system won’t solve the distribution problem—it will create a surplus which it will distribute according to the market system. For the public system, as Sam mentioned earlier, I share this incredible concern about the ability of the public system—and the social and political values in the underlying population—to do anything that’s both substantial and right. Sam’s piece makes this point right at the beginning: if these are big changes, we need policy to go big and actually get it right.
Anton Korinek 50:03
Thank you, Sam and Bill. Ben?
Ben Garfinkel 50:07
Thank you both. I have a question about making people owners as a way of redistributing wealth. What do you see as the significant difference between a system where wealth or capital is taxed and then redistributed in the form of income, but not capital ownership, versus a system where shares of companies are redistributed to people in the country, as opposed to just dividends or a portion of the welfare? What’s the importance of actually distributing ownership versus merely income and wealth?
William Gale 50:36
In the short term, it doesn’t matter much. But my impression is that people would be more likely to save shares than they would be to save cash. So there’s a potential, over the longer term—ten or twenty years—to build assets in a substantial share of the population. As Sam mentioned, a very large proportion of the population does not own equity and does not have a net worth above zero. So proposals like sharing corporate shares, or in a social policy context, baby bonds, could have effects over long periods of time, as you develop new generations that get used to saving. But in the short run, I’m not sure it would make that much of a difference.
Sam Altman 51:37
If we give someone $3,000 or one share of Amazon, and then Amazon goes up 25% the next year, in the first case people [say], “Yeah, fuck Jeff Bezos, why does he need to get richer?” But if they have a share of Amazon, [they say], “That’s awesome! I just benefited from that.” That’s a significant difference. We have not effectively taught just how powerful compound growth is. If people can feel that, I think it’ll lead to a more future-oriented society, which I think is better. I also believe that ownership mentality is really different and [that, with it,] the success of society is shared, not resented. People really want [the success] and work for it to happen. One of the magic things about Silicon Valley is the idea that [if] we compensate people in stock, [then] we get a very different amount of long-term buy-in from the team than we do with high cash salaries.
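Sam’s one-year example compounds quickly. A minimal sketch, extending his 25% figure over a decade purely for illustration (it is not a return forecast):

```python
# Sam's cash-versus-share example, extended. The 25% annual return is his
# illustrative figure, held constant for 10 years purely for illustration.
cash = 3_000.0         # the one-time cash grant: it does not compound
share_value = 3_000.0  # roughly one Amazon share at the time of this talk
ANNUAL_RETURN = 0.25

for year in range(10):
    share_value *= 1 + ANNUAL_RETURN

print(f"after 10 years: cash ${cash:,.0f}, share ${share_value:,.0f}")
# The cash is still $3,000; the share is worth ~$27,940. The gap is the
# compound growth Sam says recipients need to feel first-hand.
```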
Anton Korinek 52:44
Thank you, Sam. We have one more question, from Andrew Trask. You mentioned that prices fall based on abundance, which is created by the cost of intelligence falling, and you recommend that we tax corporations. In the extreme case of automation, what is the purpose of the corporation if it is no longer employing meaningful numbers of humans for primary survival tasks? That is to say, assuming healthy competitive dynamics, will that corporation be charging for its products at all, [or] will its products become free?
Sam Altman 53:23
I have a long and complicated answer to this question, but there’s no way I can [answer] it in one minute. Let’s say that people are still willing to pay for status, exclusivity, and a beautifully engineered thing that solves a real problem for them, over and above the cost of the materials and labour [that go into] a product, [as they do] today. The corporation can clearly charge for that.
Anton Korinek 53:52
Thank you. That was a great way to squeeze it into one minute.
So let me thank Sam and Bill, for joining us in today’s webinar and for sharing your really thought-provoking ideas on these questions. Let me also thank our audience for joining, and we hope to see you at a future GovAI webinar soon.