25May

Quantization and LLMs: Condensing Models to Manageable Sizes


 

The Scale and Complexity of LLMs

 
The incredible abilities of LLMs are powered by their vast neural networks which are made up of billions of parameters. These parameters are the result of training on extensive text corpora and are fine-tuned to make the models as accurate and versatile as possible. This level of complexity requires significant computational power for processing and storage.

 
 

The accompanying bar graph delineates the number of parameters across different scales of language models. As we move from smaller to larger models, we see a significant increase in the number of parameters, with 'small' language models at a modest few million parameters and 'large' models reaching tens of billions of parameters.

However, it is the GPT-3 model, at 175 billion parameters, that dwarfs the other models’ parameter counts. Not only does it use the most parameters of any model in the graph, it also powers the most recognizable generative AI application, ChatGPT. This towering presence on the graph is representative of other LLMs of its class, and it illustrates the scale, and the processing power, required to support such advanced AI systems.

 

The Cost of Running LLMs and Quantization

 
Deploying and operating complex models is costly: they require specialized hardware such as high-end GPUs and AI accelerators, whether rented as cloud computing or purchased outright, along with continuous energy consumption. Choosing an on-premises solution can reduce ongoing costs and brings flexibility in hardware choices and the freedom to run the system wherever it is needed, with a trade-off in maintenance and the need for skilled staff. Either way, high costs can make it challenging for small businesses to train and operate an advanced AI. Here is where quantization comes in handy.

 

What is Quantization?

 
Quantization is a technique that reduces the numerical precision of each parameter in a model, thereby decreasing its memory footprint. This is akin to compressing a high-resolution image to a lower resolution while retaining the essence and most important aspects, but at a reduced data size. This approach enables the deployment of LLMs on less capable hardware without substantial performance loss.

ChatGPT was trained and is deployed using thousands of NVIDIA DGX systems, millions of dollars of hardware, and tens of thousands more for infrastructure. Quantization can enable good proof-of-concept, or even fully-fledged, deployments on less spectacular (but still high-performance) hardware.

In the sections that follow, we will dissect the concept of quantization, its methodologies, and its significance in bridging the gap between the highly resource-intensive nature of LLMs and the practicalities of everyday technology use. With it, the transformative power of LLMs can become a staple of smaller-scale applications, offering vast benefits to a broader audience.

 

Basics of Quantization

 
Quantizing a large language model refers to the process of reducing the precision of numerical values used in the model. In the context of neural networks and deep learning models, including large language models, numerical values are typically represented as floating-point numbers with high precision (e.g., 32-bit or 16-bit floating-point format). Read more about Floating Point Precision here.

Quantization addresses this by converting these high-precision floating-point numbers into lower-precision representations, such as 16-bit floats or 8-bit integers, trading a small amount of precision for a model that is more memory-efficient and faster during both training and inference. As a result, training and inference require less storage, consume less memory, and can be executed more quickly on hardware that supports lower-precision computations.
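To make the idea concrete, here is a minimal sketch of affine (scale and zero-point) quantization of a single weight tensor to 8-bit integers, written with plain NumPy rather than any particular quantization library; the tensor and its values are made up purely for illustration.

```python
import numpy as np

# A toy FP32 weight tensor standing in for one tensor of a much larger model.
rng = np.random.default_rng(0)
w_fp32 = rng.normal(0.0, 0.5, size=(4, 4)).astype(np.float32)

# Affine quantization to int8: map the observed range [min, max] onto [-128, 127].
qmin, qmax = -128, 127
w_min, w_max = float(w_fp32.min()), float(w_fp32.max())
scale = (w_max - w_min) / (qmax - qmin)        # real-valued step per integer level
zero_point = int(round(qmin - w_min / scale))  # integer that represents 0.0

w_int8 = np.clip(np.round(w_fp32 / scale) + zero_point, qmin, qmax).astype(np.int8)

# Dequantize to approximate the original values and inspect the error.
w_dequant = (w_int8.astype(np.float32) - zero_point) * scale
print("max abs error:", float(np.abs(w_fp32 - w_dequant).max()))
print("fp32 bytes:", w_fp32.nbytes, "-> int8 bytes:", w_int8.nbytes)  # 4x smaller
```

Only the per-tensor scale and zero point need to be stored alongside the int8 values to recover a close approximation of the original weights.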

 

Types of Quantization

 
To add depth and complexity to the topic, it is critical to understand that quantization can be applied at various stages in the lifecycle of a model’s development and deployment. Each method has its distinct advantages and trade-offs and is selected based on the specific requirements and constraints of the use case.

 

1. Static Quantization

Static quantization is a technique in which the weights and activations of a model are quantized to a lower bit precision ahead of time and applied to all layers. The quantization parameters are computed once and remain fixed throughout inference, which makes static quantization a good fit when the memory budget of the target system is known in advance. A minimal code sketch follows the pros and cons below.

  • Pros of Static Quantization
    • Simplifies deployment planning as the quantization parameters are fixed.
    • Reduces model size, making it more suitable for edge devices and real-time applications.
  • Cons of Static Quantization
    • Because one broad, fixed quantization scheme is applied everywhere, certain parts of the model may suffer larger accuracy drops than others.
    • Limited adaptability to varying input patterns, and the fixed parameters cannot be updated after deployment.
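As one concrete illustration (not the only way to do it), PyTorch's eager-mode post-training static quantization matches this fixed-ahead-of-time behaviour: observers collect activation ranges on calibration data, after which both weight and activation parameters are frozen. The toy model, the random calibration data, and the "fbgemm" CPU backend below are assumptions for the sketch, and API details can differ between PyTorch versions.

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # fp32 -> int8 boundary
        self.fc1 = nn.Linear(64, 64)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(64, 10)
        self.dequant = torch.quantization.DeQuantStub()  # int8 -> fp32 boundary

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return self.dequant(x)

model = TinyModel().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")  # x86 CPU backend
prepared = torch.quantization.prepare(model)   # inserts min/max observers

# Calibration: run representative data so the observers record activation ranges.
with torch.no_grad():
    for _ in range(10):
        prepared(torch.randn(32, 64))

quantized = torch.quantization.convert(prepared)  # fixed int8 weights and activations
print(quantized)
```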

 

2. Dynamic Quantization

Dynamic quantization involves quantizing the weights statically while quantizing the activations on the fly during inference. The weights are converted ahead of time, whereas activation scales are computed dynamically as data passes through the network, so different parts of the model can effectively run at different precisions rather than defaulting to one fixed quantization scheme. A short code sketch follows the pros and cons below.

  • Pros of Dynamic Quantization
    • Balances model compression and runtime efficiency without significant drop in accuracy.
    • Useful for models where activation precision is more critical than weight precision.
  • Cons of Dynamic Quantization
    • Performance improvements aren’t predictable compared to static methods (but this isn’t necessarily a bad thing).
    • Dynamic calculation adds computational overhead, so training and inference are slower than with the other quantization methods, while still being lighter weight than running without quantization.
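In PyTorch, for instance, dynamic quantization of a trained model is close to a one-liner: the weights of the listed module types are converted to int8 ahead of time, and activation quantization parameters are computed at run time. The small model below is just a placeholder for a real pre-trained network.

```python
import torch
import torch.nn as nn

# Stand-in for a trained float model (e.g., one with large Linear layers).
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()

# Quantize nn.Linear weights to int8; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

out = quantized(torch.randn(1, 128))
print(out.shape)  # torch.Size([1, 10])
```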

 

3. Post-Training Quantization (PTQ)

In this technique, quantization is applied after the model has finished training. It involves analyzing the distribution of weights and activations and then mapping these values to a lower bit depth. PTQ is commonly used for deployment on resource-constrained devices such as edge devices and mobile phones, and it can be either static or dynamic. A small calibration sketch follows the pros and cons below.

  • Pros of PTQ
    • Can be applied directly to a pre-trained model without the need for retraining.
    • Reduces the model size and decreases memory requirements.
    • Improved inference speeds enabling faster computations during and after deployment.
  • Cons of PTQ
    • Potential loss in model accuracy due to the approximation of weights.
    • Requires careful calibration and fine tuning to mitigate quantization errors.
    • May not be optimal for all types of models, particularly those sensitive to weight precision.
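To illustrate the "analyze the distribution, then map to a lower bit depth" step, here is a framework-agnostic sketch of the calibration pass used in static PTQ: a few batches of representative data flow through a pretrained layer, the observed activation range is recorded, and a fixed scale and zero point are derived from it. The layer, the data, and the bit width are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
weight = rng.normal(size=(64, 32)).astype(np.float32)  # a "pretrained" layer's weights

# Calibration: track the activation range over representative batches.
act_min, act_max = np.inf, -np.inf
for _ in range(20):                                     # 20 calibration batches
    batch = rng.normal(size=(8, 32)).astype(np.float32)
    activations = np.maximum(batch @ weight.T, 0.0)     # Linear followed by ReLU
    act_min = min(act_min, float(activations.min()))
    act_max = max(act_max, float(activations.max()))

# Map the observed range onto unsigned 8-bit values [0, 255], fixed thereafter.
qmin, qmax = 0, 255
scale = (act_max - act_min) / (qmax - qmin)
zero_point = int(round(qmin - act_min / scale))
print(f"fixed activation scale = {scale:.5f}, zero point = {zero_point}")
```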

 

4. Quantization Aware Training (QAT)

During training, the model is made aware of the quantization operations that will be applied at inference time, and its parameters are adjusted accordingly. This allows the model to learn to compensate for quantization-induced errors. A minimal sketch of this flow follows the pros and cons below.

  • Pros of QAT
    • Tends to preserve model accuracy better than PTQ, since training explicitly accounts for quantization errors.
    • More robust for models that are sensitive to precision, and delivers better inference quality even at lower precisions.
  • Cons of QAT
    • Requires retraining the model resulting in longer training times.
    • More computationally intensive, since quantization is simulated throughout training.
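A hedged sketch of what QAT can look like in PyTorch's eager mode is shown below: fake-quantization modules are inserted before fine-tuning so the optimizer sees quantization noise, and the model is converted to real int8 only afterwards. The toy model, random data, and hyperparameters are placeholders, and exact APIs vary across PyTorch versions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    torch.quantization.QuantStub(),
    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10),
    torch.quantization.DeQuantStub(),
)

model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
prepared = torch.quantization.prepare_qat(model)  # inserts fake-quant observers

# Short fine-tuning loop so the weights adapt to quantization noise.
optimizer = torch.optim.SGD(prepared.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss = loss_fn(prepared(x), y)
    loss.backward()
    optimizer.step()

# After fine-tuning, convert to a genuinely int8 model for deployment.
prepared.eval()
model_int8 = torch.quantization.convert(prepared)
```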

 

5. Binary and Ternary Quantization

These methods quantize the weights to either two values (binary) or three values (ternary), representing the most extreme form of quantization. Weights are constrained to +1 and -1 for binary quantization, or +1, 0, and -1 for ternary quantization, during or after training. This drastically reduces the number of possible weight values, and with it the storage and compute required; a small illustrative sketch follows the pros and cons below.

  • Pros of Binary and Ternary Quantization
    • Maximizes model compression and inference speed, with minimal memory requirements.
    • Fast inference and lightweight quantization arithmetic make these methods usable on underpowered hardware.
  • Cons of Binary and Ternary Quantization
    • The extreme compression and reduced precision result in a significant drop in accuracy.
    • Not suitable for all types of tasks or datasets, and struggles with complex tasks.
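For illustration only, the snippet below binarizes a weight tensor to two values and ternarizes it to three, each with a single per-tensor scaling factor to preserve overall magnitude (in the spirit of BinaryConnect and ternary weight network schemes). A real training recipe would also need tricks such as straight-through gradient estimators, which are omitted here; the tensor and the 0.7 threshold heuristic are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(0.0, 0.1, size=(8, 8)).astype(np.float32)

# Binary: keep only the sign, plus one scalar scale to preserve average magnitude.
alpha_b = float(np.abs(w).mean())
w_binary = alpha_b * np.sign(w)                       # values in {-alpha_b, +alpha_b}

# Ternary: zero out small weights, keep the sign (and a scale) for the rest.
delta = 0.7 * float(np.abs(w).mean())                 # threshold below which weights become 0
mask = np.abs(w) > delta
alpha_t = float(np.abs(w[mask]).mean()) if mask.any() else 0.0
w_ternary = np.where(mask, alpha_t * np.sign(w), 0.0)

print("mean abs error, binary: ", float(np.abs(w - w_binary).mean()))
print("mean abs error, ternary:", float(np.abs(w - w_ternary).mean()))
```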

 

The Benefits & Challenges of Quantization

 
Before and after quantization

The quantization of large language models brings multiple operational benefits. Primarily, it achieves a significant reduction in memory requirements: the goal for a post-quantization model is a notably smaller memory footprint. This efficiency permits deployment on platforms with more modest memory capabilities, and the reduced processing power needed to run a quantized model translates directly into higher inference speeds and quicker response times that enhance the user experience.

On the other hand, quantization can also introduce some loss in model accuracy, since it involves approximating real numbers. The challenge is to quantize the model without significantly affecting its performance, which is why you should measure your model's accuracy, size, and inference time before and after quantization to gauge effectiveness and efficiency. A simple way to run such a before-and-after check is sketched below.
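As one minimal, hedged example of such a check (using PyTorch dynamic quantization and a placeholder model; accuracy would be evaluated separately on your own validation set), you can compare serialized size and per-batch latency before and after quantizing:

```python
import os
import tempfile
import time

import torch
import torch.nn as nn

def size_mb(model):
    """Serialize the model's weights to a temporary file and report the size in MB."""
    with tempfile.NamedTemporaryFile(suffix=".pt", delete=False) as f:
        path = f.name
    torch.save(model.state_dict(), path)
    mb = os.path.getsize(path) / 1e6
    os.remove(path)
    return mb

def latency_ms(model, x, iters=100):
    """Average wall-clock time per forward pass, in milliseconds."""
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
    return (time.perf_counter() - start) / iters * 1e3

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(64, 512)
print(f"fp32: {size_mb(model):.2f} MB, {latency_ms(model, x):.3f} ms/batch")
print(f"int8: {size_mb(quantized):.2f} MB, {latency_ms(quantized, x):.3f} ms/batch")
```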

By optimizing the balance between performance and resource consumption, quantization not only broadens the accessibility of LLMs but also contributes to more sustainable computing practices.
 
Original. Republished with permission.
 
 

Kevin Vu manages Exxact Corp blog and works with many of its talented authors who write about different aspects of Deep Learning.



Source link

24May

Treaty Reform in the Scales of History – European Law Blog


Blogpost 28/2024

The European Parliament’s recent proposal to remove the unanimity requirement from Article 19 TFEU (non-discrimination legislation) echoes a centuries-old US debate on voting and minority rights. James Madison, the ‘father’ of the US Constitution, defended majority voting as a necessary condition for impartial law-making and minority protection in multi-state unions. Conversely, John C. Calhoun, the then US Vice President and a key advocate of slavery, sought to maintain the racial status quo by advocating for a unanimity-based structure.

The purpose of the blog is twofold. First,  it utilises US constitutional history to show how unanimity voting can function as a tool to perpetuate the unjust status quo to the detriment of minority rights. In this regard, partial similarities are drawn between the current Article 19 TFEU and Calhoun’s voting model. Secondly, it contrasts the pragmatic nature of the travaux préparatoires of Article 19 TFEU with the principled approach of the US debate. This juxtaposition underscores the importance of anchoring the proposed treaty changes in the foundational principles of Western constitutionalism. Specifically, it highlights  the nemo judex rule — ‘not being the judge in one’s own cause’ — a principle that shaped the US constitutional debate but was surprisingly absent in the drafting history of Article 19 TFEU. The blog shows why this particular principle should be considered in the debate on the European Parliament’s proposal to amend Article  19 TFEU.  Addressing the foundational role of this principle in Western constitutional theory provides additional support for removing the unanimity requirement from Article 19 TFEU. This change could help ensure better protection of minority rights in line with the values enshrined in Article 2 TEU. It is important to clarify that the term ‘minorities’ here refers to underprivileged segments of societies based on racial or ethnic origin, religion, or belief, among the main grounds protected under Article 19 TFEU – aligning with the EU Commission’s definition.

 

 

The Madison (majority) vs. Calhoun (unanimity) debate

In making the case for a union over unitary states, Madison argued that in unitary states, the majority’s control of legislative bodies enables them to effectively ‘be the judge of their own case’ and legislate in a manner that serves their interests, often to the detriment of minorities. The best means to counteract majoritarian biases is for states to integrate within a larger union, where diverse majorities can balance each other, compelling agreement on common principles that are more likely to lean towards egalitarianism. Madison’s argument from the nemo judex rule is complex and rests on certain assumptions, but the chart below visualizes the essence of his argument.

Assume there are five similarly populated states, each dominated by a racial majority with other dispersed racial minorities. The states then come together into an integrative union.  For simple arithmetical reasons, strong state majorities get diluted at the union level (for instance, group A in the chart shifts from 90% domestically to 18% at the union level).  With majority-based voting, no single group can dominate independently. Rather, groups must  compromise to address common interests. This mutual check on state majorities can provide some  protection for minorities by ensuring that no single domestic group unilaterally decides matters for the whole union.

This Madisonian argument has been tested in many cases, as shown by Halberstam, among others. Minority rights in America have improved significantly in the so-called ‘Civil Rights Era’ when these rights were decided at the federal level rather than left to the majorities of states. Other examples  in the US include various fiscal and economic legislation, where voting at the union level broke the abusive control of local majorities and provided more balanced outcomes.

The most (in)famous challenge to Madison’s argument came from Calhoun, the twice US Vice President and the American South’s ‘evil genius’. Calhoun was known for shifting the slavery debate from being a ‘necessary evil’ to being a ‘morally good’ practice, and his theory on voting is closely related to his position on slavery. While accepting the advantages of a multi-state union, he feared that majority voting would lead to the emancipation of slaves and disturb the ‘racial hierarchy’. He thus offered a competing voting mechanism rooted in unanimity, or what he termed ‘concurring majorities’. To challenge Madison’s reasoning, he employed two arguments which may resonate with EU lawyers: the indivisibility of sovereignty and its concomitant ‘no demos’ thesis. Calhoun noted that sovereignty is ‘an entire thing;—to divide, is,—to destroy it’. To him, this indivisible sovereignty lies with ‘the people of several states’ because there is ‘no other people’ at the union level. Therefore, his concurring majority model means that majority voting is only acceptable within states (because the people there are sovereign) but not at the union level (where there is no demos and no sovereignty), and thus the union must function on the basis of unanimity.

Space precludes a full discussion of Madison’s reply to Calhoun (which is discussed elsewhere). It is worth noting here that unanimity voting undermines the ‘nemo judex’ rule by allowing one state majority to judge its own case and block legislation favourable to minorities across the entire union. In this sense, it amounts to the tyranny of the few. The Madison-Calhoun debate over majority versus unanimity voting was ultimately resolved in Madison’s favour in two ways. First, the outcome of the civil war relegated Calhoun and his ideas to the dustbin of history.

Second, many comparative case studies attest to the effectiveness of Madison’s argument that majority voting in a multi-state union tends to, subject to some conditions, provide more egalitarian outcomes. Extensive literature covers this issue, citing examples such as the improvement of minority rights in the US when regulations shifted to the federal level compared to the state level, as previously discussed. In the EU, some highlight how the regulation of sex equality in the workplace became more egalitarian through joining the European Community compared to leaving the matter to domestic law. Other examples abound, as discussed by Halberstam, among others.

With this comparative and historical background in mind, we can now explore how this debate bears on Art 19 TFEU and the proposed treaty revision.

 

Calhoun vs Article 19 TFEU’s present

While the issue of slavery has receded into the annals of history, the rationale behind Calhoun’s unanimity theory has found echoes in the EU’s Article 19, albeit inadvertently. Article 19 mandates unanimity among Member States in the Council to ‘combat discrimination based on sex, racial or ethnic origin’ among other grounds. It must be noted that the similarity between the EU’s approach and Calhoun’s is only partial because of the divergent socio-political circumstances that he laboured under compared to today.

Nonetheless, this partiality does not exclude some similarity in essence and consequence. In essence, his mechanism aimed to ensure that the union would act only through consensus, which is comparable to Article 19 TFEU’s requirement for consensus to ‘combat discrimination’. In terms of consequence, the similarity lies in perpetuating the status quo. At the heart of Calhoun’s theory is the desire to insulate the status quo from change as much as possible. Yet, the status quo, as Sunstein notes, is often ‘neither neutral nor just’. To insulate the status quo from change is to perpetuate the injustices befalling many of the underrepresented parts of society. Article 19 TFEU insulates the status quo of EU minorities and its concomitant injustice. While Calhoun’s model was never applied, Article 19 TFEU has been.

Since its adoption, the legislative reliance on Article 19 TFEU has been exceedingly rare. The only two measures enacted using the article date back to 2000 and were induced by the Haider Affair as an ‘unusual twist of political fate’.  Nonetheless, after more than two decades, the consequence of Article 19 TFEU, as many have noted, has rendered the EU ‘minority agnostic’ and its contribution ‘limited’ to ‘all but the most anodyne of actions’, leaving minorities at the mercy of the ‘tyranny of veto’.

An example of the impact of unanimity in perpetuating inaction is highlighted in the recent report of the EP’s Committee on Civil Liberties, Justice and Home Affairs. It laments the 16-year failure to pass the EU Horizontal Directive on equal treatment across different grounds in respect of goods and services, which has remained unadopted since the 2008 Commission proposal due to a ‘blockage’ at the Council level. The Council’s approach stands in stark contrast to that of the Parliament, which, unshackled by unanimity, approved the proposal as early as 2009.

The impact of unanimity is also shown by comparing Art 19 TFEU to areas or institutions where unanimity is not required. Most obviously, sex equality, generally unshackled by unanimity, remains the most protected ground, with nine directives successfully enacted and transposed.

While space precludes a full analysis of the substance of EU non-discrimination law beyond gender, it suffices to say that unanimity has been criticised for slowing the development of this area of law to the detriment of racial, ethnic and religious minorities. For instance, the Commission blamed Article 19 TFEU’s unanimity requirement for leading to ‘an inconsistent legal framework and an incoherent impact of Union law on people’s lives’. Moreover, de Búrca remarked that the Race Equality Directive is a ‘more genuine framework in nature, in so far as it contains a general prescription … to which States must commit themselves, but without prescribing in detail how this is to be achieved’. Relatedly, the existing directives, as Bell argues, almost exclusively rely on ‘passive’ protection through ‘complaints-based’ enforcement, which is particularly insufficient to rectify the historical inequalities of racism. According to the Commission’s own reckoning, the existing legislative framework ‘is not enough to resolve the deep-rooted social exclusion’. Many have referred to the failure to prevent the ill-treatment of Roma minorities in many member states. Kornezov has shown that the dangers of unanimity for minority rights extend even beyond inaction, as it can make things worse for minorities domestically by disincentivizing states from providing any special advantages for their local minorities. He remarked that ‘virtually any right reserved for a special group of citizens of a particular Member State who belong to a minority must be opened up to any EU citizen from other Member States’. Thus, others have lamented the lack of an EU legislative response to fix these hurdles, as well as to address matters such as affirmative action and other proactive measures needed to combat discrimination.

Another example relates to how the inability to pass further legislative measures hinders jurisprudential development. Considering the failure to pass the horizontal 2008 directive, as EU law currently stands, it would be ‘lawful’ to deny services to someone manifesting a religious symbol, be it a Sikh turban, a Jewish yarmulke, or a Muslim headscarf. The Court cannot simply extend the protection here to those minorities. As Advocate General Mazák noted, ‘Article 19 TFEU is simply an empowering provision’ and as such ‘it cannot have direct effect’. He cautioned that any judicial activism in this area ‘[n]ot only would … raise serious concerns in relation to legal certainty, it would also call into question the distribution of competence between the Community and the Member States, and the attribution of powers under the Treaty in general’. The circularity and the ‘constitutional catch-22’ are obvious here. Unanimity cannot be interpreted away, and the Council, with its current 27 Member States, cannot easily agree to expand legislation beyond the existing measures.

Overall, the negative impact of the unanimity requirement in Art 19 TFEU is too well documented, in the Commission’s communications as well as in scholarly work, to warrant further summary here. This dissatisfaction lies at the core of the proposed amendment of Art 19, to which we now turn.

 

Travaux préparatoires and Article 19 TFEU’s Future

Following the conference on the Future of Europe, which gathered input from European citizens and resulted in forty-nine proposals, the European Parliament tasked the Committee on Constitutional Affairs (AFCO) with finalising a report  on the draft proposed amendments. In November 2023, the Parliament voted in favour of a wide range of amendments and called for a convention to revise the treaty.

The vote included approving a draft proposal to amend Art 19 TFEU by introducing majority voting instead of unanimity, as well as expanding ‘non-discrimination protections to gender, social origin, language, political opinion and membership of a national minority’. While this is a commendable step, the absence of reasoning from first principles in the accompanying Parliamentary reports raises an alarm familiar from the travaux préparatoires of Art 19 TFEU (ex-Art 13 TEC). The drafting history of the article channelled Calhoun (unanimity as a concomitant of indivisible sovereignty) but not Madison and his use of the European sources citing the nemo judex rule.

Archives show that the original draft of Article 19 (ex 13 TEC) in the Amsterdam Treaty contained qualified majority voting, but pressure from a few Member States led by the UK managed to weaken the Article by requiring unanimity for its use. The UK Parliament’s archives demonstrate that the British view, which concurring member states hid behind, held, much like Calhoun, that ‘the defence of sovereignty is bound up with the concept of veto’.

While certain parallels can be drawn between Calhoun’s argument from sovereignty and the position of the UK-led faction, it is essential to underscore an important distinction between the position of Member States endorsing majority voting and that of Madison. Whilst Madison made a clear recourse to first constitutional principles, representatives of European states supporting majority voting relied only on pragmatic arguments which were described as lacking a clear ‘direction’. Commentators noted that the Irish Presidency ‘failed to push the negotiations along’ and to articulate compelling criteria to determine which matters should be subject to qualified majority voting.

What is surprising is that Madison directly engaged with sources of European constitutional theory using the nemo judex rule. The very same principle was overlooked in the negotiations over the Article’s decision-making procedure. This oversight is striking considering that the principle was a leitmotiv of many foundational texts of European constitutional theory (e.g. in Locke and Hobbes). More recently, the maxim has been invoked before the CJEU and lays the foundation of the right to an impartial tribunal enshrined in Article 47 of the Charter. The absence of foundational principles allowed the unanimity side to prevail on pragmatic grounds, without fostering the constitutionally enriching debate witnessed in the US.

The nemo judex argument and its history show unanimity’s particularly disproportionate cost for racial, religious and ethnic minorities. Opting for unanimity for non-discrimination legislation speaks volumes about the priority given to this domain. It demonstrates either complete disregard for foundational constitutional theory or intentional disregard of minorities. Whilst Article 2 TEU upgrades minority rights to an EU value, the unanimity choice relegates their protection to the lowest level.

Advocates of reform should not be discouraged by their opponents wielding the sovereignty argument to defend unanimity. This argument would have been convincing had Article 19 not been directly preceded by Article 18 (discrimination on grounds of nationality), which requires majority voting, as does Article 157 TFEU (equal opportunities of men and women). Moreover, recourse to majority voting does not threaten states, and there are safeguards for states, which I detail here. Even in sovereignty-guarding states like the UK, since the Factortame II judgment, courts have reconciled EU powers with sovereignty on the premise that Member States have voluntarily transferred some powers to the EU and sovereignty is preserved through retaining the ultimate power to exit. Additionally, as Triantafyllou noted, despite the EU’s claim to being a ‘new legal order’, it is lagging behind many international organisations, which now use majority voting rather than unanimity to amend their own charters.

To be clear, while the nemo judex rule is crucial for minority rights, it does not necessitate the removal of unanimity in areas such as the Common Foreign and Security Policy (CFSP). Such an area, for instance, does not necessarily involve a direct conflict between racial majorities and minorities in which the nemo judex in causa sua rule applies. Therefore, deciding which voting procedure is suitable for this area may require balancing various competing factors, which extends beyond the scope of the current blog and is explained elsewhere.

To conclude, the blog uses insights from comparative constitutional history to show how unanimity can function as a tool to perpetuate the unjust status quo to the detriment of minority rights. This analysis aims to support the European Parliament’s proposal to move Article 19 to majority voting, akin to Articles 18 and 157 TFEU. This would allow the EU to strengthen its much-needed role in this area and to avoid the pitfalls that befell Calhoun’s racially motivated model. It can also enable the EU to uphold the values outlined in Article 2 TEU, which explicitly include minority rights, and to respect the centuries-long history of the nemo judex in causa sua principle in Western constitutional theory. Overall, understanding the interlinkages between the constitutional principle of nemo judex and the unanimity versus majority debate is of timely relevance to larger debates within the EU.

Admittedly, treaty amendment is complex and difficult to secure, but history may counsel against despair. The introduction of the EU’s competence to include non-discrimination beyond gender in the first place was only made possible after relentless activism, contributions from the Kahn Commission Report, and the political efforts of the European Parliament. Now, revising the treaty seems to be ‘gradually gaining ground’ — possibly in anticipation of the EU’s further enlargement. If the Parliament’s call for a convention materialises, heeding lessons from comparative history and reasoning from first principles of Western constitutionalism can provide intellectual ammunition to the reform endeavours against Calhoun-like thinking.



Source link

22May

Bruce Schneier and Gillian Hadfield on Securing a World of Physically Capable Computers


Computer security is no longer about data; it’s about life and property. This change makes an enormous difference, and will inevitably disrupt technology industries. Firstly – data authentication and integrity will become more important than confidentiality. Secondly – our largely regulation-free Internet will become a thing of the past. Soon we will no longer have a choice between government regulation and no government regulation. Our choice is between smart government regulation and stupid government regulation.

Given this future, Bruce Schneier makes a case for why it is vital that we look back at what we’ve learned from past attempts to secure these systems, and forward at what technologies, laws, regulations, economic incentives, and social norms we need to secure them in the future. Bruce will also discuss how AI could be used to benefit cybersecurity, and how government regulation in the cybersecurity realm could suggest ways forward for government regulation for AI.



Source link

22May

David Autor, Katya Klinova & Ioana Marinescu on the Work of the Future: Building Better Jobs in an Age of Intelligent Machines


David Autor is Ford Professor of Economics and associate department head of the Massachusetts Institute of Technology Department of Economics. He is also Faculty Research Associate of the National Bureau of Economic Research, Research Affiliate of the Abdul Latif Jameel Poverty Action Lab, Co-director of the MIT School Effectiveness and Inequality Initiative, Director of the NBER Disability Research Center and former editor in chief of the Journal of Economic Perspectives. He is an elected officer of the American Economic Association and the Society of Labor Economists and a fellow of the Econometric Society.

Katya Klinova directs the strategy and execution of the AI, Labor, and the Economy Research Programs at the Partnership on AI, focusing on studying the mechanisms for steering AI progress towards greater equality of opportunity and improving the working conditions along the AI supply chain. In this role, she oversees multiple programs including the AI and Shared Prosperity Initiative.

Ioana Marinescu is assistant professor at the University of Pennsylvania School of Social Policy & Practice, and a Faculty Research Fellow at the National Bureau of Economic Research. She studies the labor market to craft policies that can enhance employment, productivity, and economic security. Her research expertise includes wage determination and monopsony power, antitrust law for the labor market, the universal basic income, unemployment insurance, the minimum wage, and employment contracts.

You can watch a recording of the event here or read the transcript below. Slides:  David Autor –  Katya Klinova

Anton Korinek:

Welcome to all the human and artificial intelligences around the globe, who have joined us for today’s webinar on the governance and economics of AI. I’m Anton Korinek. I’m an economist at the University of Virginia and a Research Affiliate at the Centre of the Governance of AI, which is part of the Oxford Future of Humanity Institute and is hosting the event. Let me thank the Centre and in particular Markus Anderljung and Anne le Roux, for their support.

Our presenter today is David Autor, Ford Professor of Economics at MIT. David has earned so many honors and awards that I could not list them if I used the entire webinar, so let me just say that he is the world’s top authority when it comes to analyzing the effects of automation on the labor market. As discussants, we have Katya Klinova from the Partnership on AI and Ioana Marinescu from the School of Social Policy at Penn who will share their comments on David’s presentation. I will tell you a little more about them when they take the stage to give us their comments.

The topic of our webinar is “The Work of the Future: Building Better Jobs in an Age of Intelligent Machines” – and the title in fact reflects the title of a report that David released late last year at the MIT Task Force on the Future of Work, which he is co-chairing. The task force was formed by the President of MIT three years ago to analyze the relationship between emerging technologies and work, to help shape the public discourse around realistic expectations of technology, and to explore strategies to enable a future of shared prosperity. I will leave it to David to tell you more about the task force and the report.

But before handing over the mic to David, let me perhaps emphasize a point that we are particularly interested in at the Future of Humanity Institute and that I hope you can speak to, David:

Your report reflects a really comprehensive reading of the labor market developments that were triggered by automation in recent decades and, in some sense, you are suggesting that the coming decades will kind of continue that trend.  

And indeed, with the narrow AI systems that we have in today’s world, that view may be well-justified and based on that, I believe that your report will be really an excellent guide for policy for perhaps the next decade.

But now let me do something that today’s narrow AI system cannot do, but we humans are very good at: to speculate about the future using just a tiny bit of data and a highly abstract meta model of the world.

So if we extrapolate based on our technological trajectory, our currently narrow and brittle AI systems are becoming broader and more robust, and in the next few decades, they may well reach a point where they surpass human capabilities in substantially all domains. So economically speaking, employing humans may be a dominated technology then, just like we don’t use horses for transportation anymore. Now this would be a marked rupture from the past, but it would not be the first time that human technology has fundamentally altered the course of history.  

I am afraid that we may lull ourselves in a little bit of a false sense of security if we do not consider this possibility when we speak about the Future of Work. And so I would really appreciate it if you can include your thoughts on this possibility in your presentation.

And now without further ado, the floor is yours, David!

David Autor:

Thank you. So the question that you just asked is not one I exactly was set up to answer. So it’s not in my slides, but let me let me sort of circle back to it later, rather than taking on the sort of the ITF horse scenario. Let me start with the report itself, and I’m going to share my screen to do that. And thank you all. It’s pleasure to be here. Thank you for inviting me. Thank you to Ioana and to Katya for agreeing to discuss, and thanks for all of your attention. And if you guys all want to now tune me out and watch the Biden inauguration, I don’t fault you for that.

The title of the report is Building Better Jobs in the Age of Intelligent Machines, and this was a taskforce set up by President Rafael Reif of MIT. It was co-led by Professor David Mindell, who's both a historian and an engineer. He's in AeroAstro [Aeronautics and Astronautics] and the School of Humanities, Arts, and Social Sciences at MIT. And Elizabeth Reynolds, who's the head of the MIT Industrial Performance Center.

So since Anton said some of this, let me not linger. The purpose of the taskforce was constructive, to sort of survey the landscape, ask what’s changing, and then ask how can we design and leverage innovations and institutions to maximise the benefits and minimise the harms of changes that are underway. Lots of people were involved. And I won’t say them by name. And let me just kind of start, I’m going to start with the economic context. Why? One I’m an economist, first and foremost. And second of all, I think it’s, it’s the thing that frames the debate and motivates the entire discussion. And then I’ll step back and talk about technological and institutional forces. But I want to start with the economics.

As many of you will be aware, we’ve been talking about the obsolescence of human labour for quite a long time, for a couple of centuries. And there have been periods, periodic moments of great concern. So you’re all familiar with the Luddites. And that’s not the only time. Even prior to the Great Depression, the US Secretary of Labour was talking about the notion that machines would be used for, quote, “scrapping men”, as opposed to scrapping machines we would be scrapping people. In the 1960s President Johnson of the United States, set up a Blue Ribbon Commission on automation and employment. And the concern at that time was that in the post-war period, productivity was rising so fast, that it would threaten to outstrip demand, or at least that was the concern. And so the outcome would be mass unemployment, because there’ll be insufficient demand to keep up with all of the supply.

Of course, none of those scenarios, those dreaded, dreaded scenarios have come to pass, at least in the form that people envisioned them. This figure just reminds you that the fraction of the adult population here in the United States – this has to be true in most advanced economies – has risen: the fraction [of the population] working has risen over the last 100 years. It’s risen because women have left unpaid, often highly constrictive, domestic unpaid employment to enter the paid labour force. The fraction of males working has come down over time, but generally, that’s a positive feature, of reflecting the fact that people don’t have to work until the point at which they expire. So we have not, despite decades, you know, centuries of concern about the possibility of work, we haven’t seen it. That doesn’t mean we can’t ever see it. But there’s no current evidence that we’re anywhere close to running out of jobs. In fact, prior to the pandemic, we were in the United States as close to full employment as we have been in quite a long time. And that’s true, as the Economist has reminded us, throughout the developed world, throughout the industrialised world.

So let me say, Why haven’t we run out of work so far in any case? So really several answers to that question. They go from the kind of the most prosaic to the most interesting. The most prosaic answer is that when we automate, we become more productive, that makes us wealthier, you consume more, and that creates work. So that’s one answer.

The second answer, which I think is less obvious, but more important, in fact, is that automation, we think of automation, or many of us think of automation as eliminating tasks that would be true for artificial intelligence that will be true for many forms of machinery and so on. But automation does do that. It absolutely does eliminate certain types of work. Simultaneously, it makes people more productive in the work that remains – often because what automation does is give us better tools. You can see this at all levels: roofers use pneumatic nail guns to hang shingles; doctors deploy batteries of tests to make diagnoses; architects rapidly render designs on the computers; teachers deliver lessons through telepresence; long haul truckers use route planning software to make sure they never carry an empty load. So a second reason that automation doesn’t simply eliminate work is that it makes us much more productive with the work that we do, and that increases our marginal product, it lowers the price of goods and services, again, boosts demand. And so it makes people more valuable. There’s no way that we could command the wages we do – have such high marginal products – if we didn’t have vastly improved tools that come from our machines and our computers and artificial intelligence.

And the final reason that automation has not eliminated work, in addition to creating wealth and complementing the work that we do, is that it leads to a lot of new work. This is a figure from an ongoing working paper, actually a working project, not a paper yet, that I’m doing with Anna Salomons at Utrecht University and Bryan Seegmiller, who’s an MIT PhD student. And what we do here is we look at employment across 12 categories of jobs, covering all of US employment. In 1940, for example – the blue bar is the fraction of all employment – in 1940, more than 25% of work was in production, was in manufacturing, almost 20% was in farming and mining, and everything else was much smaller. So those two categories comprised about half of all jobs. If you look in 2018, the height of the maroon and the teal bar together, shows you that in 2018, about 22% of employment was in professional work, about 14% in clerical administrative work, and a good amount in personal services.

So the composition of employment has changed enormously. Now I want to draw your attention just to the maroon or the pink, depending on how it looks on your monitor. This shows you the fraction of work in 2018 that exists in occupations that had not yet been invented in 1940. I’ll define what that term means in one second, but let me just say, if you add these up, what you see is that more than two thirds of all the jobs that people are doing in 2018 are jobs that did not exist in 1940. Let me give you some examples of that and then explain where that comes from. So here are jobs that didn’t exist, that were added to the US Census by decade. So in 1940 automatic welding machine operators were added. In 1950, airplane designers – let me just read through the list – textile chemists, computer application engineers, controllers, remotely piloted vehicles, certified medical technicians, artificial intelligence specialists, wind turbine technicians, pediatric vascular surgeons. So in this list, what you can see is these are primarily jobs where new expertise is demanded by the introduction of new technologies, right? You didn’t need airplane designers before we had the Wright brothers, you didn’t need computer applications of engines before we had computers. And all these medical specialties, of course, come from deepening knowledge. And so when we create new technologies, we create new demands for human expertise to service, to implement, to design, to advance, to apply those technologies.

Part of where your work comes from is that as we make the world more complicated and interesting, we create new work for ourselves within it. Now let me turn to the other side of this figure that I was covering up. Here are some other jobs that were added by decade: gambling dealers, beauticians, pageant directors, hypnotherapists, chat room hosts, sommeliers, drama therapists, these are all jobs that obviously they do not obviously have a technological component, but instead reflect rising incomes, creating demand for new luxury goods and services. Right so many of these things, mental health counsellors, sommeliers, drama therapist would not have been perceived as needed, some decades ago, but now are obviously demanded and paid for, supplied by people, primarily. And so what this suggests is that new work doesn’t just emerge, per se, from technology, but even from rising incomes and market scale themselves create these new opportunities that people jump into.

Let me just say, from where did we get this list? To do this, we’re building on work by Jeff Lin, who’s an economist at the Federal Reserve Bank of Philadelphia. And what we’ve done is we’ve taken historical census documents and used these kind of micro lists of occupations and industries and catalogued their changes over decades. And then we’re analysing where those come from. I’d be happy to talk more about that. But let me know.

The point I want to take away here is, as we change the world, we eliminate work, we create work, and we also increase wealth. And those processes have roughly tended to keep in balance over the course of decades. There’s not a law that says that they have to do that. But they have tended to do that. Okay, so if that’s the case, what’s to worry about? My goal here is not just to tell you not to worry about anything. I think there’s plenty to worry about. It’s just not obvious to me we’re worried about the right thing.

The concern that, really, I find very focal is the disjuncture between productivity and compensation growth, and many people call this the Great Divergence. This just shows in the US, after the mid 1970s trajectory of productivity growth, remains pretty steep, not as fast in the immediate post war period. That’s this purple line, right. So this is the post war period, and then it slows down in the 70s, picks up back up, again, not as fast. This is average compensation growth, what the average worker receives, it keeps pace with productivity until the early 2000s, when they diverged here, that’s the falling labour share. But then the real cause of concern, of course, is the flattening of median compensation after this period. And so what it says is, productivity is rising, average earnings are rising, but inequality is increasing so fast, that the median person is seeing almost none, the median worker is seeing almost none of this productivity growth. Now, let me say you can quibble, it’s reasonable to quibble: Well, is the median really rising by zero? Or is it rising by more? Are we understating the growth of real incomes? And it’s possible that we are, but that would also cause us to understate productivity growth and mean compensation growth. So in other words, that gap is real, even if you think the levels are more dramatic than they should be. So this is a really important phenomenon. And I think if you ask why are people so concerned about automation, if we know that automation or technological progress raises productivity raises national incomes? Well, this figure shows you a very good reason for concern, because the median person could correctly say, looking at the last four decades of economic history, I see that the country has gotten a lot richer, and yet the typical worker has really not gotten richer. And so it’s quite possible to have a lot of productivity growth without a lot of shared prosperity. And I think that is a huge concern.

Now, let me not just be US focused here, you can say, “well, what if I drew this graph for other countries?” Well, I don’t have a version of this graph for other countries. But it is the case that in most advanced economies, the median has not grown as fast as the average, has not kept pace with productivity reflecting the growth in inequality as well. But the US is an outlier, as in many things, in the degree of this gap. And if you look across countries, and the OECD does put together statistics on that, the US has done extraordinarily badly, in this respect, if you consider this gap a bad thing, as I do.

Okay, simultaneously as you’ll be aware, there’s been enormous growth of earnings differentials. You can see that a lot of that failure of median drives reflects the failure of incomes to rise for people without four-year college degrees. But what’s going on here? What are the causes? What is the cause of this juncture, if we had this much productivity growth, and yet, there’s so little compensation growth for the typical worker, what is going wrong?

I would say there are three different things that are going wrong. The first is technology itself. Although I don’t think technology has eliminated work per se, the digitalization of work has made highly educated workers much more productive and made less educated workers easier to replace with machinery. One way we see that is in this kind of barbell shape of occupational structure growth. So this figure, again using the US data, looks at these 11 occupations here, ranks them from low pay to high pay. And of course, these high paid jobs are our professional, technical, managerial jobs that require lots of education. They are obviously, at present, highly complemented by information technology. On the left hand side, we see a lot of these in-person service occupations, personal services including protective services, and these are numerically growing rapidly, but they’re not becoming much more productive. And the supply of people who do that work is highly abundant because they are not specialised. Those are growing. And many of these medium skill, occupations in production, administrative support, and sales are contracting as a share of employment. And I don’t think you have to introspect too deeply to see how that is related to computerization, not AI, really, that has made many of those codifiable tasks much more subject to automation. So technology itself has played a big role in changing the structure of occupations.

And one interesting way, you can also see this, and this goes back to the work with Autor, Salomons, and Seegmiller is to look at the growth of new work over time. So this figure shows you the addition of work between 1940 and 1950. And what I want to draw your attention to is that a lot of the fastest growing categories of occupations are actually getting new titles, new jobs, not just more people, but new types of work are found in the middle: construction, transportation, production, clerical administrative support, and sales work. If I plot that same figure, for 2000 to 2018, what you’d see is all of the new work that’s being added is found really at the tails. On the one hand, in these highly paid specialised occupations and the other, in personal services. And so the direction of technological change is actually shifted in a way that it is not just displacing things in the middle but actually creating new activities at the edges.

And let me just link that to one other phenomenon. In the work with Anna and Brian, look at the relationship of new work creation to innovation. And innovation is measured by patents. And we show that you can predict where new work is appearing according to where innovation is appearing. And if you look at this figure, it divides patents across the 20th century into broad industry categories. And, over time, this amazing shift in the first part of the 20th century, from around 1900 to 1930, the largest categories of patenting are in manufacturing and transportation, so that’s the dark blue and the kind of maroon initially below it. In the next several decades that moves to chemicals and electricity, so that’s the brown to purple. And then if you look at the last 40 years, the majority of patenting has been just in two categories, instruments and information, which is the mustard coloured, and electricity and electronics. And so as the locus of innovation has shifted, the locus of new work creation has also shifted. And that’s very strongly tied to the growth of occupations at the top.

So now, let me say, technology is not the only factor that matters. Globalisation has been a huge positive for world welfare, but has placed a lot of pressure on manufacturing jobs, and manufacturing-intensive communities. In the United States, for example, we have what some would call the China Trade Shock, where when China joined the World Trade Organisation in 2001, its import penetration to the United States just accelerated remarkably, and US manufacturing employment fell pretty steeply. And although that only amounts to a couple of million jobs, in a labour market of 150 million workers, it was very, very strongly regionally concentrated and strongly felt. So a second factor that contributes to the type of changes in work and the divergence between productivity incomes, is trade pressure, or the way trade pressure has been managed.

The third factor, and I would say, what makes the US distinctive from other countries, is institutions. Weakened labour unions, historically low minimum wages and outdated employment regulations have been extremely harmful to the rank and file workers, to the median worker. You can see this in a variety of ways. For example, this just shows you the purchasing power adjusted hourly earnings of low-educated workers in 2015, according to the OECD. The US is here at $10.33. If you want to do a little better, just head to Canada, a little bit to the north, where wages for similar work are about a third higher. If you want to go lower, you would have to go to Portugal or Greece or the Czech Republic, to find low paid workers who are paid as little – and that cannot be a function of skills or differences in jobs. You would find McDonald’s workers in all of these countries doing basically the same work and yet wages vary dramatically among them. And I view that, especially when we look across these high income countries, as a function of institutional choice rather than underlying technology.

You can also see this in terms of collective bargaining. The US is an outlier in having extremely low and falling collective bargaining. The UK actually has a lot in common with the US labour market and has seen large declines as well, and a big opening up at the bottom. And then throughout the OECD, collective bargaining has been falling, but to a much greater extent, in some countries and others. Germany is another great example, and Germany is also a country that has seen a very rapid, a very sizeable increase in inequality, from the 1990s, to the present. Finally, I think I’ve already mentioned this and you will all be aware of it, the US minimum wage is remarkably low, almost meaningless. It actually in real terms, depending on how you deflate it is the same level of present as it was around 1950. And despite mass productivity growth, now, US states have taken the lead on this, we’ll see what happens in the Biden administration that is being sworn in as we speak.

I want to make sure that I leave enough time. So I’m going to stop speaking within 12 minutes and I want to leave time especially for Anton’s questions. So I had a section prepared on “are we getting a positive return on inequality”? Are we getting anything out of this? Let me just sort of say, if you look across countries, you will not find a positive correlation between labour force participation rates, economic mobility, and economic growth, you will not find those things to be positively correlated with high levels of inequality or high levels of divergence between productivity levels and the wages. In other words an argument was frequently made in the 1980s, and 1990s, that you could take your inequality in one of two forms, you could you could either have dispersed wages or you could have low labour force participation rates at the bottom, but you couldn’t have both high wages at the bottom and high employment. Or another way of saying that is, if you believe in the equity-efficiency trade off, that if you want more equity, you have to give up efficiency. And so if you want to have a more, a more egalitarian social system, you’d have to give up higher growth, you’d have to give up dynamism, and so on. There’s really no evidence, at least on a correlational basis that those patterns are visible.

Just to give you a couple examples, and not spend a lot of time on them, if you look across countries it is certainly not the case that more unequal countries have higher employment rates, which you might hope they would if they have very low wages, the US is a good example of having relatively low employment rates, especially among men. Um, if you look at economic mobility, this is the famous graph by Miles Corak, which Alan Krueger called The Great Gatsby curve. Just looking at the relationship between cross-sectional inequality and intergenerational mobility, more unequal countries have lower not higher intergenerational mobility, you might have hoped that high inequality would create a lot of rags to riches, but in fact, it seems to create a lot of permanent social stratification.

In the interest of time, that’s the economic structure or foundation I wanted to lay out. And again, the main takeaway is that there has been an enormous divergence. It is not associated with falling employment, or the lack of creation of work. It is associated with divergence of incomes. And institutional factors, I believe, are at least as important in explaining the diversity of experience across countries as are differences in technology and globalisation. But so far I have not talked about technology at all.

So let me spend a few minutes on that. And I can summarise the conclusions of our report on this in really a sentence, which is that the momentous impacts of technological change are unfolding gradually. What I mean by that is that new technologies themselves are often astounding. But it can take decades from the birth of invention to its commercialization, its assimilation into business processes, standardisation, widespread adoption, and broader impacts on the workplace.

There is often a line – a kind of headline – that is drawn from a laboratory invention to mass displacement of labour. And we have rarely, if ever, seen that. I’m not aware of examples where we see that. And looking at the technologies we surveyed in our task force, we find a pattern consistent with this observation. We looked at autonomous vehicles, industrial robotics, intelligent supply chains, additive manufacturing, and artificial intelligence. In all cases, we came away saying they are remarkably important. Over the long run, they will do a great deal. In the short and medium run, they are often extremely limited. Autonomous vehicles are one example. In the long run, it’s estimated that autonomous vehicles will displace 1.3 to 2.3 million workers out of transportation jobs. This has strong regional implications. A lot of the people who drive for a living are located in the South, although they drive all over the country. And this has potentially important consequences. But as all of you who’ve been reading the news will be aware, this is happening much more slowly than was predicted five years ago. So for example, here’s a headline I really like, from the Washington Post: “Shaken by hype, self-driving leaders adopt new strategy: Shutting up.”

And why is that the case? Well, there are two reasons. One is, of course, the technology itself was overhyped. Autonomous vehicles are just not as competent or reliable as they were initially claimed to be, although I think they eventually will be – I don’t mean to say they’re not amazing, they are. And eventually they’ll be much safer drivers than people and many good things will come from that.

Second, adopting that technology at a very large scale doesn’t just mean people buying a new car, it means changing infrastructure. A lot of vehicle miles are driven by heavy machines, long haul trucks that have a service life of a couple of decades. And they work in a complex web of roads and warehouses and so on. And they will not overnight be replaced. Even if tomorrow someone introduced a truck with really great autonomous capabilities, it will take decades for the infrastructure to turn over. So that’s why it’s important to recognise that there’s an enormous gap between what is possible, and what is occurring at scale.

Another great example that we looked at – and I enjoyed learning about at the task force – was additive manufacturing. Additive manufacturing will be quite revolutionary. So these are just some examples of things that were additively manufactured. This is a very early example, the MIT dome just out of a plastic. This is a custom metal hip implant made for a patient. That’s an aircraft fuel nozzle. A faucet, you’ll notice the faucet is hollow, the water travels in those little channels on the side, or an orthodontic retainer.

Additive manufacturing is what some people call 3D printing, but really, additive manufacturing is probably a better term. Most manufacturing is subtractive, where you start with a raw piece of material and you remove parts of it until you get what you want. Sometimes it’s formative, where you put things in a mould. But additive is the idea that you put the pieces on layer by layer, and it has the potential to transform how products are developed and realized. It can eliminate the need for product-specific tooling. It can make highly complex parts. It can consolidate multiple materials in ways that were previously impossible. And it makes it possible to envision manufacturing as a mostly digital process, where the actual turning of materials into final objects occurs only at the very end of the supply chain, and most of manufacturing is in the design, engineering and prototyping, itself enabled by additive manufacturing. So this is a very big deal. But at the moment, it’s a very tiny part of the market. And it will take a very long time. In the long run, I do think it will reduce the employment of people making things and increase the employment of people designing things. So like many of these technologies, it will slowly devalue a lot of the physical skills and make the cognitive skills more consequential.

Just to summarise what I’ve said here: these AI and robotic applications take time, often decades, to develop and deploy, especially if you’re talking about safety- and production-critical applications. For example, it’s noteworthy that airplanes used to have three pilots 50 years ago; they’re down to two, but they’ve been at that for quite a long time, despite incredible advances in autonomy. The largest labour market effects of information technology that we’re seeing at present still stem from maturing technologies of two decades ago, like the internet, mobile computing, electronic health records and e-commerce. We can see lots of glimpses of what’s going on, but it’s going to take a long time to fully roll out. And this time window offers opportunity for investment, investment in skills in particular. Let me wrap up, because I want to leave time for discussion.

The main argument of our report is that institutional innovation must complement technological innovation. And in particular, we argue that if the wave of technology that we’re seeing now deploys into the institutions that we have in place, we will have bad results, as we have arguably had bad results for the last four decades. Those institutions have not done enough to translate rising productivity into anything like shared prosperity, and that has had real social and political costs. But that was not necessary. And we can see from the diversity of experiences across countries using the same technologies and facing the same forces of globalisation that they have done very differently, and not obviously at great cost; they haven’t given up a lot to get better results. So we talk about three places where innovation is most needed. One is, of course, investing and innovating in skills and training. Any economist woken up in his or her sleep would tell you that’s necessary. And that’s true. In the long run, human capital is critical. If we did not continue to raise our skills to keep pace with the technologies that demand those skills, we would have a problem. And we have done that successfully over a century, but must keep doing it. The second is really ensuring that productivity gains translate into better-quality jobs. And the third is expanding and shaping innovation itself. So I’ll say just a minute on each of these and then I will stop.

Instead of walking through this long laundry list – I know Katya will say more about this in her remarks – I will just say that we talk about a variety of ways of investing and innovating in skills and training at scale. And one positive development: you know, it’s boring to talk about education; education is one of the least sexy things you can do with technology. But it is the case that we are in a moment where the technology for education is suddenly much better, or at least potentially much better. Although we have not found the kind of secret sauce for making online learning in every way as good as in-person learning, in the long run it will be better. Not only will it be better, it will be cheaper, it will be more accessible, and it’ll be much more immersive, when people can use tools like augmented reality and virtual reality, and when they can do hands-on learning via simulation. When more of learning looks like video games and Netflix and so on, it will be much more appealing to many more people and more broadly used. Although it is hard to improve education, and it’s especially hard to retrain adults – lots of research demonstrates that – I think the technology for doing this is going to help us a great deal, both in terms of cost and even more so in terms of efficacy.

A second thing is really innovating institutions to go along with technological change. To use an example, in the US we would talk about modernising the unemployment insurance system, and we would talk about making sure health care provision was not intimately dependent upon employment. Certainly restoring the real value of the US federal minimum wage and indexing it to inflation: all evidence suggests that this has benefits at low cost. And then a lot of what we talk about in the report that I won’t talk about here is strengthening and adapting labour laws, both enforcing existing protections and allowing for innovation in worker representation. The US has a highly atrophied form of collective bargaining, but bringing it back at full strength in its current form is not necessarily desirable. It’s highly adversarial, and arguably not a good way to go about it. We talk about alternative models, but this is an area where experimentation is desperately needed. In addition, we need to actually build legal protections for workers to organise without retaliation in non-traditional realms. Currently, US domestic and home-care workers cannot legally organise, nor can farm workers, and the situation is unclear for independent contractors. So finding room for collective bargaining in an innovative way is critical.

Finally, we talk about expanding and shaping innovation itself. As you may be aware, the US has really slacked off in public investment in innovation. R&D in the US economy has stayed roughly stable as a share of GDP, but that’s because public-sector R&D has fallen enormously while private-sector R&D has risen. Now, you might say, well, isn’t that good enough? Aren’t those substitutes – if the public sector doesn’t do it, then the private sector does? Shouldn’t we be happy about that? And I would argue no: these things are complementary. The public sector does a different type of innovation, earlier in the pipeline and more focused on public goods, and it provides a lot of the fundamental science. So these things go together. In fact, there’s nice research by my colleagues, Pierre Azoulay and Danielle Li, showing that public-sector R&D investment leads to private-sector patents, among other things.

So how do we do that? Again, when I talk about policy, I’m constrained to speak about the US, or at least it’s hard to speak about many countries simultaneously. One is not only to increase federal R&D spending but to use it to set the agenda. We forget, or many people forget, how important the government has been not just in paying for things but in deciding what things should be paid for, whether that was the telecommunications revolution or NASA – at one point, our space agency was consuming more than 50% of all integrated circuits that were available in the United States. It’s the same with the internet, and DARPA led the initial catalytic moments of the self-driving car. So the federal government can set the agenda on innovation towards valuable things: towards education, towards things that increase worker productivity, and many others. A second is that innovation has a geographic element, and it can be used to bring prosperity to other places, not just to the coastal regions of the United States. And a third thing that we talk about in the report, which is quite controversial – here we’re building on the work of Daron Acemoglu, Andrea Manera and Pascual Restrepo – is rebalancing our tax system, which at the margin subsidizes firms to eliminate workers and replace them with machines, which is not something that we think is desirable. Capital investment in general is good, but at the margin it could be counterproductive, depending on where it goes.

So let me conclude my canned remarks and then open this up. I believe the work of the future is ours to invent. There is a palpable fear, and at least the way we perceive it – this is a hypothesis, not in fact proved – a lot of that fear comes from the combination of advancing innovation and fairly stagnant labour market opportunity. And that will get worse if we don’t take countermeasures. But we do not see a trade-off between economic growth and strong labour markets, as far as we can tell. These things are really either complementary or orthogonal; they are not at odds with one another. As I stressed at the beginning, the majority of today’s jobs had yet to be invented a century ago, and a lot more are to come. The job of the President, as we understand it, is to build the work of the future in a way, and for a world, that we all want to live in. That is not a technological inevitability; it’s a matter, to an important extent, of social choice.

Let me speak very briefly to the question that Anton asked me. There is a scenario, the “horses” scenario, but of course that has been with us for a long time. Many of the things we used to do – dig ditches by hand, pound tools out of wrought iron, do bookkeeping using books – we have automated ourselves out of all kinds of that work. If we were limited to the things that we did 100 years ago, with employment in agriculture, we would be in big trouble. We have continued to complement ourselves with the machines we create, and with this appearance of new work, which I’m arguing has been extremely important. Now, we don’t know that this will stay in balance. There’s a nice theory paper, again by Acemoglu and Restrepo, called The Race Between Man and Machine, that talks about the forces that potentially counterbalance this: what do you need in equilibrium for labour to stay valuable, and what do you need for innovations that complement labour? Those are obviously hard to check, but there’s no evidence yet that we’re seeing what people are worried about – vast displacement. What we’re seeing is some devaluation of skills; we’re not seeing an excess supply of labour per se.

I perceive the challenge for at least the next decade, maybe for the next few decades, as being distributional rather than pure human obsolescence. If we reach that point of pure human obsolescence, what we then have is a much bigger distributional problem. Because it’s not that we’re poor – at that point, we’re incredibly rich, labour is no longer scarce, and we own the machines; they don’t work for themselves. So we have a fabulously wealthy society with no obvious means of distributing income, in the sense that most of our income distribution systems are based on the scarcity of labour. Where labour is no longer scarce, we would have income without it really being clear who gets to make a claim on it. So I think that’s a huge social organisational problem, and I don’t look forward to that problem. I like the problem of scarce labour; I think it has many, many virtues. So I don’t see any early indications of that scenario, but I know that many will disagree with me. Let me stop there. And thank you very much for your attention – I look forward to the conversation that follows.

Anton Korinek:

Thank you so much, David, for your deep insights! I find it very impressive how you managed to distill this huge wealth of data into high-level insights that are both really intuitive and also so policy-relevant.

Let me invite everybody in the webinar to submit questions through the Q&A field and to upvote the questions that have already been submitted and that you find particularly interesting. So we have two discussants now, and after that we will open the floor to the questions that you are all posing.

Our first discussant is Katya Klinova. Katya directs the AI, Labor, and the Economy Research Programs at the Partnership on AI. She focuses on studying the mechanisms for steering AI progress towards greater equality of opportunity and improving the working conditions along the AI supply chain. And in the interest of full disclosure, I have had the pleasure of working with Katya as part of the AI and Shared Prosperity Initiative that is dedicated precisely to this cause of sharing the economic benefits of AI broadly across society.

Katya, the floor is yours.

Katya Klinova

Thank you very much, Anton. I want to start by thanking David and his colleagues from the task force for creating this encyclopaedia of a report. You know, in the last three to four years there has been a real flurry of future-of-work reports, and a number of them offer really valuable insights. But really few come even close to the comprehensiveness and level-headedness of this report. So really, my biggest problem with it is that it’s the final one from the task force. I wish they kept going, because it’s just been a really great service to the future-of-work community, shaping and directing this conversation. So I want to use my time to invite a conversation about implementing the report’s recommendations, which I think are really excellent. But we as a society, I guess, still have work to do figuring out how exactly to implement them and what guardrails need to be put in place. And because our time is limited, I picked two of my favourite ones, which I think are incredibly important. The first one is about allowing innovation in new forms of representation of workers in the workplace and in corporate decision-making. And the second one is about committing to an innovative agenda that is targeted towards augmenting rather than replacing workers. So let’s take these two in turn.

Firstly, about worker representation. Let me begin by showing you this graph that David already shared, which is slightly depressing: it shows a decline in union membership across the OECD, and the US has declined the most proportionally and has come to the lowest point of all the countries. I think it’s instructive to look at this chart back to back with the graphs that showed the forking of growth between the highest wage earners and the lowest wage earners in the OECD countries. I just want to draw your attention to the Nordics, which have not experienced the fork, and where growth has been much higher – around 60%, or almost 80% in Sweden. Of course, not all of that has to do with union participation alone. And yet I think it’s insightful to look at these graphs side by side and regain our appreciation for unions, and really appreciate that the report talks a lot about this and calls for innovation there.

So one thing that unions can do when it comes to technology – and this is, again, a story that the report tells as well – is what UNITE HERE did a few years ago: it was the first union that was able to put clauses around technological development into its union contracts. And now Marriott is obliged to give 165 days’ notice if it is about to bring automation to the workplace, and workers are entitled to retraining. Which is a great provision, but of course we know that technological disruption to workers’ welfare and lives can come not only from their employer bringing in and deploying technology; it can come from Silicon Valley. This is what happened in the hotel industry as well – the biggest players in the hotel industry are being blown out of the water by Airbnb. And of course it’s not only that industry: you can be putting clauses around technological adoption into contracts in brick-and-mortar retail stores, but the main disruption can come from the rise of e-commerce, for example. And that is not necessarily bad; disruption can be a sign of a very healthy and dynamic economy. It’s also not new or unique to the digital age. So it’s definitely not something new for unions to deal with.

And yet, if we are thinking of AI as a force that can dramatically expand the variety and range of human tasks that can be automated, then we may be entering an age in which, for most workers around the globe, some faraway Silicon Valley company can be as relevant – in terms of its ability to influence their well-being – as their own employer. And of course, that relationship is not covered by traditional union contracts, and cannot be covered, because Silicon Valley does not employ all of these people, neither directly nor indirectly. And that’s why, in addition to all the reasons that the report lists, such innovation is so badly needed in unions.

So you can be thinking about who can advocate for aggregate labour demand to stay high and for human labour to stay relevant. We know that that can take a hit in aggregate. These are graphs from a seminal paper by Professors Acemoglu and Restrepo that show that while automation kept pace with the creation of new tasks for humans in the four decades following World War Two, in the last three decades that really has changed, and automation is now outpacing new task creation by far. So you could, of course, think about the government and the (benevolent) policymaker as the one who would be thinking about this aggregate picture. And this is what brings us to the second recommendation, about shaping technology to augment rather than replace workers.

And the graphs that David already showed here – again, slightly depressing, or actually quite a bit depressing – that showed the erosion of both employment and earnings for the median worker, make us think about what the barriers are: why the median worker clearly doesn’t seem to be sharing in the productivity growth that the economy has been experiencing. Even if that productivity hasn’t been growing as quickly as previously in history, it has been growing, and that growth has not spread evenly across education levels and wage levels. Clearly, technologies of late have been hugely complementary to knowledge workers, but not necessarily to everyone else. So how can we commit to boosting the productivity of a typical worker, to boost the demand for their labour and their wages in turn? When we think about this productivity-boosting technology, the recent examples of it do raise concerns, because we need guardrails – which are not yet in place – between making workers more productive and really exploiting them. In an example I’m going to give you: a few years ago, technology was patented for wristbands for warehouse workers, which give them haptic feedback – they basically buzz if you’re putting an item into the wrong bin. And this just sounds like a great idea; it’s something that can tell workers about the mistake they’re making and overall raise their productivity. But of course, that same wristband can track every single movement of a worker and can tell their employer how many times they took a break, went to the bathroom, any details of that sort. And people are obviously raising concerns about that.

That’s not the only example. Now there are startups that literally advertise themselves by offering to tell apart efficient and inefficient workers and to build a map of who is doing what kind of activity, where and when. And that might sound wonderful if you’re an employer – I don’t know if it’s compelling to many of them – but if you are a worker, you might be quite concerned about that, because you are human and there are days on which you might need to take more breaks or fewer breaks. And there are obvious concerns around that information becoming fully transparent.

Really, the workplace and the labour market is a market in which employers have way more power than workers. And the little power that workers have comes from the information asymmetry in their principal-agent relationship with their employer. So when that information asymmetry goes away, all bets are off, in some sense. Then employers no longer need to worry about offering incentives, bonuses and carrots to try to induce higher performance; they can just limit their tools to sticks and punitive measures if someone doesn’t meet their sometimes really exaggerated performance thresholds. And then lastly, of course, there is a lot of uncertainty when we’re trying to anticipate the impact of new technology on labour demand. If you’re a central planner, or just a policymaker, thinking about how much you want to incentivize or disincentivize the development of a certain technology, there are a lot of uncertainties that you’re dealing with. Your calculation would look very different if you’re making it for a single country – especially a tech-maker country that might have an ageing workforce and might be in need of robots – than if you’re making the calculation for the world as a whole, in which millions of young people are entering the workforce every year in need of formal-sector jobs. And then again, the very same technology that can be used to automate the jobs of truck drivers can also be used to automate going to the grocery store, which in many countries is still done by households themselves; this is unpaid labour that would be converted into new jobs for people who pack those groceries and deliver them to people’s homes. So with the very same technology there are a lot of different ways to apply it, which creates additional uncertainty when we try to predict what the impact on labour demand would be. And that is just when we think about labour demand; there are of course all kinds of other effects that we might or might not want to encourage with self-driving cars – first of all, of course, the considerations of safety and saving lives on the road.

So I want to wrap up and give it back over to Anton, just by saying that we are trying to think about some of these questions. If you have advice for us or just want to get involved, please get in touch. My Twitter is here. And you can also sign up at the Shared Prosperity website, partnershiponai.org/shared-prosperity. Thank you very much.

Anton Korinek:

Thank you so much, Katya. Our second discussant is Ioana Marinescu. Ioana is an assistant professor at the School of Social Policy & Practice at the University of Pennsylvania, and a Faculty Research Fellow at the NBER. She studies the labor market to craft policies that can enhance employment, productivity, and economic security. Her research expertise includes wage determination and monopsony power, antitrust law for the labor market, the universal basic income, unemployment insurance, the minimum wage, and employment contracts.

Ioana Marinescu:

Hello, everybody. David, I really enjoyed your presentation. And it touched upon many policy issues that my work and my thinking have also been concerned with. So I want to make two points here. The first one is very much in the perspective of what David was talking about, but I want to point out for our audience the paradox, in a way, of institutions and technology. In the US, as David already mentioned, we have one of the lowest minimum wages among OECD countries. And you might think that if technology is skill-biased in favour of more skilled workers, and it’s going to put downward pressure on the wages of the less skilled, then in a free market it’s good to have a low minimum wage, because that will allow the economy to create more jobs. And so therefore the US should be in a better position to weather those issues compared to other countries, like the country I was raised in, France, where the minimum wage is much higher.

And paradoxically, that’s not been the case. Employment rates are higher for prime-age workers in France than in the US. And that’s also the case in many other OECD countries, which have higher minimum wages. So what gives? Why is it that by making workers more expensive we get higher employment, and certainly not lower employment? Here I want to introduce the idea that this is possible because in many cases workers are underpaid relative to their productivity, which we call monopsony power. By analogy with monopoly power in the product market, where firms with market power overcharge consumers relative to cost, in the labour market employers with market power will underpay workers relative to those workers’ productivity. And if that’s the case, then rather than decreasing employment – which would be the classic prediction of the basic model, where if you increase the price of something, the demand for it decreases – with monopsony power, as you increase the minimum wage, employment can in fact increase, since firms can afford to pay more, and by paying more they’re able to attract additional workers. So basically, if at baseline workers are underpaid, there is a margin to increase the minimum wage without destroying employment. And as we increase wages, we make jobs more attractive, which attracts more workers into these jobs. In fact, my work in the US shows that that’s exactly what happened across US states: where there was less competition for workers, increasing the minimum wage has tended to increase employment. So that somewhat resolves the paradox of the cross-country pattern, where countries with institutions that make labour more expensive seem to be doing better in the face of technological developments that go against the demand for labour, especially low-skilled labour.
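A minimal textbook-style sketch of the monopsony mechanism described above may help; the notation is illustrative and not taken from the talk:

Suppose an employer faces an upward-sloping labour supply curve $L(w)$ with $L'(w) > 0$, and each worker produces value $p$. The employer sets the wage to maximise

$$\pi(w) = (p - w)\,L(w),$$

whose first-order condition $(p - w^{*})\,L'(w^{*}) = L(w^{*})$ implies $w^{*} < p$: workers are paid less than their marginal product. A binding minimum wage $\bar{w}$ with $w^{*} < \bar{w} \le p$ leaves each hire still profitable, and because labour supply is increasing in the wage, employment rises to $L(\bar{w}) > L(w^{*})$. Only once $\bar{w}$ exceeds $p$ does the standard prediction of falling employment reassert itself.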

So how do we solve this? Of course, as David said, by increasing the minimum wage and by helping worker unionisation as well as other forms of worker bargaining power, because unionisation has been shown in recent research to be able to counteract firms’ monopsony power, and therefore to resist the pressure from firms to underpay workers. And also, of course, through law – and legislation in particular. Some of my work looks very much into antitrust law: if we already have issues with competition in the labour market, we want to act to prevent behaviour by firms that could further diminish competition, including things like mergers, by which firms become bigger and more powerful, as well as things like no-poaching agreements and wage-fixing agreements, where employers collude between themselves to keep wages low.

So that’s my first point, explaining for the audience here some of the reasons that I think are behind the paradox that countries with seemingly more expensive labour have often been doing better than the US when facing these technological changes. The second point I want to talk about is the universal basic income. As Anton said in his question to David, it’s true that in the past the labour market has been able to adapt, for all the mechanisms that David has explained so well. But obviously we cannot be sure of what will happen in the future, in 20 or 30 years; there is a huge amount of uncertainty. And it seems to me definitely possible that at least some workers will go the way of the horse, so that their labour is simply not valuable anymore in the face of new technologies. If that’s the case, then something like a universal basic income is one interesting policy innovation to think about. First of all, there’s already growing interest in something like a universal basic income, with what we’ve seen during this crisis with the stimulus checks, which have gone to almost everybody – up to 90% of US households – without any conditions, and which have allowed people to weather this crisis. More generally, in the US we have a social welfare system that is much less protective of people than in other OECD countries: there are a lot of holes, and a lot of the benefits you can only get if you work. Now imagine what would happen if technology were in fact massively killing jobs – and, as they would say, we’re not there yet. But if that were the case, then without a reform of our social protection system these people would no longer be able to make ends meet. So particularly in the US, something like a universal basic income, which is cash for all, no questions asked, could be quite an interesting solution.

Now, many will say this is not targeted, right? Because, at least in the pure system, everybody gets the same amount, no matter their circumstances and their income. But that feature is what really makes sure that nobody falls through the cracks. Everybody gets it by default; there’s no need to apply, or the application is really minimal. So that’s the big advantage. And I want to say that even though it seems untargeted, you have to add the financing side: if you have a basic income, then you need a conduit for financing it, which could be many things, including a carbon tax, sales taxes, additional income tax, a wealth tax – you name it, many possibilities. So with these extra taxes – almost any tax you can think of – rich people end up paying more, so that on net, even though everybody gets the same basic income, through the tax system rich people pay a lot more into this system relative to what they’re getting. And depending on your tax, you can make the scheme as progressive as you want, by using a more progressive tax.
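A stylised numerical sketch of that net-progressivity point; the figures below are hypothetical, chosen only to illustrate the mechanism, not taken from the discussion:

Suppose everyone receives a basic income $B = \$12{,}000$ per year, financed by a flat tax at rate $t = 20\%$ on income $y$. The net transfer is

$$B - t\,y,$$

which is positive whenever $y < B/t = \$60{,}000$ and negative above that: someone earning \$20,000 gains \$8,000 on net, while someone earning \$100,000 pays in \$8,000 on net. Even with identical gross payments, the combined scheme redistributes from high to low earners, and replacing the flat tax with a more progressive schedule makes the net redistribution stronger still.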

So therefore, if labour was going to go the way of the horse, I think that the universal basic income is one of the innovative ideas that’s very much worth thinking about in that context, and it already has many advantages today. So this is, you know, what I had to say for now, and I’m looking forward to the discussion.

Anton Korinek:

Let me perhaps also start with a follow-up on our discussion on human replacement, and that also weaves in two of the questions that were posed by members of the audience:

David, you observed that the main problem if we automate away all labor at some point in the future will be distributional. Ioana has spoken about one potential solution, the universal basic income. And you said it’s a distributional problem, but at that point, we will be incredibly wealthy, except of course, I suppose, if the automation mainly took the form of what Acemoglu and Restrepo call so-so technologies.

Now one interesting observation is that this distributional problem that we would be facing in that future, it’s in some ways just a continuation of the distributional problems between high and low skilled workers, that you have emphasised at the beginning of your talk. So there really is no fundamental difference between what we may face in the future, and what we have already been facing over the past decades, it’s just going to be more extreme, it’s a difference in degrees.

Now, let’s say that we may, indeed, in a couple of decades, be at that point where humans are no longer economically useful. Let’s say it is cheaper to employ robots and AI than to pay humans enough to buy the basic food and amenities that we need to live. So in that world, it’s not that there is no work; it’s just that your competitive wage, your marginal product, is worth very little – let’s say two cents an hour – while it costs you $1 a day to stay alive.

Now, there will, of course, be a transition between today and that future. And some may argue that we are already in that transition in the US, based on precisely your work on skill premia and the growing inequality in the labour market. But of course, many may also disagree. David, my question to you is: if we want to prepare for the possibility of that future, what would be concrete policy measures that make sense anyway and that would help us through the transition? And then let me also invite you to respond to the broader points that were brought up by Katya and Ioana.

David Autor:

Okay, there’s so much to talk about here. And I really appreciate the observations from Katya and Ioana. I agree that the policies are extremely hard to implement: how do we bargain over these things? What is the form of collective bargaining that is not too restrictive but doesn’t have too many loopholes at the same time? And then, how do we shape technology in the direction that we want? Those are both very important and very difficult questions to answer. And then, of course, I very much agree with what you were saying, Ioana: for too long we sort of assumed the labour market was competitive, as if there were perfectly competitive markets the way there are for toothbrushes or cereals. Of course there are not, and now people are re-examining that presupposition. But let me try to tie this together a bit.

You know, Anton, you’re talking about a future which I view as relatively distant. But I agree that it connects; you’re right. What we have done so far is make labour less scarce in some domains and more scarce in others; you’re talking about a future where labour is not scarce at all, where there’s no such thing as labour scarcity. I don’t think we are in that world at the moment; we’re in a world with extreme inequality in labour scarcity.

And so the policies that I’ve been advocating are all ones that do some form of redistribution, but not through post-market redistribution – taxes and transfers – rather through what you would call pre-market, or really within-market, redistribution, which is changing the quality of jobs. And I don’t think that skills alone – simply up-skilling – are going to be sufficient.

And so, you know, this is a point that Rodrik and Blanchard made at their very nice conference a year ago: look, there are three ways you can do this. You can do it on the supply side, by building better workers; you can do it through post-market taxes and transfers; or you can directly intervene to affect the quality of work. And I think directly intervening to affect the quality of work is the least exploited and the most direct.

And so how do we do that? One is, of course, through minimum wages. Another is through collective bargaining. And another is a focus on labour standards. And I agree that the technological headwinds are against us in a way that they were not before. If we talk about the post-war period up to the 1970s, it’s clear that new work creation was very much in the middle, and so technology was helping create the middle class, even as regulation and norms and so on were complementing that. I don’t think that’s occurring now. I think that technology is creating new work at the very top and some at the bottom, and so we have to push harder against it. But on the other hand, if you think about the figures that Katya brought up – productivity and economic growth and so on – countries have had remarkably different trajectories facing this same set of forces, and it doesn’t seem they paid a high price for that. In fact, I would say the US has paid a very high price for not stepping in; the price of not intervening has been much higher than the price of intervening would have been.

So, if you ask, well, how do we prepare for the future? One way is that we take action now on whatever manifestations of it we’re already seeing – basically start to build a social compact that invests in people, that cares about the quality of jobs, and that uses rising productivity to create rising prosperity. That’s the only way we’re going to be able to do this. If we wait until the day comes when all of a sudden Mark Zuckerberg owns everything, and then we all come after Mark Zuckerberg – nothing against him personally – that’s not going to be a good system of distribution. So we have to basically create that compact now, and it’s politically hard, right; it’s not economically hard, it’s politically hard.

And that’s the problem. It needs to be palatable in some way. So I’m not a big fan of UBI myself. First of all, I think it’s an answer, at the moment, to a problem we don’t have, which is the lack of work. I also think it’s not very politically palatable, at least in the United States: people want other people to work if they’re getting money; they don’t want to give money to people who they don’t perceive as working for that money in some way. Now, that could change, but the norms and perceptions of what is fair and reasonable will affect what types of tax policies you can make.

And I finally would say, and then I’m going to stop, I think work is an intrinsic good. The whole economic model that people do work – which causes disutility – to get income to afford consumption, which is the only thing they enjoy, it’s completely backward. Work is incredibly important, because it gives people identity, it gives them a structure, many people enjoy the tasks they do, it gives them social esteem, and a set of relationships, and a way of life. And so I would like to – and I think people prefer to get income for their work relative to having a lousy low paid job and getting a supplementary check. I don’t think most people find that as appealing. And so, in my mind, there’s still plenty of room for improving work, rather than preparing for its demise. And so as long as work is a viable system, as long as there’s a lot of it, I think there is a lot of it, then working on improving equality, such that we get better distribution through employment for people who are capable of working, I think is much more socially palatable, much more psychologically healthy, and moves us in the direction of a kind of social compact that is more robust. I’ll pause there.

Anton Korinek:

Thank you, David. Let me maybe give Katya and Ioana an opportunity to jump in here. I think Ioana, you looked like you were about to contribute something?

Ioana Marinescu:

Yes. So, you know, I don’t think that the two are contradictory; that was the sense of my remarks. First, I said let’s raise the minimum wage, improve collective bargaining, and so on and so forth. So I think there should be policies out there that improve the quality of jobs as well as workers’ position in this bargain. I don’t see these as substitutes.

But at the same time, I think that, as I mentioned before, the social protection system in the US is quite a bit less generous than in other countries, and so it’s worth thinking about how you could improve that. And, you know, it’s not like basic income is highly popular yet, but I think it’s becoming more popular. I think one of the things it has going for it is the universal part, which means that it can no longer be perceived as a handout for someone – why are they getting it, and I’m not getting it? We’re all getting it anyway. So that potentially changes the perception of it, and therefore can expand a little bit what is politically possible in the budget: if people really want this, they might be willing to spend what it takes to get something like that. And then the final quick point I wanted to make is that, yes, work is very important to many people, and it’s not just about the disutility of work. But it doesn’t necessarily always have to be classic paid market work. First of all, as I said, we should make more good, classic market work available. But what something like UBI enables is for people to have the opportunity to do other types of non-market work – caring, volunteering, artistic activities, whatever else they want to do that is non-remunerated. Having something like a basic income could enable people economically to engage in those sorts of activities that are work-like but not necessarily remunerated by the market.

Anton Korinek:

Thank you, Ioana. Katya?

Katya Klinova

Yeah, I think, Anton, your question is very important: is there a discontinuity between the distribution problem that we’re facing now and the distribution problem we might be facing if labour is not scarce anymore? And I am with David all the way in that I don’t want to face that problem. It seems like a qualitatively different problem, because it’s a much more fragile setup than the one that we’re facing right now. Even if, for example, UBI is providing enough for people to live on and pursue their artistic and other interests – which I would be all in for – it does rely on whoever possesses all these productive forces being willing, generation after generation, to continue to redistribute, while they don’t really need the people to keep producing, and the people don’t have the political power. So this political setup is what I worry the most about in that hypothetical scenario that you laid out for us.

David Autor:

It would be like the resource curse, right, only for everything. We know that countries that have basically one source of income tend to be terribly governed, because it’s so easy to monopolise that source, whether it’s oil or diamonds or something; here it would be the machines that we would worry about.

Anton Korinek:

Thank you, Katya. And also, David, let me bring up one more question from the Q&A that relates to education, and let me broaden it a bit. David, what types of education would you advocate the most? And how much should the public sector be involved versus the private sector? And let me perhaps also ask: is there a limit to the human capacity to be educated? Let’s say I’ve worked very hard to get a PhD degree – how much more will I have to educate myself to still be relevant in the labour market three decades from now? Will I be mentally able to process that?

David Autor:

Yeah, excellent question. So I really think the fundamental needs of education have not changed. It’s not about learning specific skills. It’s about being able to read, being able to think logically, analytically and quantitatively – and that doesn’t just mean math, it means analytically – being able to present and communicate (people’s writing skills have actually gotten worse over time, by the way), to communicate with a group, and to lead and work in a team, and so on. These things are incredibly foundational. Now, then you say, well, what will people do for work beyond that? I don’t know, but I think it’s very likely that the sort of realm in which humans will maintain competitive advantage is in things that continue to require flexibility and interaction with others, but draw on a base of expertise that interacts with technology. You can’t make a living just being empathetic. But you also can’t make a living just adding up columns of numbers. One of those is not scarce because of human capacity, and the other is not scarce because of machine capacity. You need to be in the place where those things are complements, not substitutes.

And now, on the finite capacity of humans to learn: certainly there’s got to be a limit, but it’s not clear how close we are to it. During the high school movement at the turn of the 20th century – the end of the 19th century – there was this concern about sending kids to high school. One, isn’t that expensive, with all these teachers, all these books? Two, the opportunity costs are really high; they can’t work on the farm. But three, is it really reasonable to think that all these “cretins” could actually achieve that level of education? A high school diploma was considered elite, and there was this belief that we had already hit the capacity of most people – in that era, the heyday of eugenics, we supposedly knew who wasn’t going to be able to do that.

And so, you know, we’ve gone a lot beyond that. More than that, we’re getting more efficient at learning, I would argue. So I do think we’ll eventually hit a limit, but one way we deal with that limit is we specialise. There are hundreds of types of doctors now, whereas a century ago there were a dozen. They’re more specialised because the technology has deepened and expertise has deepened, but people’s capacity for expertise is finite, and so they specialise. You see this in our field as well. There was a time when someone like Paul Samuelson could do all of economics. Now only Daron Acemoglu does all of economics; everyone else has to specialise. And so it’s possible that we will specialise and remain complementary to the tools we create through this specialisation.

Anton Korinek:

Katya, Ioana, would you like to add anything?

Katya Klinova:

I am excited about the potential of AI to facilitate and make more scalable teaching at the right level. I’m sure my fellow panellists have never been in this situation, but I’ve definitely sat in a class where I was not able to follow as well as some of my classmates, and there wasn’t a system that could catch me up on what I was missing out on. I think such a system could really improve the quality of education and scale it to a lot of the countries where it’s scarce.

Ioana Marinescu:

Actually, I did have a comment about Katya’s potential political dystopia. Of course, this is a bit distant – I agree with David that even if all this is a threat, it’s quite a few decades away – but I think it also speaks to thinking about modes of property: who owns what, and how. I’m very committed to a market economy, but that doesn’t necessarily mean we have to have a few people owning these machines. So trying to think about a potential transition from here to there – perhaps through wealth taxes, which over time, if they’re high enough, would draw down wealth inequality – things like that, I think, are very much worth thinking about in connection with technology, if we think it’s a serious concern that in the longer run a few people will own this productive capacity for everything, which indeed, I think, is extremely politically dangerous for society.

Anton Korinek:

We are already towards the end of our webinar. David, would you like to make any concluding statement before we wrap it up?

David Autor:

Well, first of all, thank you. Thanks to all of you. It’s been a great conversation; it could go on for hours and it was a lot of fun, at least for us – I don’t know about the audience. I guess I’ll pick up on something Katya said, which is this question: is it a discontinuity or is it a continuity? I think that’s a really important question. I view the singularity view, which has been around for a while, as not realistic – the notion that there just comes a day, a crossing point, and boom. I actually think we hit diminishing returns on most things, not increasing returns. But I do think it’s useful to say, well, maybe we are seeing some manifestations of that. And if that’s true, in some sense that’s good, because it means there’s a transition path that we can work with, as opposed to it arriving on Monday – the last thing we want is for the whole economic system to collapse on Monday. Much better that this occurs over a long period of time. So I think it’s a really important question to ask, and a very focal question for thinking about AI and the future of humanity: is this going to be a big bang, or is it going to be a continuation that creates these specific challenges? Maybe it’s more the latter, and hopefully we can shape that as well. And I think that’s probably something we overlook: not only our capacity to shape where the technology goes, but the degree to which we have already done that. The world in which we live is the one we created, not the one that technology created. And in many ways, we did it intentionally.

Anton Korinek:

Thank you, David, for that uplifting conclusion that I hope we can all strongly agree with. And thank you all for your contributions. I also hope that the new administration that has been sworn in while we were having this webinar will listen carefully to everything that we have learned from David, Katya, and Ioana during this webinar, and I hope to welcome you all soon to the next GovAI webinar. Thank you.




22May

Audrey Tang and Hélène Landemore on Taiwan’s Digital Democracy, Collaborative Civic Technologies, and Beneficial Information Flows


The webinar conversation involved the following participants: 

Audrey Tang is Taiwan’s Digital Minister in charge of social innovation, open governance, and youth engagement. They are Taiwan’s first transgender cabinet member and became the youngest minister in the country’s history at the age of 35. Tang is known for civic hacking and strengthening democracy using technology. They served on the Taiwanese National Development Council’s Open Data Committee and are an active contributor to g0v, a community focused on creating tools for civil society. Audrey plays a key role in combating foreign disinformation campaigns and in formulating Taiwan’s COVID-19 response.

Hélène Landemore is an Associate Professor of Political Science at Yale University. Her research and teaching interests include democratic theory, political epistemology, theories of justice, the philosophy of social sciences (particularly economics), constitutional processes and theories, and workplace democracy.

Ben Garfinkel is a Research Fellow at the Future of Humanity Institute. His research interests include the security and privacy implications of artificial intelligence, the causes of interstate war, and the methodological challenge of forecasting and reducing technological risks.

You can watch a recording of the event here.




21May

‘Economics of AI’ Open Online Course


GovAI’s Economics and AI Lead, Anton Korinek, recently released an open online course on the Economics of AI. The course introduces participants to cutting-edge research in the economics of transformative AI, including its implications for growth, labour markets, inequality, and AI control.

The course is free, supported by a grant from the Long-Term Future Fund. The structure involves six distinct modules, the first of which is accessible to anyone with a social science background, while the other modules are aimed at people with an economics background at the graduate or advanced undergraduate level. 

The course proceeds to analyse how modes of production and technological change are affected by AI, investigate how technological change drives aggregate economic growth, and examine how AI-driven technological change will impact labour markets and workers. The course closes by looking at several key questions for the future in AI governance: How can progress in AI be steered in a direction that benefits humanity, and what lessons does economics offer for how humans can control highly intelligent AI algorithms?




21May

Joseph Stiglitz & Anton Korinek on AI and Inequality


Joseph Stiglitz is University Professor at Columbia University. He is also the co-chair of the High-Level Expert Group on the Measurement of Economic Performance and Social Progress at the OECD, and the Chief Economist of the Roosevelt Institute. A recipient of the Nobel Memorial Prize in Economic Sciences (2001) and the John Bates Clark Medal (1979), he is a former senior vice president and chief economist of the World Bank and a former member and chairman of the US President’s Council of Economic Advisers. Known for his pioneering work on asymmetric information, Stiglitz’s research focuses on income distribution, risk, corporate governance, public policy, macroeconomics and globalization.

Anton Korinek is an Associate Professor at the University of Virginia, Department of Economics and Darden School of Business as well as a Research Associate at the NBER, a Research Fellow at the CEPR and a Research Affiliate at the AI Governance Research Group. His areas of expertise include macroeconomics, international finance, and inequality. His most recent research investigates the effects of progress in automation and artificial intelligence for macroeconomic dynamics and inequality.

You can watch a recording of the event here or read the transcript below:

Joslyn Barnhart [0:00]

Welcome, I’m Joslyn Barnhart, a Visiting Senior Research Fellow at the Centre for the Governance of AI (GovAI), which is organizing this series. We are part of the Future of Humanity Institute at the University of Oxford. We research the opportunities and challenges brought by advances in AI and related technologies, so as to advise policy to maximise the benefits and minimise the risks from advanced AI. Governance, this key term in our name, refers [not only] descriptively to the ways that decisions are made about the development and deployment of AI, but also to the normative aspiration that those decisions emerge from institutions that are effective, equitable, and legitimate. If you want to learn more about our work, you can go to http://www.governance.ai.

I’m delighted today to introduce our conversation featuring Joseph Stiglitz in discussion with Anton Korinek. Professor Joseph Stiglitz is University Professor at Columbia University. He’s also the co-chair of the high-level expert group on the measurement of economic performance and social progress at the OECD and the chief economist of the Roosevelt Institute, and he is a recipient of the Nobel Memorial Prize in Economic Sciences in 2001 and of the John Bates Clark Medal in 1979. He is a former senior vice president and chief economist of the World Bank, and a former member and chairman of the US President’s Council of Economic Advisers. Known for his pioneering work on asymmetric information, Professor Stiglitz focuses his research on income distribution, risk, corporate governance, public policy, macroeconomics and globalisation.

Professor Korinek is an associate professor at the University of Virginia, Department of Economics and Darden School of Business, as well as a research associate at the NBER, research fellow at the CEPR, and a research affiliate at the Centre for the Governance of AI. His areas of expertise include macroeconomics, international finance, and inequality. His most recent research investigates the effects of progress in automation and artificial intelligence [on] macroeconomic dynamics and inequality.

Over the next decades, AI will dramatically change the economic landscape and may also magnify inequality both within and across countries. Anton and Joe will be discussing the relationship between technology and inequality, the potential impact of AI on the global economy, and the economic policy and governance challenges that may arise in an age of transformative AI. We will aim for a conversational format between Professor Korinek and Professor Stiglitz. I also want to encourage all audience members to type your questions using the box below. We can’t promise that [your questions] will be answered but we will see them and try to integrate them into the conversation. With that, Anton and Joe, we look forward to learning from you and the floor is yours.

Anton Korinek [3:09]

Thank you so much, Joslyn, for the kind introduction. Inequality has been growing for decades now and has been further exacerbated by the K-shaped recovery from COVID-19. In some ways, this has catapulted the question of how we can engineer a fairer economy and society to the top of the policy agenda all around the world. As Joslyn has emphasised, what is of particular concern for us at the Centre for the Governance of AI is that modern technologies, and to a growing extent artificial intelligence, are often said to play a central role in increasing inequality. There are concerns that future advances in AI may in fact further turbo-charge inequality.

I’m extremely pleased and honoured that Joe Stiglitz is joining us for today’s GovAI webinar to discuss AI and inequality with us. Joe has made some of the most pathbreaking contributions to economics in the 20th century. As we have already heard, his work was recognised by the Nobel Prize in Economics in 2001. I should say that he has also been the formative intellectual force behind my education as an economist. What I have always really admired in Joe — and I still admire every time we interact — is that he combines a razor-sharp intellect with a big heart, and that he is always optimistic about the ability of ideas to improve the world.

We will start this webinar with a broader conversation on emerging technologies and inequality. Over the course of the webinar, we will move more and more towards AI and ultimately the potential for transformative AI to reshape our economy and our society.

Let me welcome you again, Joe. Let’s start with the following question: Can you explain what we mean by inequality? What are the dimensions of inequality that we should be most concerned about?

Joseph Stiglitz [5:33]

[Inequality is the] disparities in the circumstances of individuals. One is always going to have some disparities, but not of the magnitude and not of the multiplicity of dimensions [that we see today]. When economists talk about inequality, they first talk about inequalities of income, wealth, labour income, and other sources of income. [These inequalities] have grown enormously over the last 40 years. In the mid-1950s, Simon Kuznets, a great economist who got a Nobel Prize, had thought that in the early stages of development, inequality would increase, but then [in later stages of development], inequality would decrease. And the historical record was not inconsistent with that [model] at the time he was writing. But then beginning in the mid-1970s and the beginning of the 1980s, [inequality] started to soar. [Inequality] has continued to increase until today, and the pandemic’s K-shaped recovery has exposed and exacerbated this inequality.

Now beyond that, there are many other dimensions of inequality, like access to health[care], especially in countries like the United States [without] a national health service. As a result, the US has the largest disparities in health among advanced countries and, even before 2019, had seen a decline in average life expectancy and overall health standards. There are disparities in access to justice and other dimensions that make for a decent life. One of the concerns that has been highlighted in the last year is the extent to which those disparities are associated with race and gender. That has given rise to the huge [movement], “Black Lives Matter.” [This movement] has reminded us of things that we knew, but were not always conscious of, [including] the tremendous inequalities across different groups in our society.

Anton Korinek [8:23]

Thank you. Can you tell us about what motivated you personally to dedicate so much of your work to inequality in recent decades? I’ve heard you speak of your experience growing up in Gary, Indiana. I have heard a lot about your role as a policymaker, as a chair of the President’s Council of Economic Advisors, and as a chief economist of the World Bank in the 1990s. How has all of this shaped your thinking on inequality?

Joseph Stiglitz [8:55]

I grew up, as you said, in Gary, Indiana, which was emblematic of industrial America, though of course I didn’t realise that as I was growing up. [In Gary], I looked at my surroundings, and I saw enormous inequalities in income and across races; [I saw] discrimination. That was really hard to reconcile with what I was being taught about the American Dream: that everybody has the same opportunity and that all people are created equal. All those things that we were told about America, which I believed on one level, seemed inconsistent with what [I saw].

That was why I had planned [to study economics]. Maybe it seems strange, but I had wanted to be a theoretical physicist. [But with all] the problems that I had seen growing up around inequality, suddenly, at the end of my third year in college, I wanted to devote my life to understanding and doing something about inequality. I entered economics with that very much on my mind, and I wrote my thesis on inequality. But life takes its turn, [so I spent] much of the time [from then until] about 10 years ago on issues of imperfect information and imperfect markets. This was related, in some sense, to inequalities because the inequalities in access to information were very much at the core of some of the inequalities in our society. [For example,] inequalities in education played a very important role in the perpetuation of inequalities. So, the two were not [part of] a totally disparate agenda.

From the very beginning, I also spent a lot of time thinking about development, which interacted with my other work on theoretical economics. It may seem strange, but I did go to Africa in 1969: [I went to] Kenya not long after it got its independence. I’m almost proud to say that some people in Africa claim me to be the first African Nobel Prize winner: [Africa] had such an important role in shaping my own research. That strand of thinking about inequality between the developing countries and the developed countries was also very important [to my understanding of inequality].

Finally, to answer your question, when I was in the Clinton administration, we had a lot of, you might say, fights about inequality. Everybody was concerned about inequality, but some were more concerned than others. Some wanted to put it at the top of the agenda, and [others] said, “We should worry about it, but we don’t have the money to deal with it.” It was a question of prioritisation. On one side Bob Reich, who was the Secretary of Labour, and I were very much concerned about this inequality. We were concerned about corporate welfare: giving benefits to rich corporations meant that we had less money to help those who really needed it. Our war against corporate welfare actually led to huge internal conflicts between us and some of the more corporatist or financial members of the Clinton team.

Anton Korinek [13:53]

That brings us perhaps directly to a more philosophical question. What would you say is the ethical case for [being concerned with] inequality? In particular, why should we care about inequality in itself and not just about absolute levels of income and wealth?

Joseph Stiglitz [14:17]

The latter [question] you can answer more easily from an economic point of view. There is now a considerable body of theory and empirical evidence that societies that are marked by large disparities and large inequalities behave differently and overall perform more poorly than societies with fewer inequalities. Your own work has highlighted the term “macroeconomic externalities,” which [describes when a system’s functioning] is adversely affected by the presence of inequality. An example, for instance, is that when there are a lot of inequalities, those at the bottom engage in “keeping up with the Joneses,” as we say, and that leads them to be more in debt. That higher level of debt introduces a kind of financial fragility to the economy which makes it more prone to economic downturns.

There are a number of other channels through which economic inequality adversely affects macroeconomic performance. The argument can be made that even those at the top can be worse off if there’s too much inequality. I reflected this view in my book, the Price of Inequality, where I said that our society and our economy pay a high price for inequality. This view has moved into the mainstream, which is why the IMF has put concerns about inequality [at] the fore of their agenda. And as Strauss-Kahn, who was the Managing Director of the IMF at the time said, [inequality] is an issue of concern to the IMF because the IMF is concerned about macroeconomic stability and growth, and the evidence is overwhelming that [inequality] does affect macroeconomic performance and growth.

[There is a] moral issue which economists are perhaps less well-qualified to talk about rigorously. Economists and philosophers have used utilitarian models and equality-preferring social welfare functions. [These models build on] a whole literature of [philosophy], of which Rawls is an example. [Rawls] provides a philosophical basis [for] why, behind the veil of ignorance, you would prefer to be born into a society with greater equality.

Anton Korinek [17:40]

So that means there is both a moral and an economic efficiency reason to engage in measures that mitigate inequality. Now, this brings us to a broader debate: what are the drivers of inequality? Is inequality driven by technology, or by institutions [and] policies, broadly defined? There is a neoclassical caricature of the free market as the natural state of the world. In this caricatured description of the world, everything is driven by technology, and technology may naturally give rise to inequality, and everything we would do [to mitigate inequality] would be bad for economic efficiency. Can you explain the interplay of technology and institutions more broadly and tell us what is wrong with this caricature?

Joseph Stiglitz [18:46]

[To put it in another way:] is inequality the result of the laws of nature, or the laws of man? And I’m very much of the view that [inequality] is a result, overwhelmingly, of the laws of men and our institutions. One way of thinking about this, which I think [provides] compelling evidence for my perspective, is that the laws of nature are universal: globalization and [technological advancement] apply to every country. Yet, in different countries, we see markedly different levels of inequality in market incomes and even more so in after-tax and transfer incomes.

It is clear that countries that should be relatively similar have been shaped in different ways by the laws. What are some of those laws? Well, some of them are pretty obvious. If you have labour laws that undermine the ability of workers to engage in collective bargaining, workers are going to get shortchanged; they’re not going to be treated well. You see that in the United States: one of the main [reasons for] the weakening of the share of labour in the United States is, I believe, the weakening of labour laws and the power to unionise.

At the other extreme, more corporate market power [allows companies to raise prices, which] is equivalent to lowering wages, because [people] care about what [they] can purchase. The proceeds of [higher prices] go to those who own the monopolies, who are disproportionately those at the top. During Covid-19 we saw Jeff Bezos do a fantastic job of making billions of dollars while the bottom 40% of Americans suffered a great deal. The laws governing antitrust competition policy are critical.

But actually, a host of other details and institutional arrangements that we sometimes don’t notice [drive inequality]. United States [policy] illustrates that we do things so much worse than other countries. Bankruptcy laws, which deal with what happens if a debtor can’t pay [back] all of the money, give first priority [to banks]. In the United States, the first claimant is the banks, who sell derivatives – those risky products that led to the financial crisis of 2008. On the other hand, if you borrow money to get ahead in life or to finance education, you cannot discharge your debt. So students are at the bottom, and banks are at the top. That’s another example [of how laws drive inequality].

Corporate governance laws that give CEOs enormous scope to set their salaries in any way they want result, in the United States, in CEOs getting 300 times the compensation of average workers. That’s another example [of how laws create inequality].

But there are a whole host of things that we often don’t even think of as institutions, but [they] really are. When we make public investments in infrastructure, do we provide for public transportation systems, which are very important for poor people? When we have public transportation systems, do we connect poor people with jobs? In Washington D.C. they made a deliberate effort not to do that. When we’re running monetary policy, are we focusing on making sure that we are [as] close to full employment as possible, which increases workers’ bargaining power? Or do we focus on inflation, which might be bad for bondholders?

Monetary policy, in the aftermath of the 2008 crisis, led to unprecedented wealth inequality, but didn’t succeed very well in creating jobs. 91% of the gains that occurred in the first three years of that recovery went to the top 1% in the United States. So, [inequality stems from] an amalgam of an enormous number of decisions.

Now, even when [considering] the issue of technology, we forget that [it is] man-made to a large extent — [it is] not like the laws of quantum mechanics! Technology [itself], and where we direct our attention [within technology], is man-made, and the extent to which we make access to technology available to all is our decision. Whether we steer technology to save the planet, or to save unskilled jobs, we can determine whether we’re going to have a high level of unemployment of low-skilled people or whether we’re going to have a healthier planet. [We witnessed] fantastic success in quickly developing COVID-19 vaccines. But now the big debate is, should those vaccines be available only to rich countries? Or should we waive the intellectual property rights in order to allow poor countries to produce these vaccines? That’s an issue being discussed right now at the WTO. Unfortunately, although a hundred countries want a waiver, the US and a few European countries say “no”. We put the profits of our drug companies over [peoples’] lives, not only over [lives] in developing countries, but possibly over the lives of people in our own country. As long as the disease rages [in developing countries], a mutation may come that is vaccine resistant, and our own lives are at risk. It’s very clear that this is a battle between institutions, and that right now, unfortunately, drug companies are winning.

Anton Korinek [26:04]

It’s a battle of institutions within the realm of a new technology.

If we now turn to another new technology, AI, you hear a lot of concern about AI increasing inequality. What are the potential channels that you see that we should be concerned about? To what extent could AI be different from other new technologies when it comes to [AI’s] impact on inequality?

Joseph Stiglitz [26:36]

AI is often lumped together with other kinds of innovations. People look historically, and they say “Look, innovations are always going to be disturbing, but over the long run, ordinary people gain.” [For example,] the makers of buggy whips lost out when automobiles came along, but the number of new jobs created in auto repair far exceeded the old jobs, and overall, workers were better off. In fact, [automobiles] created the wonderful middle class era of the mid-20th century.

I think this time may be different. There’s every reason to believe that it is different. First, these new technologies are labour replacing and labour saving, rather than increasing the productivity of labour. And so [these technologies are] substituting for labour, which drives down wages. There’s no a priori theory that says that an innovation [must] be of one form or the other. Historically, [innovations] were labour augmenting and labour enhancing; [historically, innovations] were intelligence-assisting innovations, rather than labour-replacing. But the evidence now is that [new innovations] may be more labour replacing. Secondly, the new technologies have a winner-take-all characteristic associated with them: [these new technologies] have augmented the potential of monopoly power. Both characteristics mean there will be a less competitive market and greater inequality resulting from this increased market power, and almost everybody may lose.

In the case of developing countries, the problems are even more severe for two reasons. The first [reason] is that the strategy that has worked so well to close the gap between developing and developed countries, which was manufacturing export-led growth, may be coming to an end. Globally, employment in manufacturing is declining. Even if all the jobs in manufacturing shifted, say, from China to Africa, [this shift] would [hardly] increase the labour force in Africa. I and some others have been trying to understand: why was manufacturing export-led growth so successful? And what [strategies] can African countries [employ] today if [manufacturing export-led growth] doesn’t work? Are there other strategies that will [be effective]? The conclusion is that there are other things that work, but they’re going to be much more difficult [to implement]. And there won’t likely be the kind of success that East Asia had beginning 50 years ago.

The second point [concerns] inequalities that occur within our country [as a result of AI]. [For example,] when Jeff Bezos [becomes] richer or Bill Gates [becomes] richer, we always have the potential to tax these gainers and redistribute some of their gains to the losers. The result, [which] you and I wrote about in one of our papers, shows that in a wide class of cases we can make sure that everybody could be better off [via redistributive taxation]. While [implementation is] a matter of politics, at least in principle, everybody could be made better off. However, [AI] innovations across countries [drive down] the value of unskilled labour and certain natural resources, which are the main assets of many developing countries. [Therefore, developing countries are] going to be worse off. Our international arrangements for redistribution are [very limited]. In fact, our trade agreements, our tax provisions, and our international [arrangements] work to the disadvantage of developing countries. We don’t have the instruments to engage in redistribution, and the current instruments actually disfavour developing countries.

Anton Korinek [32:44]

Let me turn to a longer-term question now. Many technologists predict that AI will have the potential to be really transformative if it reaches the ability to perform substantially everything that human workers can do. This [degree of capacity] is sometimes labelled as “transformative AI,” though people have also [described] closely-related concepts like Artificial General Intelligence and human-level machine intelligence. There are quite a few AI experts who predict that such transformative advances in AI may happen within the next few decades. This could lead to a revolution that is of similar magnitude to or greater magnitude than the agrarian or industrial revolution, which could make all human labour redundant. This would make human labour, in economic speak, a “dominated technology.”

[When we consider inequality,] the dilemma is that in our present world labour is the main source of income. Are you willing to speculate, as a social scientist, and not as a technologist, [about] the likelihood and timeframe of transformative AI happening? What do you see as the main reasons why it may not be happening soon? [Alternatively,] what would be the main arguments in favour of transformative AI happening soon? And how should we think about the potential impacts of transformative AI, from your perspective?

Joseph Stiglitz [34:36]

There is a famous quip by Yogi Berra, who is viewed as one of the great thinkers in America. I’m not sure everybody in the UK knows about him. He was a famous baseball player who had simple perspectives on life and one of them was “forecasting is really difficult, especially about the future.”

The point is that we don’t know. But we certainly could contemplate this happening, and we ought to think about that possibility. So as social scientists, we ought to be thinking about all the possible contingencies, but obviously devote more of our work to those [scenarios] that are going to be most stressful for our society. Now, you don’t think that people should train to be a doctor to deal just with colds. You want your doctor to be able to respond to serious maladies.  I don’t want to call [transformative AI] a malady – it could be a great thing. But it would certainly be a transformative moment that would put very large stresses on our economic, social [and] political system.

The important point is that […] these advances in technologies make our society as a whole wealthier. These [advances] move out what we could do, and in principle, everyone could be made better off. So the question is: can we undertake the social, economic, [and] political arrangements to ensure that everyone, or at least a vast majority, will be made better off [by advances in AI]? When we engage in this sort of speculative reasoning, one could also imagine [a world in which] a few people [are] controlling these technologies, and that our society [may be] entering into a new era of unprecedented inequality – with a few people having all the wealth, and everybody else just struggling to get along and [effectively] becoming serfs. This would be a new kind of serfdom, a 21st century or 22nd century serfdom that is different from that of 13th and 12th century [serfdom]. For the vast majority [of people, this serfdom would not be] a good thing.

Anton Korinek [37:59]

For the sake of argument, let’s take it as a given that this type of transformative AI will arrive by, say, 2100. What would you expect to be the effects of [transformative AI] on economic growth, on the labour share, and in particular, on inequality? What would be the [impact] on inequality in non-pecuniary, non-monetary terms?

Joseph Stiglitz [38:36]

The effect [of transformative AI] on inequality, income, wealth, and monetary aspects will depend critically on the institutions that we described earlier in two key [ways]. If we move beyond hoarding knowledge via patents and other means, and gain wide[spread] and meaningful access to intellectual property, then competition can lower prices and the benefits of [transformative AI] can be widely shared.

This was what we experienced in the 19th and 20th century [during the Industrial Revolution]. Eventually, when competition got ideas out into the marketplace, profits eroded. While the earlier years of the [Industrial] Revolution were not great for ordinary workers, eventually, [ordinary workers] did benefit and competition [served to ensure] that the benefit of the technological advances were widely shared. There is a concern about whether our legal and institutional framework can ensure that that will happen with artificial intelligence. That’s one aspect of our institutional structure.

Even if we fail to do the right thing in that area, we have another set of instruments, which are redistributive taxes. We could tax multibillionaires like Jeff Bezos or Bill Gates. From the point of view of incentives, most economists would agree that if multibillionaires were rewarded with 16 billion dollars, rather than 160 billion [dollars], they would probably still work hard. They probably wouldn’t say “I’m going to take my marbles and not play with you anymore.” They are creative people who want to be at the top, but you can be at the top with 16 [billion dollars], rather than 160 billion [dollars]. You take that [extra tax revenue] and use it [for] more shared prosperity. Then, obviously, the nature of our society would be markedly different.

If we think more broadly, right now, President Biden is talking a lot about the “caring economy.” Jobs are being created in education, health, care for the aged, [and] care for the sick. Wages in those jobs are relatively low, because of the legacy of discrimination against women and people of colour who have [worked] in these areas. Our society has been willing to take advantage of that history of discrimination and pay [these workers] low wages. Now, we might say, why do that? Why not let the wages reflect our value of how important it is to care for these parts of our society? [We can] tax the very top, and use that [tax revenue] to create new jobs that are decently paid, [which would create] a very different outcome [for the economy]. I think, optimistically, this new era could create shared prosperity. There would still be some inequality, but not the nightmare scenario of the new serfdom that I talked about before.

Anton Korinek [42:49]

Let’s turn to economic policy. You have already foreshadowed a number of interesting points on this theme. But let’s talk about economic policy to combat inequality more generally. People often refer to redistribution and pre-distribution as the main categories of economic policy to combat inequality. Can you explain what these two [policy categories] mean? What are the main instruments of redistribution and of pre-distribution? And how do [these policies] relate to our discussion on inequality?

Joseph Stiglitz [43:37]

Pre-distribution [looks at] the factors that determine the distribution of market income. If we create a more equal distribution of market income, then we have less burden on redistribution to create a fair society. There are two factors that go into the market distribution of income. [The first factor] is the distribution, or the ownership, of assets. [The second factor] is how much you pay each of those assets. For instance, if you have a lot of market power, and weak labour power, you [end] up with capital getting a high return relative to workers and [high] monopoly profits relative to workers’ [incomes] — that’s an example of the exercise of market power leading to greater inequality. The progressive agenda in the United States emphasises increasing the power of unions and curbing the power of big tech giants to create factor prices that are conducive to more market equality.

We can [also consider] the ownership of two types of assets: human capital and financial capital. The general issue here is: how do we prevent the intergenerational transmission of advantage and disadvantage? Throughout the ages, there have always been parents who want to help their children, which is not an issue. [Rather, the issue is] the magnitude of that [helping]. In the United States, for instance, we have an education system which is locally-based. We have more and more economic segregation, which means that rich people live with rich [people] and poor [people] with poor [people.] If schools in [rich] neighbourhoods give kids a really good education and conversely [in poor neighbourhoods, then even] public education perpetuates inequality.

The most important provisions in the intergenerational transmission of financial wealth are inheritance tax and capital taxation. Under Trump, [Congress] eviscerated the inheritance taxes. So the question is how to [reinstate these taxes] to create a more equal market distribution, called pre-distribution.

Anton Korinek [47:34]

[You began to address] taxation in the context of estate taxation. For the non-economists in the room, I should emphasise that among the many contributions that Joe has made to economics is a 1976 textbook with Tony Atkinson that is frequently referred to as the “Bible of Public Finance” which lays out the basic theory of taxation and still underlies basically all theoretical economic work on taxes.

In recent decades, the main focus of this debate has been on taxing labour versus capital. A lot of economists argue that we should not tax capital, because it’s self-defeating: [taxation of capital] will just discourage the accumulation of capital and ultimately hurt workers. My question to you is: do you agree? [If not,] what is wrong with this standard argument?

Joseph Stiglitz [48:43]

It is an argument that one has to take seriously: that taxes on capital could lead to less capital accumulation, which would in turn lead to lower wages, and even if the proceeds of the tax were redistributed to workers, workers could be worse off. You can write down theoretical models in which that happens. The problem is that this is not the world we live in. In fact, [there are] other instruments at [our] disposal. For instance, as the government taxes [private] capital, [the government] can invest in public capital, education, and infrastructure. [These investments lead to an increase in] wages. Workers can be doubly benefited: not only [do workers benefit] from direct distribution, but [they also benefit from a greater] equality of market income caused by capital allocation to education and infrastructure.

Many earlier theories were predicated on the assumption that we were able to tax away all rents and all pure profit. We know that’s not true: the corporate profit tax rate is now 21% in the United States, and the amount of wealth that the people at the top are accumulating [provides evidence that] we are not taxing away all pure profits. Taxing away [these pure profits] would not lead to less capital accumulation, [but instead] could lead to more capital accumulation.

[Let’s] look broadly at the nature of capitalism in the late 20th and early 21st century. We used to talk about the financial sector intermediating, which meant [connecting] households and firms by bringing [households’] savings into corporations. [This process] helped savings and helped capital accumulation. [However,] evidence is that over the last 30 or 40 years, the financial sector has been disintermediating. The financial sector, [rather than] investing monopoly profits, has been redistributing [these profits] to the very wealthy, [to facilitate] the wealthy’s consumption or increase the value of their assets, [including their international assets], and their land. [Ultimately], this simple model [of financial intermediation] doesn’t describe [late] 20th and [early] 21st century capitalism.

Anton Korinek [52:17]

Should we think of AI as [the same kind of] capital described in theories of capital taxation in economics, or is AI somehow inherently different? Should we impose what Bill Gates calls a “robot tax” [on AI]?

Joseph Stiglitz [52:36]

That’s a really good question. [If we had had more time, I would] have distinguished between intangible capital, called R&D, and [tangible capital, like] buildings and equipment. 21st century capital is mostly intangible capital, which is the result of investment in R&D. [Intangible capital] is more productive in many ways than buildings, and so in that sense, it is real capital, and is [well-described by the word] “intangible.” [Intangible capital is also the] result of investment: people make decisions to hire workers to think about [certain] issues, or individuals decide themselves to think about these issues, when [employers or individuals otherwise] could have done something else. [In this way, intangible capital] is capital: it requires resources, which could have been put to other uses, [and these alternative uses are foregone] for future-oriented returns.

The question is: is this [intangible] capital getting excess returns? Are there social consequences of those investments, that [the investors] don’t take into account? We call [these social consequences] externalities. People who invest in coal-fired power plants may make a lot of money, but [their investment] destroys the planet. If we don’t tax carbon, then society — rather than the investor — bears these costs. Gates’s robot tax is based on the same [concept]. If we replace workers, and [these workers] go on the unemployment roll, then we as a society bear the cost of [these workers’] unemployment. [Gates argues that] we ought to think about those costs, [though] how we balance the tax and appropriate its excess returns is another matter. Clearly, [the robot tax] is an example of steering innovation. You and I, [in our research,] have [also argued that we must] steer innovation to save the planet [rather than] create more unemployment.

Anton Korinek [55:32]

How would you recommend that we should reform our present system of taxation to be ready for not only [our present time in the] 21st century but also for a future in which human labour plays less of a role? How should we tax to make sure that we can still support an equitable society?

Joseph Stiglitz [56:02]

Let me first emphasise that not just taxation, but also investment, is important. [Much of the economy’s direction is determined by] the basic research decisions of the National Science Foundation and science foundations in other countries. [These decisions inform which] technologies are accessible to those in the private sector. Monetary policy [is also important]. We don’t think the central bank [affects] innovation, but it actually does. [At a] zero interest rate, the cost of capital is going to be low relative to the cost of labour, [which will] encourage investors to think about saving labour rather than saving capital. So monetary policy is partly to blame for distortions in the direction of innovation. The most important thing is to be sensitive to how every aspect of policy, including tax policy, shapes our innovative efforts and [directs where we] devote our research. Are we devoting our research to saving unskilled labour or to augmenting the power of labour? We talked before about intelligence-assisting innovations like microscopes and telescopes which make us more productive as human beings. We can replace labour, or we can make labour more productive. [While this distinction can be] hard to specify, it’s very clear that we have tools to think about these various forms of innovation.

Anton Korinek [58:16]

On the expenditure side, one policy solution that a lot of technologists are big fans of is a universal basic income. What is your perspective on a UBI: do you advocate it or do you believe there are other types of expenditure policy that are more desirable? Do you think [UBI] may be a good solution if we arrive at a far-future – or perhaps near-future – [scenario] in which labour is displaced?

Joseph Stiglitz [58:53]

I am quite against the UBI [being implemented] in the next 30 or 40 years. The reason is very simple: for the next 30 years, the major challenge of our society is the Green Transition, which will take a lot of resources and a lot of labour. Some people ask if we can afford it, and [I argue that] if we redirect our resources, labour, and capital [toward the Green Transition] then we can afford it. Ben Bernanke [describes] a surplus of capital and a savings glut. However, if [we look] at the challenges facing the world, [we understand Bernanke’s assertion] is nonsense. Our financial system isn’t [developing] the [solutions] our society needs [like] the Green Transition.

I also see deficiencies in infrastructure and in education in so many parts of the world. I see a huge need for investments over the next 30 to 40 years such that everybody who wants a job will be fully employed. It is our responsibility [to ensure] that everybody who wants a job should be able to get one. We must have policies to make sure that [workers] are decently paid. This should be our objective now.

[If] in the far-future [we don’t need] labour, we have the infrastructure that we need, we’ve made the Green Transition, and we have wonderful robots that produce other robots and all of the goods, food, and services that we need, then we will have to consider the UBI. We would [then] be engaged in a discussion of what makes life meaningful. While work has been part of that story of meaningfulness, there are ways of serving other people that don’t have to be monetised and can be very meaningful. While I’m willing to speculate about [this scenario,] it’s a long way off, and [is] well after my time here on this earth.

Anton Korinek [1:01:46]

Would you be willing to revise your timelines if progress in AI occurs faster than what we are currently anticipating?

Joseph Stiglitz [1:01:59]

I cannot see [a scenario where we] have excess labour and capital [over] the next 30 or 40 years, even if [AI] proceeds very rapidly, given the needs that we have in public investment and the Green Transition. We could have miracles, but I think if that happens, we could face that emergency of this unintended manna from heaven and we would step up to that emergency.

Anton Korinek [1:02:51]

We are already [nearing] the end of our time. Let me ask you one more question, and then I would like to bring in a few questions posed by the audience. My question is: what are the other dimensions of AI that matter for inequality, independent of purely economic [considerations]? What is your perspective [on these dimensions of inequality] and how we can combat them?

We’ve talked about meaning in life and meaningful work. If AI takes away work, we will have to find meaning in other places. In the shorter term, AI will take away routine jobs, which will mean that we as a society will be able to devote more labour to non-routine jobs. This should open up possibilities [for people to be] more creative. Many people [have] thought the flourishing of our society is based on creativity. It would be great for our society if we could devote more of our talents to doing non-routine, creative things.

The audience had a question about workplace surveillance, which is one element of [AI] that could potentially greatly reduce the well-being of workers. What are your thoughts on [workplace surveillance]?

Joseph Stiglitz [1:05:06]

I agree [that AI could reduce the well-being of workers]. There are many [adverse effects of AI] we haven’t talked about. We are in an early stage [of AI policy], and our inadequate regulation allows for a whole set of societal harms from AI. Surveillance is one [example of these harms]. Economists talk about corporations’ ability to acquire information in order to appropriate consumer surplus for themselves, or in other words, to engage in discriminatory pricing. Anybody who wants to buy an airline ticket knows what I’m talking about: firms are able to judge whether you really want to [fly] or not. Companies are using AI now to charge different prices for different people by judging how much [each consumer] wants a good. The basis for market efficiency is that everybody faces the same price. In a new world, where Amazon — or the internet — uses AI, everybody [faces] a different price. This discrimination is very invidious: it has a racial, gender, and vocational component.

Information targeting has other adverse [implications], like manipulation. [AI] can sense if somebody has a predilection to be a gambler and can encourage those worst attributes by getting [the person] to gamble. [AI] can target misinformation at somebody who is more likely to be anti-vax and give [them] the information to reinforce that [belief]. [AI] has already been used for political manipulation, and political manipulation is really important because [it impacts] institutions. The institutions — the rules of the game — are set by a political process, so if you can manipulate that political process, you can manipulate our whole economic system. In the absence of guardrails, good rules, and regulations, AI can be extraordinarily dangerous for our society.

Anton Korinek [1:08:25]

That relates closely to another question from the audience: do you think there is a self-correcting force within democracy against high inequality and in particular against the inequality that AI may lead to?

Joseph Stiglitz [1:08:47]

I wish I felt convinced that there were a self-correcting force. [Instead], I see a force that [works] in the [opposite] direction. This [perception] may be [informed] by my experience as an American: [in the US], a high level of inequality [causes] distortions and [gives] money a role in the political system. This has changed the rules in the political and economic system. Money’s [increasing] power in both the political system and the economic system has reinforced the creation of that kind of plutocracy that I talked about [earlier].

[The changes] we’ve seen in the last few years in the United States are shocking, but in some ways are what I predicted in my book The Price of Inequality. The Republican Party has openly said, “We don’t believe in democracy. We want to suppress voters and their right to vote. [We want to] make it more difficult for them to vote.” [They’ve said this without] any evidence of voter fraud. It’s almost blatant voter suppression. In some sense, this [scenario] is what Nancy MacLean [described] in her book Democracy in Chains, though it has come faster [than she predicted].

I’ve become concerned that what many had hoped would be a self-correcting mechanism isn’t working. We hope we are at a moment when we can turn back the tide. As more and more Americans see the extremes of inequality, they will turn to vote before it’s too late, before they lose the right to vote. This will be a watershed moment in which we will go in a different direction. I feel we’re at the precipice, and while I’m willing to bet that we’re going to go the right way, I would give [this path] just over 50% odds.

Anton Korinek [1:11:37]

I think, fortunately, Joe and all his work on the topic are part of the self-correcting force.

The top question in terms of Q&A box votes is whether AI will be a driver for long run convergence or divergence in global inequalities. Do you believe that current laggards, or poor countries, will be able to catch up with the front runners more easily or less easily [because of AI]?

Joseph Stiglitz [1:12:12]

I’m afraid that we may be at the end of the era of convergence that we saw over the last 50 years. There was widespread convergence in China and India, and though some countries in Africa did not converge, we broadly saw a convergence [occurring]. I think [that now] there is a great risk of divergence: AI is going to decrease the value of unskilled labour and many natural resources, which are the main assets of poor countries. There will be [complexity]: oil countries will find that oil is not worth as much if we make the Green Transition. A few countries like Bolivia, that have large deposits of lithium, are going to be better off, but that will be more the exception than the rule. Access to [AI] technology may be more restricted. A larger fraction of the research is [occurring] inside corporations. The model of innovation [used to be] that universities were at the centre, and [innovators received a] patent with a disclosure, which means that the information was public and others built on that [information]. However, AI [innovation] so far has been within companies that have better hoarded information. [Companies can’t protect all information]: one non-obvious [path forward] is that [members of the public] could still access the underlying mathematical theorems that are in the public domain. While that’s an open possibility, I [still] worry that we will be seeing an era of divergence.

Anton Korinek [1:14:41]

Thank you so much, Joe for sharing your thoughts on AI and inequality with us. We are almost at time for our event. I am wondering if I may ask you a parting question that comes in two parts. What would be your message to, on the one hand, young AI engineers and, on the other hand, young social scientists and economists, who are beginning their careers and who are interested in contributing to make the world a better and more equitable place?

Joseph Stiglitz [1:15:30]

Engineers are working for companies, and a company consists of people. Talented people are the most important factors of production in these companies. In the end, the voice of these workers is very important. We [must] conduct ourselves in ways that mitigate the extent to which we contribute to increases in inequality. There are many people, understandably, within Facebook and other tech giants that are using all their talents to increase the profits of, say, Facebook, regardless of the social consequences and regardless of whether it results in a genocide in Myanmar. These things do not just happen, but rather are a result of the decisions that people make.

To give another example, I often go to conferences out in Silicon Valley. When we discuss these issues, they say, “there is no way we can determine if our algorithms engage in discrimination.” [However], the evidence overwhelmingly is that we can. While the algorithms are always changing, taking in new information, and evolving, at any moment in time we can assess precisely whether [algorithms] are engaging in discrimination. Now, there are groups that are trying — at great cost — to see who is getting [certain] ads. You can create sampling spaces to see how [ads] are working.

I think it is nihilistic to say [that gauging discrimination is] beyond our ability and that we have created a monster out of our control. These companies’ workers need to take a sense of responsibility, because the companies’ actions are a consequence of their workers’ actions. When working for these companies, one has to take a moral position and a responsibility for what the companies do. One can’t just say, “Oh, that’s other people that are doing this.” One has to take some responsibility.

For social scientists, I think this is a very exciting time because AI and new technologies are changing our society. They may even be changing who we are as individuals. There is a lot of discussion about what [new technologies] are doing to attention span and how we spend our time. [These technologies] have profound effects on the way that individuals interact with each other.

Of course, social science is about society and how we interact with each other. [It is about] how we act as individuals. [It is about] market power [and] how we curb that market power. The basic business model of many tech giants [relies on] information about individuals. Policy [determines] what we allow those corporations to do with our [personal] information and whether [these corporations] can store [our information] and use it for other purposes. It is clear that AI has opened up a whole new set of policy issues that we had not even begun to think about 20 years ago. My Nobel Prize was in the economics of information, but when I did my work, I had not thought about the issue of disinformation and misinformation. [At the time], we thought we had laws dealing with [misinformation], which [are called] fraud laws and libel laws. We put [misinformation] aside because we thought it was not a problem. Today, [misinformation] is a problem. I mention that because we are going to have to deal with a whole new set of problems that AI is presenting to our society.

Anton Korinek [1:21:46]

Thank you, Joe. Thank you for this really inspiring call to action. Let me invite everybody to give a round of virtual applause. Have a good rest of the day.



Source link

21May

Margaret Roberts & Jeffrey Ding on Censorship’s Implications for Artificial Intelligence


Molly Roberts is an Associate Professor in the Department of Political Science and the Halıcıoğlu Data Science Institute at the University of California, San Diego. She co-directs the China Data Lab at the 21st Century China Center. She is also part of the Omni-Methods Group. Her research interests lie in the intersection of political methodology and the politics of information, with a specific focus on methods of automated content analysis and the politics of censorship and propaganda in China.

Jeffrey Ding is the China lead for the AI Governance Research Group. Jeff researches China’s development of AI at the Future of Humanity Institute, University of Oxford. His work has been cited in the Washington Post, South China Morning Post, MIT Technology Review, Bloomberg News, Quartz, and other outlets. A fluent Mandarin speaker, he has worked at the U.S. Department of State and the Hong Kong Legislative Council. He is also reading for a D.Phil. in International Relations as a Rhodes Scholar at the University of Oxford.

You can watch a recording of the event here or read the transcript below:

Allan Dafoe  00:00

Welcome, I’m Allan Dafoe, the director of the Center for the Governance of AI, which is organizing this series. We are based at the Future of Humanity Institute at the University of Oxford. For those of you who don’t know about our work, we study the opportunities and challenges brought by advances in AI, so as to advise policy to maximize the benefits and minimize the risks from advanced AI. It’s worth clarifying that governance, this key term, refers descriptively to the ways that decisions are made about the development and deployment of AI, but also to the normative aspiration: that those decisions emerge from institutions that are effective, equitable, and legitimate. If you want to learn more about our work, you can go to governance.ai. I’m pleased today to welcome our speaker Molly Roberts, and our discussant Jeffrey Ding. Molly is Associate Professor of Political Science at the University of California, San Diego. She’s a scholar of political methodology, the politics of information, and specifically the politics of censorship and propaganda in China. She has produced a number of fascinating papers, including some employing truly innovative experimental design, probing the logic of Chinese web censorship. Molly will present today some of her work co-authored with Eddie Yang, on the relationship between AI and Chinese censorship. I was delighted to learn that Molly was turning her research attention to some issues in AI politics. After Molly’s presentation we will be joined by Jeffrey Ding in the role of discussant. Jeff is a researcher at FHI, an Oxford DPhil student, and a pre-doctoral fellow at CSAC at Stanford. I’ve worked with Jeffrey for the past three years now, and during that time, have seen him really flourish into one of the premier scholars on China’s AI ecosystem and politics. So now Molly, the floor is yours.

Molly Roberts  01:57

Thanks, Allan. And thanks so much for having me. I’m really excited to hear Jeffrey’s thoughts on this, since I’m a follower of his newsletter and also of his work on AI in China. So this is a new project to try to understand the relationship between censorship and artificial intelligence, and I see it as the beginning of a larger body of work on that relationship. So I’m really looking forward to this discussion. This is joint work with Eddie Yang, who’s also at UC San Diego. So you might have heard, probably on this webinar series, that a lot of people think data is the new oil: data is the input to a lot of products. It can be used to make financial predictions that can then be used to trade stocks or to forecast investments. And at the same time that data might be the new oil, we also worry a little bit about the quality of this data. How good is this data? How good is the data that’s fed into the products and applications that we’re now using so heavily in our AI world? We know there’s a really interesting new literature in AI about politics and bias within artificial intelligence. The idea behind this is that the huge datasets that power AI applications are affected by human biases, which are then encoded in the training data, which then impacts the algorithms used within user-facing interfaces or products, which in turn replicate or enhance that bias. So there’s been a lot of great work looking at how racial and gender biases can be encoded within these training datasets that are then put into these algorithms and user-facing platforms. For example (I’m sorry the citations didn’t render here), Latanya Sweeney has some great work on ad delivery, and there’s been great work on speech recognition, word embeddings, and image labeling.

Sweeney, Latanya. “Discrimination in Online Ad Delivery.” Communications of the ACM 56, no. 5 (2013): 44-54.

Koenecke, Allison, Andrew Nam, Emily Lake, Joe Nudell, Minnie Quartey, Zion Mengesha, Connor Toups, John R. Rickford, Dan Jurafsky, and Sharad Goel. “Racial Disparities in Automated Speech Recognition.” Proceedings of the National Academy of Sciences 117, no. 14 (2020): 7684-7689.

Davidson, Thomas, Debasmita Bhattacharya, and Ingmar Weber. “Racial Bias in Hate Speech and Abusive Language Detection Datasets.” Proceedings of the Third Workshop on Abusive Language Online, 2019.

Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. “Semantics Derived Automatically from Language Corpora Contain Human-like Biases.” Science 356, no. 6334 (2017): 183-186.

Zhao, Jieyu, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. “Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-level Constraints.” Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017.

Li, Shen, Zhe Zhao, Renfen Hu, Wensi Li, Tao Liu, and Xiaoyong Du. “Analogical Reasoning on Chinese Morphological and Semantic Relations.” Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2018: 138-143.

So in this talk, we’re going to explore another institution that impacts AI, which is censorship. Censorship impacts the training data, which then impacts the NLP models and applications that are used. So instead of looking at institutional or human biases that might impact training data, here we’re going to look at how government censorship policies impact training data, and then how this might have a downstream impact on applications. We know that large user-generated datasets are the building blocks for AI. This could be anything from Wikipedia corpuses to social media datasets to government-curated data: more and more data is being put online, and this is being used in downstream AI applications. But we also know that governments around the world influence these datasets, and have political incentives to influence these datasets, which are then used downstream. They can influence these datasets through fear – through threats or laws that create self-censorship, so that people won’t put things on social media, or so that their activities are not reflected in government-curated data. They can influence these datasets through what I call friction, which is deletion or blocking of social media posts, or preventing certain types of posts on Wikipedia, or preventing some sort of data from being uploaded to a government website, for example. And they can also influence these datasets through flooding, or coordinated addition of information – so we think about coordinated internet armies or other types of government-organized groups trying to put information on Wikipedia or on social media, to influence the information environment.

Molly Roberts  06:05

So this data is then used in other user-facing applications. Increasingly, AI is taking data available on the internet – through Common Crawl, through Wikipedia, through social media – and using it as a base for algorithms in entertainment applications, productivity applications, algorithmic governance, and a lot of different downstream applications. So our question is: how does censorship, how does this government influence on these datasets, then affect the politics of downstream applications? And it could be that even if some of these applications are not in themselves political, because of this political censorship they could have political implications. Deciding which corpus to use, for example, could have political implications for downstream applications. So this paper looks particularly at censorship of online encyclopedia corpuses. We study censorship of Chinese online encyclopedias, and we look at how these different online encyclopedias have different implications for Chinese language NLP (natural language processing). And I’m sorry that my citations aren’t working, but we use word embeddings trained on two Chinese online encyclopedia corpuses. These were trained by Li et al., in the same way, on the Baidu Baike encyclopedia corpus and on Chinese language Wikipedia. So we look at Chinese language Wikipedia, which is blocked within China but uncensored, and Baidu Baike, which is not blocked within China but has pre-publication censorship restrictions on it. We look at how using each of these different corpuses, which have different censorship controls, can have different implications for downstream applications. We measure political word associations between these two corpuses, and we find that word embeddings, which I’ll go over in a second, trained on Baidu Baike associate more negative adjectives with democracy in comparison to Chinese language Wikipedia, and have more positive associations with the CCP and other types of social control words. And we find with a survey that Baidu Baike word embeddings are not actually more reflective of the views of people within China, and therefore we don’t think that this is coming simply from people’s contributions to these encyclopedias; we think it is coming from censorship. We also identify a tangible effect of the decision to use word embeddings pre-trained on Baidu Baike versus word embeddings pre-trained on Chinese language Wikipedia in downstream NLP applications. And we’ll talk a little bit at the end about what strategic implications this might have for politics and AI.

So, pre-trained word embeddings: some of you may be familiar with what these are, but just by way of introduction in case you’re not. Natural language processing algorithms, which are used on text, rely on numerical representations of text. We have to figure out how to represent text numerically in order to use it in downstream applications. So anything that is doing AI on social media data, Wikipedia data, encyclopedias, or predictive text is relying on a numerical representation of that text. One way that is common within the social sciences is to simply give each word a number, essentially, and ask: is this word included within this document or not? This is called the bag-of-words representation: a one or a zero for whether or not a particular word appears within a text.
But another way to represent text, which has become very, very popular in computer science and increasingly also in the social sciences, is to use word embeddings. The idea behind this is that word embeddings estimate a K-dimensional vector – sometimes a 200- or 300-length vector – for each word within a huge dictionary of words. This vector encodes the similarity between words: words that are likely to be used as substitutes for each other, words that are often used in the same context, will sit in more similar areas of this K-dimensional space than other words. So using pre-trained word embeddings, which already have a K-dimensional vector trained on a large corpus, allows an NLP application to know, at the start, how words might be similar to each other. Often these word embeddings are pre-trained on very large corpuses and then used as inputs in smaller NLP tasks. So I already know that two words are more similar to each other than to another word, even before starting to train on my own data.
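As a rough illustration of the difference between these two representations, here is a minimal sketch in Python of the core operation behind word embeddings: measuring similarity between words as the cosine of the angle between their vectors. The three-dimensional toy vectors are invented for illustration only; real pre-trained embeddings are typically 200- to 300-dimensional.

import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two word vectors; values near 1 mean the
    # words tend to appear in very similar contexts.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 3-dimensional vectors, invented for illustration only.
vectors = {
    "talk":    np.array([0.9, 0.1, 0.0]),
    "lecture": np.array([0.8, 0.2, 0.1]),
    "banana":  np.array([0.0, 0.1, 0.9]),
}

print(cosine_similarity(vectors["talk"], vectors["lecture"]))  # high: near-synonyms
print(cosine_similarity(vectors["talk"], vectors["banana"]))   # low: unrelated words

A bag-of-words representation, by contrast, would treat “talk” and “lecture” as two entirely unrelated columns in a document-term matrix.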

Molly Roberts  11:13

So often, pre-trained word embeddings are made available by companies and by academics. This is just one example, a screenshot from fastText: Facebook makes available a lot of different pre-trained word vectors – these ones are trained on Common Crawl and Wikipedia, so really large corpuses – in 157 languages, and you can go download them and use them as inputs into your NLP model. So here’s just an example, sort of to fix ideas of what word embeddings are doing. Say that I have two documents. This is from an IBM Research blog. They have two documents: document x on the left here says “I gave a research talk in Boston,” and document y on the right says “this is a data science lecture in Seattle.” These actually don’t share any words. But if you have word embeddings as a representation of the text – so this is a very simple two-dimensional word embedding, but imagine a 300-dimensional space, right? – you would know that actually these two documents are quite similar in content to each other, because Seattle and Boston are often used in the same contexts as each other, research and science are similar, and talk and lecture are similar. So the places that these words sit within the space would be pre-trained on a large corpus, and then you could use this word embedding as an input, which would give you more information about those documents. So here we come to censorship of training data. Pre-trained word embeddings are often trained on very large datasets like Wikipedia: because they’re user generated, they cover lots and lots of different topics, so we think that they’re representative of how people talk about many different things. In China, or in the Chinese language, however, this is complicated by the fact that the Chinese government has blocked Chinese language Wikipedia, and therefore there’s also been the development of another online encyclopedia, Baidu Baike, which is unblocked within China but is censored. So both of these are online encyclopedias in China, they’re both commonly used as training data for NLP, and if you look at the CS literature, you’ll see both of them used as training data. Chinese language Wikipedia is uncensored, as I said, but it is blocked; and Baidu Baike is censored, in that there are a lot of regulations on what can be written on Baidu Baike, but it is unblocked, in that it is available within mainland China. So, for example, if you want to create an entry on Baidu Baike about the June 4th movement, it automatically tells you that you cannot create this post. Also, there are regulations that political topics have to follow Chinese government official news sources, so there’s a lot of pre-censorship of these entries, unlike Chinese language Wikipedia, where you can contribute without pre-censorship. There’s been some great work by Zhang and Zhu (2011, American Economic Review) showing that censorship of Wikipedia reduced contributions to it: when Chinese language Wikipedia was blocked, there were many, many fewer contributions, because there was a decrease in the audience of the site. And at least in part because of this, Baidu Baike is many, many times larger than Chinese language Wikipedia, with 16 times more pages, and therefore it’s increasingly an attractive source of training data for Chinese language NLP.
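For readers who want to see what “using pre-trained embeddings as an input” looks like in practice, here is a minimal Python sketch using gensim to load fastText-style vectors and average them into document representations. The file name is illustrative (fastText publishes .vec text files such as cc.zh.300.vec for Chinese), Chinese text would first need to be segmented into words, and averaging word vectors is just one simple way to turn them into document features.

import numpy as np
from gensim.models import KeyedVectors

# Load pre-trained vectors in the standard word2vec text format.
# 'limit' keeps only the most frequent words to save memory.
wv = KeyedVectors.load_word2vec_format("cc.zh.300.vec", binary=False, limit=200000)

def doc_vector(tokens):
    # Represent a document as the average of its word vectors, so two
    # documents that share no words can still be recognized as similar.
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)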

Molly Roberts  14:49

So what we do in this paper is compare where word vectors sit in embeddings pre-trained on Chinese language Wikipedia versus on Baidu Baike. I’ll give you a really simple example of what this might look like. Say we have word embedding A – say this is Baidu Baike – with a few different word vectors trained on that corpus, and word embedding B – say this is Chinese language Wikipedia. What we’re interested in is where some target words, for example words like democracy or other types of political words, sit in relation to positive and negative adjectives. So in this case: is democracy closer in word-embedding space to stability, or is it closer to chaos? And we can compare where democracy sits between these positive and negative adjectives in Chinese language Wikipedia versus Baidu Baike. So what we do is come up with groups of target words, and we have many different categories of target words; each category has about 100 different words associated with it. We use democratic concepts and ideas – categories such as democracy, freedom, and election – and each of these categories has about 100 words that are roughly synonymous with it. And then we also use known targets of propaganda – for example social control, surveillance, collective action, political figures, the CCP, or other historical events – and we also find lots of words associated with each of these categories. And then we look at how they are related to attribute words: adjectives, for example, or evaluative words, which are these words in blue. We use a list of propaganda attribute words – words that we know from reading and from studies of propaganda are often associated with these concepts – and we also use general evaluative words from the big evaluative adjective word lists in Chinese that are often used in Chinese language NLP. So what we do is take each target word vector – this is x_i, where x_i is the vector of the target word from either Baidu or Wikipedia. Then we take the attribute word vectors, where A is the set of positive attribute word vectors in either Baidu or Wikipedia, and B is the set of negative attribute word vectors in either Baidu or Wikipedia. Then, for each embedding – for Baidu and for Wikipedia – we examine the mean cosine similarity between the target word and the positive attribute words, minus the mean cosine similarity between the target word and the negative attribute words: if this is very negative, the target category is more associated with negative words in that embedding; if it is more positive, it is more associated with positive words. And then we take the difference between these differences, averaged across all of the target words within a category, to get how much closer positive words are overall to the target category in Baidu compared to Chinese language Wikipedia: if that difference is negative, it means Baidu associates the category more negatively than Chinese language Wikipedia does. To assess statistical significance, we do a permutation test where we permute the assignment of the word vectors to the two embeddings and then see how extreme our result is in comparison to that permutation distribution.
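As a sketch of the association measure Roberts describes here – not the paper’s exact code, and with the permutation test only gestured at – the calculation could look something like this in Python, where baidu_wv and wiki_wv are the two sets of pre-trained vectors and pos_words / neg_words are the attribute lists:

import numpy as np

def cos(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(target_vec, pos_vecs, neg_vecs):
    # Mean similarity to positive attribute words minus mean similarity
    # to negative attribute words, within a single embedding.
    return (np.mean([cos(target_vec, a) for a in pos_vecs])
            - np.mean([cos(target_vec, b) for b in neg_vecs]))

def category_bias(targets, baidu_wv, wiki_wv, pos_words, neg_words):
    # Difference-in-differences, averaged over all target words in a category.
    # Negative values mean the category sits closer to the negative attribute
    # words in the Baidu Baike embedding than in the Chinese Wikipedia embedding.
    diffs = []
    for w in targets:
        if w not in baidu_wv or w not in wiki_wv:
            continue
        d_baidu = association(baidu_wv[w],
                              [baidu_wv[a] for a in pos_words if a in baidu_wv],
                              [baidu_wv[b] for b in neg_words if b in baidu_wv])
        d_wiki = association(wiki_wv[w],
                             [wiki_wv[a] for a in pos_words if a in wiki_wv],
                             [wiki_wv[b] for b in neg_words if b in wiki_wv])
        diffs.append(d_baidu - d_wiki)
    return np.mean(diffs)

A p-value can then be approximated by recomputing the statistic many times with the vectors randomly reassigned between the two embeddings and asking how extreme the observed value is relative to that distribution.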
So the theoretical expectation is that overall, freedom, democracy, election, collective action, and negative figures will all be more associated with negative attribute words in Baidu Baike in comparison to Chinese language Wikipedia. On the other hand, categories like social control, surveillance, CCP, historical events, and positive figures should be more positively associated. And this is exactly what we find. Here is the effect size for propaganda attribute words for each of these categories, and here is the effect size for evaluative words for each of these categories, along with the p-value for statistical significance. We find that overall, target words in the categories of freedom, democracy, election, collective action, and negative figures are more associated with negative attribute words in Baidu Baike than they are in Chinese language Wikipedia, and the opposite for categories such as social control, surveillance, CCP, etc.

Molly Roberts  19:31

So one possibility you might think of is that perhaps mainland Chinese internet users simply view these target categories differently than the overseas internet users contributing to Chinese language Wikipedia, and this difference in word associations between the two sets of internet users is what creates the difference between the online encyclopedias. To get at this, we did an online survey with about 1,000 respondents in mainland China, and we asked people, between the following options, which they thought better describes a particular target word. We took the closest attribute word from Baidu Baike in the word-embedding space and the closest attribute word from Wikipedia, and we asked people to evaluate them. And what we found is that overall, neither Baidu Baike nor Chinese language Wikipedia seemed to better reflect the associations of our survey respondents. So – this, on the x-axis, is the likelihood of choosing the Baidu word – for some categories and some lists of attribute words Chinese language Wikipedia was preferred, and for some categories and some lists of attribute words Baidu Baike was preferred. We didn’t see one of these dominate the other in terms of users’ evaluations, so that sort of rejected the idea that Baidu Baike is just better reflecting people’s word associations. So the third thing that we did was evaluate the downstream effect of these word embeddings on a machine learning task. The task that we set out to do is to classify news headlines according to sentiment, and we use a big general Chinese news headlines dataset as our training data. So say you wanted to create a general sentiment classifier for news headlines – say, to build a recommendation system, or to do content moderation on a social media website – you might use a general Chinese news headline dataset as training data. And then we’re going to look at how the algorithm that was trained performs on news headlines that contain these target words: words related to democracy and election and freedom and social control and surveillance, and words related to historical events or figures that might be of interest to the CCP. We use three different models: Naive Bayes, SVM, and a neural network. And we look at how using the same training data and the same models, but simply with different pre-trained word embeddings – one that comes from Baidu Baike, one that comes from Chinese language Wikipedia – can influence the systematic classification error of this downstream task. So, overall, do models trained with pre-trained Baidu Baike word embeddings give a slightly more negative classification to headlines that contain democracy than models trained with pre-trained Chinese language Wikipedia word embeddings?
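Here is a minimal sketch, in Python with scikit-learn, of the kind of comparison Roberts describes: the same labeled headlines and the same model class, with only the pre-trained embeddings swapped out. The featurization (averaging word vectors) and all variable names are illustrative assumptions rather than the paper’s actual pipeline.

import numpy as np
from sklearn.svm import LinearSVC

def featurize(headlines, wv):
    # Turn each tokenized headline into the average of its word vectors,
    # where 'wv' is a pre-trained embedding lookup (e.g. gensim KeyedVectors).
    feats = []
    for tokens in headlines:
        vecs = [wv[t] for t in tokens if t in wv]
        feats.append(np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size))
    return np.array(feats)

def train_and_predict(train_x, train_y, test_x, wv):
    # Identical training data and model class; only the embedding 'wv' changes.
    clf = LinearSVC()
    clf.fit(featurize(train_x, wv), train_y)
    return clf.predict(featurize(test_x, wv))

# preds_baidu = train_and_predict(headlines, labels, democracy_headlines, baidu_wv)
# preds_wiki  = train_and_predict(headlines, labels, democracy_headlines, wiki_wv)
# Systematic differences between preds_baidu and preds_wiki on headlines that
# contain target words isolate the effect of the embedding choice.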

Molly Roberts  22:57

So, for example, here is a headline: “Tsai Ing-wen: Hope Hong Kong can enjoy democracy as Taiwan does.” The Wikipedia label here comes out as positive when we train this, but the Baidu Baike label – same classifier, same training data, just different word embeddings – comes out as negative. Or “Who’s shamed by democratization in the kingdom of Bhutan”: the Baidu Baike label here comes out as negative and the Wikipedia label comes out as positive, even though the human label here is negative. So what are the systematic mistakes that these classifiers are making? Overall, we see that these classifiers actually have very similar accuracy. It’s not that models trained on Baidu Baike word embeddings have a higher accuracy than models trained on Chinese language Wikipedia word embeddings; the accuracy is quite similar for each of these different word embeddings. But we see big effects on the classification error in each of these different categories. So L_ij is the human-labeled score for a news headline containing target word i in category j: negative one if it has negative sentiment and positive one if it has positive sentiment. We get the predicted scores from our models trained on Baidu Baike word embeddings versus Wikipedia word embeddings, and we create a dependent variable that is the difference between the Baidu prediction’s deviation from the human label and the Wikipedia prediction’s deviation from the human label. Then we can estimate how the difference between the human label and the predicted label changes by category for the Baidu classifier versus the Wikipedia classifier. Our coefficient of interest here is beta_j: are there systematic differences in the direction of classification for a certain category between the algorithm trained with Baidu word embeddings and the one trained with Wikipedia word embeddings? And what we find is that there are quite systematic differences, across all of the different machine learning models, in the direction that we would expect. The classifiers trained with pre-trained Baidu Baike word embeddings are overall much more likely to categorize headlines containing target words in the categories of freedom, democracy, election, collective action, and negative figures as negative than headlines containing words from social control, surveillance, CCP, historical events, and positive figures.

So, just to think a little bit about the potential implications of this: the strategic incentives. What I hope I’ve convinced you of so far is that censorship of training data can have a downstream impact on NLP applications. And if that’s true, one thing we might think about is whether there are strategic incentives to manipulate training data. We do know that there are lots of government-funded AI projects to create and gather more training data that can then be used in AI, in order to push AI along. Might there be strategic incentives to influence the politics of this training data? And how could this play out downstream? So you might think that there would be a strategic benefit to a government, for example, in manipulating the politics of the training data.
And we might think that censored training data could, in some circumstances, reinforce the state. In applications like predictive text, where we’re creating predictive text algorithms, the state might want associations that are reflective of its own propaganda, and not reflective of things that it would like to censor, to replicate themselves within these predictive text algorithms. Or in cases like recommendation systems or search engines, we might think that a state might want these applications trained on data that it curates itself.
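One way to write down the classification-error regression described above – a sketch reconstructed only from the verbal description here, so the paper’s exact specification may differ – is:

\left(\hat{y}^{\text{Baidu}}_{ij} - L_{ij}\right) - \left(\hat{y}^{\text{Wiki}}_{ij} - L_{ij}\right) = \alpha + \sum_{j} \beta_j \, \mathbf{1}[\text{category}(i) = j] + \varepsilon_{ij}

where L_ij ∈ {−1, +1} is the human label for the headline containing target word i in category j, and the two predicted labels come from classifiers that differ only in their pre-trained embeddings. A negative estimate of beta_j for, say, the democracy category would mean the Baidu-embedding classifier errs in the negative direction for that category relative to the Wikipedia-embedding classifier.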

Molly Roberts  27:20

On the other hand – and this I think is maybe less obvious when you first start thinking about it, but it became more obvious to us as we thought about it more – censored training data might actually make it more difficult for the state to see society, in ways that might undermine some applications. So, for example, content moderation: there are a lot of new AI algorithms to moderate content online, whether to censor it or to remove content that violates the terms of service of a website. If content moderation is trained on data that has had all of the sensitive or objectionable topics removed, it might actually be worse at distinguishing between these topics, from the state’s perspective, than if that initial training data had not been censored. So we can think about ways in which censorship of training data might actually undermine what the state is trying to achieve. The other area in which it could be problematic for the interests of the state is public opinion monitoring. If, for example, a lot of training data were censored in a way that removed opinions or ideas that conflict with the state’s, and that data were then used to understand what the public thinks – for example by looking at social media data, which we know a lot of states do – this could bias the outcome in ways that would make it harder for the state to see society. Just to give a plug for another paper, which is coming out in the Columbia Journal of Transnational Law: I work with some co-authors on the Chinese legal system, and we show that legal automation, which is one of the objectives of the Supreme People’s Court in China, is undercut by data missingness within the big legal dataset that the Supreme People’s Court has been trying to curate. So, in summary: data reflects the institutional and political contexts in which it was created. Not only do human biases replicate themselves in AI, but political policies also impact training data, which then has downstream effects on applications. We showed this in word embeddings, and in downstream NLP applications, as a result of the Baidu Baike and Chinese language Wikipedia word embeddings. But of course, we think that this is a much more general phenomenon that is worthy of future study, and it could have an effect in a wide range of areas, including public opinion monitoring, conversational agents, policing and surveillance, and social media curation. So AI can, in some sense, replicate or enhance a sort of automation of politics. There have been some discussions about trying to de-bias AI; we think that this might be difficult to do, especially in this context, where we’re not really sure what a de-biased political algorithm would look like. So, thanks to our sponsors, and I’m really looking forward to your questions and comments.

Allan Dafoe  30:41

Thanks, Molly. 66 people are applauding right now. That was fantastic. I know I had trouble processing all of your contributions, and I did a few screenshots, but not enough, so I think there’s a good chance we’ll have to have you flick back through some of the slides. A reminder to everyone: there’s a function at the bottom where you can ask questions, and then you can vote up and down on people’s questions, so it’d be great to have people engaging. So now, over to Jeffrey Ding for some reflections.

Jeffrey Ding  31:13

Great. Yeah, this was a really cool presentation. Dr. Roberts sent along an early version of the paper beforehand, so we can unpack the paper a little bit more in the discussion. But I just wanted to say off the bat that it’s a really cool paper, and it’s a good example of flipping the arrow: in a lot of related work in this area, people look at the effect of NLP and language models on censorship, and flipping the arrow to look at the reverse effect is really cool. It also speaks to this broader issue in NLP research where the L matters in NLP. Most of the time we talk about English language models, but we know there are differences across languages in how NLP algorithms are applied. We see that with low-resource languages like Welsh and Punjabi, where there are still a lot of barriers to developing NLP algorithms, and your presentation shows that even for the two most-resourced languages, English and Chinese, there are still significant differences, tied to censorship. And finally, the thing that really stuck out to me from the paper and the presentation is the understanding and integration of the technical details of how AI actually works, tied to political implications. One line that really stuck out, and that you emphasized in the presentation, is that the differences you’re seeing, and the downstream effects, don’t necessarily stem from the training data in the downstream applications, or even the model itself, but from the pre-trained word embeddings that have been trained on another dataset. That’s a really cool, detailed finding, and a level of nuance that you just don’t really see in this space. So I’m really excited to dig in. I have three buckets, and then a couple of thoughts to throw at you at the end. The first bucket is which target words to choose. I find it really interesting to think through this, because, for example, you pick election and democracy as two of the examples. And democracy actually brings up an interesting question, in that the CCP has kind of co-opted the word democracy, mínzhǔ. It actually ranks second on the party’s list of 12 core values that they published in December 2013, and Elizabeth Perry has written on this sort of populist dream of Chinese democracy. So I’d be curious whether you’ve thought about that – when you’re in different Chinese cities, you see huge banners with democracy plastered along them. And I wonder: what if you picked a target word like representation, or something that might speak to this more populist dream, or populist co-option, of what democracy means?

Jeffrey Ding  34:07

The second point is on the theoretical expectations, tied to this democracy component – whether we should expect more negative connotations related to democracy in the first place – and it’s this idea of the negative historical events and negative historical figures. The question is: why should we expect a more negative portrayal if these events and figures have been erased from the corpus? Shouldn’t it be basically not positive and not negative, just a neutral take? I think in the paper you kind of recognize this and say that there’s very little information about these historical figures, so their word embeddings do not show strong relationships with the attribute words, and I’m just curious whether we should expect the same thing with the negative historical events as well – Tiananmen Square being the most obvious example. And then on the results, one quick thing that surprised me a little was that you showed that Baidu Baike and Wikipedia perform at the same level of accuracy overall. Part of the setup of the initial question is that Baidu Baike has just become a much bigger corpus – there’s much more time spent on it, and it’s 16 times larger – so I’m just curious why we didn’t see the Baidu Baike corpus perform better.

Jeffrey Ding  35:37

And then, yeah, I had some comments on threats to inference – alternative causes other than censorship that could be producing the results. One of them was just a different population of editors, and it’s cool that you all have already done a survey experiment to address that. On that kind of alternative cause, I was thinking, as you were talking about the social media stuff: I wonder if the cleanest way to show censorship as the key driving factor would be to train a language model on a censored version of a sample of Weibo posts versus the full population of Weibo posts from a certain time period – that’s something that other researchers have used to study censorship. And then my last thought, just to open it up to bigger questions that I actually don’t know that much about – there are a lot of technical people on the webinar who could chime in on this point – is that the hard part about studying these things is that the field moves so fast. People are now saying that it’s only a matter of time before pre-trained word embeddings and methods like word2vec are completely replaced by pre-trained language models, like OpenAI’s work, Google’s work, ELMo, GPT-2, GPT-3. The idea is that pre-trained word embeddings only incorporate previous knowledge into the first layer of the model, and then the rest of the network still needs to be trained from scratch, whereas recent advances have basically taken what people have done in computer vision and pre-trained the entire model with a stack of hierarchical representations. So I guess word2vec would just be learning the edges, and these pre-trained language models would be learning the full hierarchy of features, from edges to shapes. It’d be interesting to explore whether, or to what extent, these new language models would still fall into the same traps, or whether they would provide ways to combat some of the problems that you’ve raised. But yeah, looking forward to the discussion.

Molly Roberts  37:53

Great, fantastic comments – thank you so much, I really appreciate that. Just to pick up on a few of them: yeah, we had certain priors about the category of democracy; we thought that overall it would be more negative, but of course we did discuss this issue of mínzhǔ and how it’s been used within propaganda in China. The way that we did it was we used both sets of word embeddings to look at all of the closest words to democracy and got about a hundred of those, so it’s not just mínzhǔ, but also all of the other things that are sort of subcategories of democracy. And it could be that for one of these words it’s different than for others. I think we’re seeing the overall category, but it’s something we should look into a little bit more, because it could piece out some of these mechanisms. Yeah, so one of the things we find with negative historical events and figures is that we get less decisive results in these categories, and we think this is because Baidu Baike just doesn’t have entries on these negative historical events and figures. I think this is one example of how censorship of training data can undermine the training corpus, because even from the perspective of the state – if algorithms were using this for, say, social media or censorship down the road – you would expect the state would want the algorithm to be able to distinguish between these things. But in fact, because of censorship itself, the algorithm is maybe going to do less well: we haven’t shown this yet, but we would expect it to do less well on the censorship task than it would have if the training data weren’t censored in the first place. So that’s an interesting kind of catch-22 from the state’s perspective. And it is interesting that Baidu Baike and Wikipedia, at least in our case, performed with about the same level of accuracy. There are papers showing that for certain more complicated models the sheer size of the Baidu Baike corpus is an advantage, but of course I think it depends on your application; in our case there wasn’t really a difference in performance or level of accuracy.

Molly Roberts  40:21

And I really like this idea of looking at censored versus uncensored corpuses of Weibo posts to try to understand how that could have a downstream effect on training; I think that would be a great way to piece that out. And then the point about pre-trained language models superseding pre-trained embeddings in this transfer learning task – I think that’s a really interesting development, and I think it only makes these questions of what the initial training data is in transfer learning more and more important. Because whatever has the most data tends to get used, and that data itself is then amplified by the algorithm downstream. And it’s hard to think about how to remove those biases without actually fixing the training data itself, or making it more representative of the population or the language, etc. So yes, I’m looking forward to more discussion, and thank you so much for these awesome comments.

Allan Dafoe  41:35

Great, Jeff, do you want to say any more?

Jeffrey Ding  41:38

Yeah, that last point is really interesting, because some people are saying that, basically, NLP is looking for its kind of ImageNet – this big, really representative, really good dataset that you can train the language models on and then do transfer learning to all these downstream tasks. And I think your paper really points to what happens if Baidu Baike becomes the ImageNet of Chinese NLP. I don’t know enough of the technical details to say whether there are ways in transfer learning to do some of the debiasing from the original training set, but I think the paper will obviously still be super relevant to wherever the NLP models are going.

Allan Dafoe  42:30

Great. Well, I have some thoughts to throw in, and I see people asking questions and voting, so that’s good – we’ll probably start asking those eventually. And also, you two should continue saying whatever you want to say. But just some brief thoughts from me. I also really like this idea of doing this kind of analysis on a corpus of uncensored data where you have an indicator for whether the post was censored. And of course, Molly, I think it was your PhD work in which you did this – so Molly’s already been a pioneer in this research design. And it’s not to say anything against this paper – I think it would just be a nice complement to this project, because here you have two nice corpuses, but it’s not obvious what’s causing the differences: it could be censorship, it could be fear, it could be different editors or just different contributors. Whereas with that design you’d get at the cause more directly, so I think that would be really cool. Okay, so a question I have – maybe one way to phrase it is: how deep are these biases in a model trained on these corpuses? I’d say that currently we don’t know of an easy solution for how you can remove notions of bias, or meaning, from a language model. You can often remove the connections that you think of, but then there are still lots of hidden connections that you may not want. Now, here’s maybe an idea for how you could look at how robust these biases are. I think it was your study three – I don’t know if you’re able to flick to that slide. You have a pre-trained model, and then in study three you gave it some additional training data. Okay, yeah.

Molly Roberts  44:36

I’ll go to the setup.

Allan Dafoe  44:39

Yeah, exactly. So you have your pre-trained model, and then you give it some additional training data – these Chinese headlines. And the graph I think I want to see is your outcome – how biased it is – as a function of how much training it’s done. Initially it should be extremely biased, and then as you train – I think this applies to study three, but if not study three, you could just do it for another corpus where you have the intended associations in that corpus – you could see how long it takes for these inherited biases to diminish. Maybe they barely diminish at all; maybe they go down very rapidly with not too large a dataset; maybe you never quite get to no bias, but you get quite low. So that might be one interesting way of looking at how deep these biases are and how hard it is to extract them. And I guess another question for you, and a question for anyone in the audience who is a natural language expert, is: are there techniques today, or likely on the horizon, that could allow for unraveling or flipping these biases – some way of doing surgery to change the biases in a way that’s not just superficial but fairly deep, without requiring almost overwhelming the pre-trained dataset? So, for example, maybe if you have this uncensored and censored dataset, you could infer what biases are being produced by censorship, and then, even with only a small dataset of this uncensored and censored data, you could subtract the biases you learn from it out of these larger corpuses. And I guess the question is, how effective would that be? I don’t expect we know the answer, but it might be worth reflecting on.

Molly Roberts  46:50

Those are really great and really interesting points, and, you know, we’re really standing on the shoulders of giants here. There’s this whole literature on bias within AI with respect to race and gender – I’m embarrassed that my TeX didn’t work, and I’ll post links to these papers later. Certainly one of the things that this literature has started to focus on is what the harms are in the downstream applications. So when you talk about how deep these biases are, I think one of the things we have to quantify downstream is: how is this data being used within applications, and then how does the application affect people’s decision-making or people’s opportunities down the road? I think that’s a really hard thing to do, but it’s important, and I think that’s one of the directions we want to go, inspired by this literature. I think how bias varies as a function of the training data is really interesting. We’ve done a few experiments on that, but I think we should include that as a graph within the paper. And certainly, as you get more and more training data, the word embeddings will be less important – that would at least be my prior. And this idea of how you would subtract out the bias: I like the idea of trying to figure out, if you had both an entire uncensored corpus and the information about what was censored, how to reverse-engineer what things are missing and then add that back into the corpus – that would be sort of the way to go about it. It seems hard, and one of the things it doesn’t overcome is self-censorship, because of course if people didn’t originally add that information to the corpus, you never even see it within the data. And also, training data itself is affected by algorithms. What people talk about on social media, for example, might be that a lot of people are talking about a lot of different political topics, but certain conversations are amplified, say by being moved up the newsfeed, and others are moved down. So then you get this feedback loop: the algorithm shapes what the training data looks like, and if you then use that again as training data, it amplifies the effect again. So I think there are so many complicated AI feedback loops within this space that they’re really difficult to piece out. But that doesn’t mean we shouldn’t try. Yeah.

Allan Dafoe  49:33

Yeah. A thought that occurred during your talk is that I can imagine the future of censorship is more like light editing. So I submit a post, and then the language model says, let’s use these slight synonyms for the words you’re using that have a better connotation – and you can imagine the whole social discourse being run through this filter of the right associations. And I guess a question for you on this is: is there an arms race with citizens? If citizens don’t entirely endorse the state-pushed associations, what countermeasures can they take? If one word is sort of appropriated, can they deploy other words? And I know there are kind of symbolic games where you can use a symbol as a substitute for a censored term. So is there this kind of arms race dynamic happening, where the state wants to control associations and meanings, and people want to express meanings that are not approved of, and so they change the meaning of words? And maybe in China we would even see a faster cycling or evolution of the meaning of words, because you have this cat and mouse game?

Molly Roberts  50:55

Yeah, I think that’s absolutely right. And I have definitely talked to people who have created applications for suggesting words that would get around censorship. That would be an interesting technological cat and mouse game, with AI being used to censor and also being used to evade censorship. I think one of the interesting implications, if you think about the political structure of AI, is that you might have a set of developers who aren’t necessarily political in themselves – they’re creating productivity applications, entertainment applications that are being used by a wide variety of people – and they’re looking for the biggest data, the most data, the data that’s going to get them the highest accuracy. And because of that, I think the state has a lot of influence over what types of training datasets are developed, and therefore a lot of influence on these applications, even if the application developers themselves are not political. And I think that’s an interesting interaction. I’m not sure how much states around the world have thought about the politics within training data, but I think it could be something that they start thinking about, and something to try to understand: as training data becomes more and more important, how they might try to influence it. Yeah.

Allan Dafoe  52:28

Good. Well, we’re at time, so I’m afraid the remaining questions will go unanswered. There was a request for a reference – what was the paper, the automating fairness paper? And I’m sure people are excited for this paper to come out, so we look forward to seeing it published, and to continuing to read your really fascinating and creative work. Especially the empirical work: your work is really thoughtful and effortful, in the extent to which you use different quantitative and experimental designs – almost field-experimental designs – to answer these questions. You can only deploy these experiments if you know the nature of the political phenomenon well enough, and have the resources to devise the experiments that you have been doing. So it’s very exciting work, and thanks for sharing the latest today.

Molly Roberts  53:39

Thanks. Thanks so much for having me. And yeah, thanks, Jeff, also for your fabulous comments.

Molly Roberts  53:47

Thanks, everybody, for coming.




21May

GovAI Annual Report 2018 | GovAI Blog


The governance of AI is in my view the most important global issue of the coming decades, and it remains highly neglected. It is heartening to see how rapidly this field is growing, and exciting to be part of that growth. This report provides a short summary of our work in 2018, with brief notes on our plans for 2019.

2018 has been an important year for GovAI. We are now a core research team of 5 full-time researchers and a network of research affiliates. Most importantly, we’ve had a productive year, producing over 10 research outputs, ranging from reports (such as the AI Governance Research Agenda and The Malicious Use of AI) to academic papers (e.g. When will AI exceed human performance? and Policy Desiderata for Superintelligent AI) and manuscripts (including How does the Offense-Defense Balance Scale? and Nick Bostrom’s Vulnerable World Hypothesis).

We have ambitious aspirations for growth going forward. Our recently added 1.5 FTE of Project Manager capacity, split between Jade Leung and Markus Anderljung, will hopefully enable this growth. As such, we are always looking to help new talent get into the field of AI governance. If you’re interested, visit www.governance.ai for updates on our latest opportunities.

Thank you to the many people and institutions that have supported us, including our institutional home–the Future of Humanity Institute and the University of Oxford–our funders–including the Open Philanthropy Project, the Leverhulme Trust, and the Future of Life Institute–and the many excellent researchers who contribute to our conversation and work. We look forward to seeing what we can all achieve in 2019.

Allan Dafoe
Director, Centre for the Governance of AI
Future of Humanity Institute
University of Oxford

Below is a summary of our research and public engagement, in addition to our team and growth.

Research

On the research front we have been pushing forward a number of individual and collaborative research projects. Below is a summary of some of the biggest pieces of research published over the past year.

AI Governance: A Research Agenda
GovAI/FHI Report.
Allan Dafoe

The AI Governance field is in its infancy and rapidly developing. Our research agenda is the most comprehensive attempt to date to introduce and orient researchers to the space of plausibly important problems in the field. The agenda offers a framing of the overall problem, an attempt to be comprehensive in posing questions that could be pivotal, and references to published articles relevant to these questions.

Malicious Use of Artificial Intelligence
GovAI/FHI Report.
Miles Brundage et al [incubated and largely prepared by GovAI/FHI]

Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. The report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats.

The report was featured in over 50 outlets, including the BBC, The New York Times, The Telegraph, The Financial Times, Wired and Quartz.

Deciphering China’s AI Dream
GovAI/FHI Report
Jeffrey Ding

The Chinese government has made the development of AI a top-level strategic priority, and Chinese firms are investing heavily in AI research and development. This report contextualizes China’s AI strategy with respect to past science and technology plans, and it also links features of China’s technological policy with the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, it benchmarks China’s current AI capabilities by developing a novel index to measure any country’s AI potential and highlights the potential implications of China’s AI dream for issues of AI safety, national security, economic development, and social governance.

Cited by dozens of outlets, including The Washington Post, Bloomberg, MIT Tech Review, and South China Morning Post, the report will form the basis for further research on China’s AI development.

The Vulnerable World Hypothesis
Manuscript.
Nick Bostrom

The paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology.

Discussed in Financial Times.

How Does the Offense-Defense Balance Scale?
Manuscript.
Ben Garfinkel and Allan Dafoe

The offense-defense balance is a central concept for understanding the international security implications of new technologies. The paper asks how this balance scales, meaning how it changes as investments into a conflict increase. To do so it offers a novel formalization of the offense-defense balance and explores models of conflict in various domains. The paper also attempts to explore the security implications of several specific military applications of AI.

Policy Desiderata for Superintelligent AI: A Vector Field Approach
In S. Matthew Liao ed.  Ethics of Artificial Intelligence. Oxford University Press.
Nick Bostrom, Allan Dafoe, and Carrick Flynn

The paper considers the speculative prospect of superintelligent AI and its normative implications for governance and global policy. Machine superintelligence would be a transformative development that would present a host of political challenges and opportunities. The paper identifies a set of distinctive features of this hypothetical policy context, from which we derive a correlative set of policy desiderata — considerations that should be given extra weight in long-term AI policy compared to in other policy contexts.

When Will AI Exceed Human Performance? Evidence from AI Experts
Published in Journal of Artificial Intelligence Research.
Katja Grace (AI Impacts), John Salvatier (AI Impacts), Allan Dafoe, Baobao Zhang, Owain Evans (Future of Humanity Institute)

In this expert survey, we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. The piece was the 16th most discussed article in 2017 according to Altmetric. It was reported on in e.g. the BBC, Newsweek, NewScientist, Tech Review, ZDNet, Slate Star Codex and The Economist.

Governing Boring Apocalypses: A New Typology of Existential Vulnerabilities and Exposures for Existential Risk Research
Published in Futures.

Hin-Yan Liu (University of Copenhagen), Kristian Cedervall Lauta (University of Copenhagen), and Matthijs Maas

This article argues that an emphasis on mitigating the hazards (discrete causes) of existential risks is an unnecessarily narrow framing of the challenge facing humanity, one which risks prematurely curtailing the spectrum of policy responses considered. By focusing on vulnerability and exposure rather than simply on existential hazards, the paper proposes a new taxonomy which captures factors contributing to these existential risks. The paper argues that these “boring apocalypses” may well prove to be more endemic and problematic than those commonly focused on.

Syllabus on AI and International Security
GovAI Syllabus.
Remco Zwetsloot

This syllabus covers material located at the intersection between artificial intelligence and international security. It is designed to be useful to (a) people new to both AI and international relations; (b) people coming from AI who are interested in an international relations angle on the problems; (c) people coming from international relations who are interested in working on AI.

Scaling Up Humanity: The Case for Conditional Optimism about Artificial Intelligence
In Should we fear artificial intelligence? an in-depth analysis for the European Parliament by the Scientific Foresight Unit.
Miles Brundage

This paper makes a case for conditional optimism about AI and fleshes out the reasons one might anticipate AI being a transformative technology for humanity – possibly transformatively beneficial. If humanity successfully navigates the technical, ethical and political challenges of developing and diffusing powerful AI technologies, AI may have an enormous and potentially very positive impact on humanity’s wellbeing.

Public engagement

We have been active in various public fora – you can see a sample of the presentations, keynotes, panels and interviews that our team has engaged in here.

Allan Dafoe has been giving several talks each month, including at a hearing to the Security and Defense subcommittee of the European Parliament, Oxford’s Department for International Relations, and being featured in the documentary “Man in the Machine”, by VPRO Backlight (Video, at 33:00). He has done outreach via the Future of Life Institute, Futuremakers and 80,000Hours podcasts.

Nick Bostrom participated in several government, private and academic events, including DeepMind Ethics and Society Fellows event, Tech for Good Summit convened by French President Macron, Sam Altman’s AGI Weekend, Jeff Bezos’s MARS, World Government Summit – Dubai, Emerging Leaders in Biosecurity event in Oxford, among others. In 2018 his outreach included circa 50 media engagements, including BBC radio and television, podcasts for SYSK, WaitButWhy, print interviews, and multiple documentary filmings.

Our other researchers have also participated in many public fora. Jeffrey Ding has, on the back of his report on China’s AI ambitions, given interviews to the likes of the BBC and has recently been invited to lecture at Georgetown University to DC policy-making circles. Additionally, he runs the ChinAI newsletter, weekly translations of writings on AI policy and strategy from Chinese thinkers, and has contributed to MarcoPolo’s ChinAI, which presents interactive data on China’s AI development. Matthijs Maas presented work on “normal accidents” in AI at the AAAI/ACM conference on Artificial Intelligence, Ethics, and Society, and presented at a Cass Sunstein masterclass on human error and AI (video here). Sophie Fischer was recently invited to China as part of a German-Chinese Young Professionals’ program on AI, and Jade Leung has presented her research at conferences in San Francisco and London, notably at the latest Deep Learning Summit on AI regulation.

Moreover, we have participated in the Partnership on AI working groups on Safety-Critical AI; Fair, Transparent, and Accountable AI; and AI, Labor and the Economy. The team has also interacted considerably with the effective altruism community, including a total of six talks at this year’s EA Global conferences.

Members of our team have also published in select media outlets. Remco Zwetsloot, Helen Toner and Jeffrey Ding published “Beyond the AI Arms Race: America, China, and the Dangers of Zero-Sum Thinking” in Foreign Affairs, a review of Kai-Fu Lee’s “AI Superpowers: China, Silicon Valley, and the New World Order.” In addition, Jade Leung and Sophie-Charlotte Fischer published a piece in the Bulletin of the Atomic Scientists on the US Defense Department’s Joint Artificial Intelligence Center.

Team and Growth

We have large ambitions and significant demands for growth. The Future of Humanity Institute has recently been awarded £13.3 million from the Open Philanthropy Project; we have received $276,000 from the Future of Life Institute; and we have collaborated with Baobao Zhang on a $250,000 grant from the Ethics and Governance of Artificial Intelligence Fund.

The team has grown substantially. We are now a core research team of 5 full-time researchers, with a network of research affiliates who are often in residence, coming to us from across the U.S. and Europe, from institutions such as ETH Zurich and Yale University. To signal our growth to date, as well as our planned growth trajectory, we are now the “Center for the Governance of AI”, housed at the Future of Humanity Institute.

We continue to receive a lot of applications and expressions of interest from researchers across the world who are eager to join our team. We are working hard with the operations team here at FHI to ensure that we can meet this demand by expanding our hiring pipeline capacity.

On the operations front, we now have 1.5 FTE Project Manager capacity between two recent hires, Jade Leung and Markus Anderljung, which has been an excellent boost to our bandwidth. FHI’s recently announced DPhil scholarship program as well as the Research Scholars Program are both initiatives that we are looking forward to growing in the coming years in order to bring in more research talent.




21May

Noah Feldman, Sophie-Charlotte Fischer, and Gillian Hadfield on the Design of Facebook’s Oversight Board


Noah Feldman is an American author, columnist, public intellectual, and host of the podcast Deep Background. He is the Felix Frankfurter Professor of Law at Harvard Law School and Chairman of the Society of Fellows at Harvard University. His work is devoted to constitutional law, with an emphasis on free speech, law & religion, and the history of constitutional ideas.

Sophie-Charlotte Fischer is a PhD candidate at the Center for Security Studies (CSS), ETH Zurich and a Research Affiliate at the AI Governance Research Group. She holds a Master’s degree in International Security Studies from Sciences Po Paris and a Bachelor’s degree in Liberal Arts and Sciences from University College Maastricht. Sophie is an alumna of the German National Academic Foundation.

Gillian Hadfield is the director of the Schwartz Reisman Institute for Technology and Society. She is the Schwartz Reisman Chair in Technology and Society, professor of law and of strategic management at the University of Toronto, a faculty affiliate at the Vector Institute for Artificial Intelligence, and a senior policy advisor at OpenAI. Her current research is focused on innovative design for legal and regulatory systems for AI and other complex global technologies; computational models of human normative systems; and working with machine learning researchers to build ML systems that understand and respond to human norms.

You can watch a recording of the event here or read the transcript below:

Allan Dafoe  00:09

Okay, welcome. Hopefully you can all hear and see us. I am Allan Dafoe, Director of the Center for the Governance of AI, which we often abbreviate GovAI, and which is at the University of Oxford’s Future of Humanity Institute. Before we start today, I wanted to mention a few things. One, we are currently hiring a project manager for the center, as well as researchers at all levels of seniority, for GovAI and the rest of the Future of Humanity Institute, including researchers interested in further work on this topic. So, for those of you in the audience, take a look. A reminder that you can ask questions in this interface at the bottom and vote on which questions you find most interesting. We can’t promise that we will answer them, but we will try to see them and integrate them into the conversation.

Okay, so we have a very exciting event scheduled: we will hear from Professor Noah Feldman about the Facebook oversight board and his views about what a meaningful review board for the AI industry would look like. Noah is a professor of law at Harvard Law School, an expert on constitutional law, and a prominent author and public intellectual. We’re also fortunate to have two excellent discussants with us today. Gillian Hadfield, who is in my bottom right, maybe it’s the same for you, is a longtime friend of GovAI. She is the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, where she’s also professor of law and of strategic management. She also has affiliations at the Vector Institute for Artificial Intelligence and OpenAI. Gillian has produced a lot of fascinating work, including some on AI governance; specifically, I’ll call out her work on regulatory markets for AI safety. She’s also doing some interesting work on how machine learning can learn and adapt to human norms. Our second discussant is Sophie-Charlotte Fischer. I’ve actually known Sophie from before GovAI was even established. She was a founding member of GovAI’s predecessor, which was called the Global Politics of AI Research Group; that was in 2016 at Yale, if you can believe those old days. She is currently a PhD candidate at the Center for Security Studies at ETH Zurich, and she continues to work with us as a GovAI affiliate. She has done great work on a range of topics, including the ethics and law of autonomous weapons, US export controls for AI technology, and the idea of a CERN for AI, specifically, perhaps, one that might be put in Switzerland. In 2018, she was listed as one of the 100 brilliant women in AI ethics, and in 2019 she was the Mercator technology fellow at the German Foreign Ministry. So thank you, both of you, for joining us.

Now I’ll share a little bit of background on how I came to this topic and learned about Noah Feldman. I first learned about him from reading one of his fairly recent books, on the US founding father James Madison. At the time I was, and I still am, struck by how much of the work in AI governance has the character of a constitutional moment: we have this opportunity, it seems, to set not just norms for the future, but also to build deeply rooted institutions which could shape decisions for decades to come. So at the time, I wanted to learn more about James Madison, as he seemed to be one of the best examples of how someone who is deeply committed to scholarship can have a centuries-long impact through the formation of long-lasting institutions. Noah Feldman, as I understand it, came to this conversation the other way around. In 2017, he had just finished publishing this biography of James Madison, and he was visiting the Bay Area talking to some people in the tech community, staying with his friend Sheryl Sandberg, when this insight, this constitutional insight, came to him that what Facebook needed was a Supreme Court. He sketched what that would look like; Sheryl Sandberg and Mark Zuckerberg were interested. And now, two-plus years later, the oversight board is on the cusp of starting its work and represents, in my view, a radical new experiment in corporate governance and technology governance. I find this origin story so fascinating because it shows, as with James Madison’s life, how a life of committed scholarship can suddenly, and potentially profoundly, offer useful insights that can shape history. So with that, we will now hear from Professor Noah Feldman about his thoughts on the Facebook oversight board and the governance of AI. Noah.

Noah Feldman  04:30

Thank you so much for that generous introduction, I’m really grateful for it. I have a special feeling for Oxford from when I was a doctoral student. And you say that 2017 was a long time ago, but I am of the generation who, when we were students at Oxford, the computer center was one room by the Old Parsonage Hotel with a bunch of mainframe computers that you could use for email. And the idea that the university, which treated my work on medieval Islamic political philosophy [garbled] talk about Aristotle and Plato than it was to talk about the Middle Ages, would eventually become a leader in spaces like the governance of AI was literally unimaginable. So I’m thrilled by that, and very excited to be here with all of you today, under the auspices of the GovAI webinar.

As you mentioned, I came to the issues here from the standpoint of governance, and specifically from the governance standpoint of constitutional design. If you think about it, constitutional design as a field is about the management of complex social conflicts through the creation of governance institutions. That’s not a terrible summing up of what the whole field of constitutional design is about. And as you mentioned too, I was thinking about constitutional design in a specifically American context, because of this book I wrote about James Madison, who was, after all, the chief designer of the US Constitution. But I had also been lucky enough to work most recently in Tunisia on constitutional design there after the Arab Spring, and earlier in Iraq on constitutional design there, although under much different and worse circumstances of US occupation. So the design issues were of recurring interest to me, but those were always in the context of states. It was always the state as the locus for the creation of a governance institution to manage some form of social conflict. And I was in fact at Stanford, giving a talk at Michael McConnell’s seminar. We’ll come back to Michael McConnell in a few moments, because, as some of you may know, he’s one of the new chairs of this Facebook oversight board, and he is a constitutional law professor and former judge. I was speaking entirely about Madison, and that was what was on my mind. And then, as you say, I was also having some conversations with people at Facebook about content moderation, because like so many other people in the field of free expression, which is one of my fields, I was trying to figure out what free expression was going to look like in a world where more and more expression took place on platforms. It was that juxtaposition of thoughts about the development of new challenges for content moderation and, simultaneously, the idea of institutional design to manage social conflicts through constitutional mechanisms, that I think led me to think, on a long bike ride up in the hills behind Palo Alto, that actually Facebook and other platforms could benefit from the introduction of a governance mechanism that has traditionally been used, and intensively for the last, you know, 50 or 60 years in liberal democracies, to manage the social conflict around what speech should be allowed and what speech should not be allowed, namely the Constitutional Court or Supreme Court model. That is, in its essence, a model where there is an independent body that is not directly answerable to the primary first-order decision maker, that has a set of clearly articulated principles on which it relies to make decisions, and that transparently describes to the world why it has made the decisions it has made, via an explicit balancing of competing social values, such as the value of equality, the value of dignity, the value of safety, and the value of free expression.

And so I thought, perhaps an institution like that could be tried in the context of the private sector of a corporation, even though essentially it had never been done before. And the reason that it hadn’t been done, I think, is largely that we imagine the institutional governance solutions associated with states as solely appropriate for the public sector, and not as appropriate for private actors or private entities. And of course, the difficulty with that restrictive view is that it deprives us of a whole realm where serious attempts to solve institutional governance problems have been made, on the grounds that, well, this is the private sector and not the public sector. If you imagine the kind of cognitive divide that people often make, they think: well, if the government is going to regulate us, then it would be appropriate for the government to use its institutional governance mechanisms; but if we’re going to be regulated by a private sector entity, a whole other set of mechanisms are appropriate and kick in. And that is really an artificial divide. I wouldn’t say it’s arbitrary, but it’s an artificial divide in the negative sense of the word artificial. I know with this audience the word artificial is itself subject to deep analysis. But it’s a divide that is not necessarily valuable pragmatically; it’s simply something to treat as an opening reality that one can then explore and potentially explode. And so that’s essentially what Facebook subsequently did. And I was lucky enough to be advising them throughout these last, you know, two and three quarters years, to the point now where their oversight board is in existence. It has four co-chairs and 20 total members. One of them, as I mentioned, is Michael McConnell. Others are people of different backgrounds: there’s a former prime minister of Denmark, there’s a prominent constitutional law professor at Columbia University Law School, there’s the dean of a law school in Colombia who is also a special rapporteur for the United Nations on free expression. They’re a diverse group of people from all over. The core design remains the one that basically struck me on the bike ride. And again, just to reiterate: it’s independent, its members are appointed to three-year terms that are automatically renewable, and therefore they are not hired and fired by Facebook. They are paid, but they’re not paid by Facebook; they’re paid by an independent trust that was funded by Facebook and then spun out of Facebook to become independent. Their decisions will be rendered transparently and publicly, they will give reasons (and reason giving is hugely important in this context), and their decisions will balance competing values. And not least, in addition to its so-called Community Standards, which are the content moderation rules that Facebook has, Facebook has also articulated a set of high-level principles, what they call values, that function effectively as a set of constitutional values here and are also relevant to the decisions that will be made; international legal principles will inform but not dictate results.

So that’s the basic structure of what’s going on here. I’m thrilled to answer questions about the technical sides of this, the difficulty of it, the design of it. I want to say one or two words about its purpose overall, and about two ways to look at it: a more optimistic way and a more cynical way. And then from there, I’m going to tack to talking about ways that similar or related governance models could potentially be used in other contexts, including in the context of the governance of AI. So that’s my thought roadmap. Let me start with two ways of thinking about the purpose of the oversight board; let’s call them a publicly interested way and a more cynical, corporate-interest way. And let’s start with the more publicly interested approach. Because I do constitutional law as my day job, we always have to look at everything through these two lenses, right. Every constitution in the world expresses high-flown values, and is institutionally implemented by people who often really believe in those values. And yet every constitution is also a distribution of power by real people in the context of real governments and real states, where politics and self-interest dominate decision making, as we all understand it in the real world. So for those of you who don’t move in a world where these two frames are constantly going back and forth, I just want to flag that these are the two frames that I use all the time, and I’m going to use them here.

From the publicly interested frame, it’s just really clear that crucial decisions on issues that affect billions of people should not ultimately be made by unelected tech founders, CEOs and COOs. And I think that’s rather obviously true in the case of free expression, as Mark Zuckerberg is the first to acknowledge: he should not be deciding himself whether the President of the United States has or has not breached some ethical principle of safety when he criticizes Black Lives Matter protesters, and therefore should have his content taken down, or whether the President of the United States, running for office, is participating in a political process that needs to be facilitated, and therefore what he says should be left up. That’s just much too important a decision to be left to Mark, or to Mark and Sheryl, or to the excellent teams that they nevertheless put together. I would argue that it goes even beyond those kinds of hot-button issues and extends to the more, you know, in-the-weeds but hugely important questions. What counts as hate speech? What hate speech should be prohibited? What hate speech should be permitted, because it’s necessary to have some forms of free expression? What forms of human dignity are respected by displays of the human body? What forms of human dignity might be violated by certain displays of the human body or certain human behavior or conduct? These are questions on which reasonable people can and do disagree. They’re questions that implicate major forms of social conflict. I’m not a relativist; I don’t think there are no right answers on these questions. But I do think there’s a lot of variation in what different societies might come up with as the right answers. And especially when you consider that the platforms cross social and political and legal boundaries, it just makes almost no sense for the power to make those ultimate decisions to be concentrated in just a few people. Now, that doesn’t mean that the decisions don’t have to be made; there has to be responsibility taken. And so the objective of a devolutionary strategy, which is what the oversight board uses, is to ensure that there are people making these decisions who are accountable in the sense that they give reasons for their decisions, accountable in the sense that they explain transparently what they’re doing, accountable in the sense that they can be criticized, but who are nevertheless not easily removable by the for-profit actors who are involved. The result of this, again speaking in terms of the public interest, should be – it may not be, but experimentally it ought to be – some legitimacy for the decision-making process. And what I’m talking about here is legitimacy in what philosophers call the normative sense of legitimacy: something is legitimate because it should be legitimate, it ought to be considered legitimate. And from a publicly interested perspective, we should all want important decisions to be made in ways and with values that ultimately serve the goal of public legitimacy. Now, let me turn briefly to the cynical perspective, the cynical, self-interested perspective. Facebook is a for-profit company; it is governed under the corporate law of the United States. And by virtue of being governed in that way, its management and its board of directors have certain fiduciary duties to its shareholders, which include the duty to make it an effective and profitable company.

If Facebook’s senior management hadn’t believed that it was in the interests of the company to devolve decision-making power on these issues away from senior management, they would actually have been in breach of their fiduciary duties in advocating and then adopting the strategy. So in that sense, when somebody says to me, and people do say to me all the time, well, Facebook just did this because it’s in Facebook’s self-interest, my answer to that is twofold. First of all, yes, that’s absolutely right; they would have been in breach of their own fiduciary obligations if they had thought they were acting against the company’s interests. And my second is: please show me any example, anywhere in the world, of any person or entity with power giving up power for any reason other than that they believed, in that given circumstance, they had more to gain by giving up that power than by keeping it. I mean, this is an insight from constitutional studies. Any would-be dictator would like to just be the dictator all the time. It’s really nice to be the dictator. But we recognize that governments based on dictatorial principles are frequently, not always, but frequently unstable, and effectively lead to bad outcomes, not just for the general public, but also for the dictators, who have a bad habit of ending up dead rather than, you know, beloved and in retirement. And so systems of power that involve power sharing are always shot through with structures of self-interest.

So then that raises the question of why anybody should trust an oversight board, or any other devolutionary governance experiment, that is adopted by for-profit actors. We might imagine that if the state imposes something, then it would reflect the public interest, but if it’s adopted by private actors, we might say it should never be trusted. Well, part of the answer to that is that even state bodies aren’t purely publicly interested. You know, political science as a field has spent much of the last half century showing the ways that state actors, including governmental actors, are privately interested, notwithstanding that they have jobs where, in principle, they’re supposed to be answerable to the public. So there is no perfect world where everybody is perfectly publicly interested. But more importantly, the reason that the public should be able, under some circumstances, to trust a system put in place through the self-interest of corporate actors is that it is in the self-interest of those corporate actors to be trusted. And to do so in this day and age, they must impose transparency, independence and reason giving, not because it’s their first-order preference (after all, this is not how content moderation was initially designed on any major platform), but because they realize that they have so much to lose by continuing the model that they’ve been following, and they need to try something new. So the cynical view is that in this day and age, companies can’t get away with merely appearing to devolve power or appearing to be legitimate; they have to actually go ahead and do it. You might say, well, their most effective game-theoretic strategy is to appear to not be getting away with it, but actually to be getting away with it. That might be true. And it’s an empirical proposition to say that in this day and age, with as much scrutiny and skepticism as exists, it’s very difficult for a corporate actor to get away with that, in a way that might not have been true as recently as a quarter century ago.

Okay, let me say a word now about other contexts and other possible governance solutions. You know, having spent the better part of the last three years incredibly focused on the problem of content moderation and the solution of a governance institution modeled loosely on a Constitutional Court, I have now shifted my own attention to trying to think about other kinds of governance institutions that could also be borrowed from different places, shapes and contexts, and that might be appropriate to the governance of other kinds of social conflicts that arise in technology spaces. And here I come close to the topic of your ongoing seminar and to your program, namely the question of the governance of AI. Now, we do have, and I actually had it right in front of my face – not right when I was designing this thing in the very beginning, but very quickly in the process – the Google AI committee that came into existence and went out of existence in an incredibly short period of time, a story that you all know better than I do. So I had in front of me, from an early moment, exactly what not to do.

So we can stipulate that the model of a corporate-appointed group of potential advisors, on its own and without more, is a high-risk and unstable model to adopt, at least in circumstances where the corporate actor would react very negatively to criticisms of the membership of the board. But that doesn’t mean that there aren’t other mechanisms worthy of being explored, and these are other models of governance. So let me just name a few of them, and then we can talk more in our conversation about which of these might be adaptable, in different circumstances, to different aspects of AI governance. One interesting model that comes not purely from the public sector, but from the educational and medical sector, is the model of the institutional review board, or IRB. Those of you who are social scientists are used to dealing with IRBs, and the same will be true of those of you in the harder sciences whose work interfaces with important ethical considerations. IRBs are quasi-independent bodies, typically constituted and composed by institutions, universities and hospitals most typically, that have full authority to approve or disapprove proposed research plans or programs. Their power is enormous, as anybody who has ever dealt with an IRB knows. It’s subject to abuse, like all great powers, and the question of how to govern IRBs is itself a rich and important question. But the IRB model is a model that, remarkably, hasn’t really been tried in the private corporate sector. Sometimes there’s overlap, because if you are a researcher at Harvard Medical School and you have a great idea, you form a company, but you also continue to do research in the university; so you need to both go through an IRB and then discuss it with your investors. So there are some points of overlap. But we don’t really have an institutionalized IRB model in place in the private corporate sector. Now, IRBs have something in common with the oversight board that Facebook has created, because they’re meant to be institutionally independent, but they still belong to the institution. The Supreme Court of the United States is part of the US government, but it’s also independent, and its independence is assured by certain institutional features, life tenure, etc., etc. It’s not without government influence; we see that right now in the United States, where we’re in the middle of a huge fight over our next Supreme Court appointee. So you see, there’s a politicization of one aspect of the process. But part of the reason for the intensity of that fight is that once appointed, the Justice will have complete independence.

IRBs are typically part of, technically, the university or hospital with which they’re affiliated. So in that sense they’re part of that entity, and therefore they internalize some sense of responsibility, but their members typically come from the outside, and they cannot have their judgment overruled by the institutional actor that convenes them. So could corporations create IRBs on their own? One option is that corporations could create independent IRBs of their own, if they offloaded management and devolved it through a [garbled] foundation in the way that Facebook has done. That’s very expensive, and it requires long-term commitments, but it can be done. Another alternative is to have IRB-like independent entities created by third parties. Those could be nonprofit foundations that produce their own IRBs, which are then selectively employed by companies that are looking for independent judgment. Or one can imagine, and I’m toying with trying to create one of these right now, a private entity being created, either for profit or not for profit, but a private entity not growing out of an existing foundation, that maintains an IRB, or multiple IRBs with subject matter expertise, that can be, as it were, rented by the corporation, which says: gee, we’re going to be making the following difficult decisions over deploying our AI for the next two years or five years; we publicly commit ourselves to submit those decisions at a given juncture point to this independent IRB, which has AI subject matter expertise alongside ethicists, stakeholder interests and other sorts of interests. Now, there are all kinds of technical issues that need to be worked out here, which I’m happy to talk about, but I think they’re all in the realm of tractable problems. The overall model, though, would be to actually devolve some meaningful power to these IRBs, and for their decisions to be not merely advisory, but to function as actual choke points for the corporate actor. You may ask why any corporate actor would ever want to agree to do that. And the answer is self-interest: the corporate actor might be aware that in order to get credibility for its decisions, it needs to have those decisions blessed by a body that can only give a meaningful blessing if it can also prohibit or block certain lines of conduct or behavior. And I think there is a game-theoretic situation where that becomes desirable and even necessary from the standpoint of the company. Transparency is a really interesting issue here. And I don’t need to tell all of you that transparency, challenging as it is in any corporate domain, is doubly or triply hard in the context of AI, where you have to deal first with proprietary technologies, but also with the – fascinating to me as an outsider to AI – conceptual problem of what counts as transparency in the case of certain machine learning functions that may not be fully interpretable. I mean, there’s a fascinating conceptual question there, and I’m sure you’ve all spent time on this. When I taught a seminar on some of these issues a couple of years ago, we spent a couple of sessions on this fascinating issue of what counts as transparency in a situation where you have a genuinely uninterpretable algorithm – where, again, [garbled] I understand is also a debatable term – but an algorithm that we are not able to interpret under given circumstances.
There are very rich and fascinating questions there that deserve close scrutiny and attention.

That said, it is possible using an IRB structure to maintain selective confidentiality. So you could imagine a FinTech company that is using a proprietary machine learning algorithm to sort the credit-worthiness of applicants; profound social conflict is inevitably going to arise there, and I can say a few more words about that if people are interested. There are many subtle questions to be worked through. For example, does the algorithm pick out discriminatory patterns that already exist in society? Does it reinforce those? If the algorithm is, quote unquote, “formally instructed” to ignore those patterns, will it then replicate them nevertheless, by virtue of picking out a proxy that the algorithm is capable of picking out? These are incredibly rich, fascinating issues. I know you’ve spoken about them before here, and I’m happy to discuss them as well. But one could imagine a private company with a proprietary algorithm just saying to the IRB: listen, we will show you what’s under the hood; you will agree not to share that with anybody else, but in your public account what you will say is that you have been under the hood, and that what you consider to be the cutting-edge techniques for managing and limiting discriminatory effects have been employed here, and those techniques are such and such. Right. So imagine you agree with a very, very brilliant new professor at Columbia Law School, Talia Gillis, a recent graduate of the PhD and SJD programs at Harvard, who worked with me. One of Talia’s arguments is that the only really reliable mechanism for evaluating discriminatory effects in a range of algorithmic contexts is running empirical tests of those algorithms and measuring outcomes, much in the way that, historically, governmental or private actors who were trying to use existing law to constrain private discriminatory conduct in, say, the housing context or the employment context ran empirical tests to see whether a given company or institution was discriminating. So imagine one has Talia’s view – it’s not the only possible view, but imagine one has that view – well, then what the IRB would do is say: we self-certify that we’ve run those tests, we’ve used the cutting-edge, you know, approach, and we’ve created a supervisory protocol where those tests will be run regularly on the data as it develops. And so we’re not showing you what’s under the hood, but we’re telling you transparently what our approach is, we’re telling you transparently what the research is, and we’ll probably be able to show you the results transparently, or compel the private actor, the corporate actor that has the proprietary algorithm, to do so. That’s just a sketch of an example of how this kind of institutional governance mechanism might potentially work.
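
To make the kind of outcome testing described above a little more concrete, here is a minimal sketch, purely illustrative and not drawn from the discussion itself, of how an auditor might measure approval-rate disparities produced by a credit-scoring model. The function names, the "group" attribute, and the toy decision rule are assumptions made for illustration, not anyone's actual audit protocol.

```python
# Illustrative sketch only: an outcome test for a hypothetical credit-scoring
# model. All names and data here are invented for the example.

from collections import defaultdict


def approval_rates(applicants, decide):
    """Measure approval rates per group for any decision function.

    `applicants` is a list of dicts that each carry a 'group' key;
    `decide` is any callable returning True/False for an applicant.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for a in applicants:
        approved, total = counts[a["group"]]
        counts[a["group"]] = [approved + int(decide(a)), total + 1]
    return {g: approved / total for g, (approved, total) in counts.items()}


def disparity_ratio(rates):
    """Ratio of lowest to highest group approval rate.

    Values well below 1.0 flag a potential disparate impact worth examining.
    """
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0


if __name__ == "__main__":
    # Toy data and a toy rule standing in for the proprietary model.
    toy_applicants = [
        {"group": "A", "income": 52_000},
        {"group": "A", "income": 31_000},
        {"group": "B", "income": 47_000},
        {"group": "B", "income": 29_000},
    ]
    toy_model = lambda a: a["income"] > 40_000
    rates = approval_rates(toy_applicants, toy_model)
    print(rates, disparity_ratio(rates))
```

An IRB-style reviewer could run a test of this general shape periodically against fresh application data and publish only the aggregate disparity figures, keeping the model itself confidential, which is roughly the arrangement sketched in the paragraph above.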

So that’s the sort of, you know, private IRB or independent IRB type of approach. Then there are some other potential governance mechanisms that are also worth thinking about, that go outside of the IRB context and that could also be borrowed from various institutional structures. There are industry-level regulatory bodies that could be created, which are always subject to the skepticism that they’re just like the Motion Picture Association, you know, the MPAA, controlled largely by their members. But it’s possible to create more robust, industry-wide regulatory actors which, again, by use of transparency, independent funding, and real independence from the corporations that constitute them, could engage in regulation of a kind analogous to what a governmental regulatory agency might do, but could do it more efficiently than a government regulatory agency, and could potentially also maintain certain kinds of confidentiality to a greater degree than a government institution might be able to do. So there you have a full range of different regulatory mechanisms; you know, European governance is different from Chinese governance is different from US governance, and one can pick and choose in the institutional design process to obtain the best features, or the most appropriate features, here. And so there’s a full-scale set of options for what I would call private, collective regulatory governance that, again, look familiar in the context of state regulation but avoid some of the problems that scientists and corporate actors alike inevitably fear when they start thinking about government regulation: among them, external political influence on that regulation, and a tendency to always be as conservative as possible to avoid criticism – sort of, you know, cover yourself in the worst-case scenario of danger or risk. So that’s yet another set of techniques that can be borrowed from the public sector, suitably adopted and tweaked.

I could go on about, you know, other possible directions, but I won’t, because I want to leave as much time as possible for conversation. So I’m going to pause there and just say, in conclusion, that I’m eager to talk about the particularities of how the Facebook model is working, but I’m also really eager to speak about other potential directions and options that might be more suitable in some of these AI contexts than the full-on Constitutional Court-like review board, and those may be IRB style, they may be regulatory style, and there may be other techniques too. I have some other ideas for other things; they’re not as well developed, but maybe you can get me to throw them out there in conversation. What these potential directions all have in common is the idea that we can learn from institutional governance solutions from different contexts and try to adapt and adopt them, and we should never say that something will never work here because it comes from another context. Rather, we should say: in these other contexts, these things have these benefits and these costs; how might we adapt them to our needs, such that we capture some of the benefits while reducing some of the costs? So thank you all for listening, and I’m looking forward to our conversation.

Allan Dafoe  35:17

Thank you, Noah. That was fantastic. And I’m sure if we were live, there would be a very enthusiastic round of applause from the 90-plus people in this room. I have lots of thoughts and questions, and I was very stimulated by this. But it’s now Gillian’s turn, and our honour, to hear her share her thoughts.

Gillian Hadfield  35:36

Great. Thanks, Allan. And thanks, Noah, that was terrific, really. It’s such an important thing to be discussing, and I really like the way you wind up there with: we need to be looking for alternative regulatory models, we should look at and draw on other models out there, and then think creatively about what the demands of regulating in the AI context are and how we meet those demands. Lots and lots for us to discuss; I want to try to keep focused on a couple of points. First, I think it’s fantastic that somebody with your background, thinking about the origins of democracy and the development of the constitutional system, is bringing that context here, because I do think we are at – maybe Allan is the one who used the phrase – a constitutional moment. I do think we are at a point in time where we are seeing kind of the same question as, you know, Magna Carta 1215, where we have entities – they’re not monarchs, but private corporations – that have become the dominant aggregations of wealth and power, and they’re defining so much about the way our lives work. So to be exploring, okay, how do we democratize that process, is absolutely critical. I think it does raise the question: is that the right way to do this? Is it both feasible, and is it desirable, to democratize private technology companies? You’re exactly right to frame it up that way, and I’m pretty sure I agree with you that the people running these corporations hate this: look, I don’t want to be the person deciding, you know, which of Trump’s tweets can be left up or what postings can be there. But as you pointed out, this is a global platform with two and a half billion people on it. It’s just something we’ve never seen before. And so I think the question of democratizing that – both is it feasible, and is it desirable, where the [garbled] – I think that’s exactly the way we have to be thinking about this moment in time. And then I want to say a little bit about, you know, the appeal to existing institutions. We’re talking about the Facebook Supreme Court particularly, but your comments sit in a broader context of thinking more generally about that: IRBs, and so on. One of the things I think we’re also at a point of recognizing is that the set of institutions we created over the last couple of hundred years – in the context originally of the commercial revolution, then the Industrial Revolution, the mass manufacturing economy, the nation state-based economy and society – those institutions in many ways worked fabulously well for significant periods, but there are lots of ways in which they are no longer working very well. So we’re talking here, you’re sort of using a model of the high-level Constitutional Court, but a lot of the issues we’re facing are like: you know, I’m user 5262, if I got in there early, I guess, and I’ve got a picture from my, you know, party that I want to post, or I have a political statement I want to make. And those numbers are in the millions; I was trying to get the figure – yeah, something like 8 billion pieces of content were removed from Facebook in 2019. These are just massive, massive numbers. And one of the things we know about our existing institutions, which are heavily process-based and phenomenally expensive, is that the vast majority of people have zero access to them.
Now, certainly that’s true if we go up to the level of the Supreme Court, and you’re not proposing that we create something that is going to be responsive to every individual who has a complaint. So I totally get why you have focused on, you know, jumping up to the Supreme Court rather than saying, you know what, let’s start with our trial courts. But I think that’s actually a really critical thing for us to be thinking about: these processes are incredibly expensive. They end up being like little pinholes of light into this big, big area. What can we be doing to, in fact, bring many, many more people into this process of expressing and communicating and constituting the norms of what we consider to be okay and not okay on our platforms – just to focus on that framing in the context of, you know, free speech, and the other values that are lined up against free speech? What can we be doing to incorporate that? Now, I think that’s where we just have to come to grips with this massive mismatch between the huge volume, the cost of the process, and – I’ll go back to that language – the democratization of that process. I think we will not develop methods that are responsive that don’t ultimately involve AI. You’ve mentioned some of those. And actually, your concept of privately recruiting an IRB to review, under confidentiality provisions, you know, what’s under the hood in a model, and so on, I think is great.

I’ve been thinking about comparable models; Allan mentioned at the outset some of this work on regulatory markets, and I think we do need to be figuring out how we are going to simultaneously get investment into methods of, in this instance, content moderation that are still responsive and legitimate. But I think we’re going to have to figure out ways to incorporate a lot more people. So I’m particularly interested in thinking through the technical as well as the legitimacy challenge of how you can get many more people involved in that process. And I actually think that’s really important, not just from the point of view of thinking about equality or equal participation, but also because it’s fundamentally critical for the constituting of social order that our norms are deeply rooted in ordinary people, their ordinary lives and communities. And of course, once you start talking global, that’s where it becomes tremendously difficult. I worry a little bit that, you know, the model of the Supreme Court, with an elite group and so on, is going to find it actually pretty difficult to make that progress. One of the things I also want us to think about, to go back to this point about democratizing the private technology company, is that you might say we can recruit the market here, in the sense that part of the global issue is, you know, that Facebook is such a massive platform and dominates the space so incredibly. Is there a role for communities to be working here? You know, can we develop groups within our platforms, multiple platforms, where people can basically, in the [garbled] from political scientists, vote with their feet – you know, vote with their browser – for which platform, and which values and which norms, they want to follow? I think there are a lot of challenges to think about. I think this is the great challenge: how do we figure out how to respond to this massive scale, and respond to the global nature of these platforms, without taking decision making further and further and further away from ordinary people and their experiences, and their experience of being a member of a community who is seen and recognized in these environments? I’ll stop there. Thanks.

Allan Dafoe  43:14

Thanks, Gillian, that was great. I’m thinking, actually, perhaps, Noah, we can give you some time to reflect and respond. And I’ll use the fact that I have the mic right now to add on to Gillian’s comments the specific question of the choice of whether to have a global court and sort of a global moderation policy versus culturally specific policies. You discussed at one point having regions or countries, but yeah, that’s sort of just an interesting question.

Noah Feldman  43:43

Thank you. Thank you, Gillian. Those are really very rich and hugely important issues that you’re raising. So let me just say a few words about them. If I could summarize, and maybe slightly oversimplify, the argument that you’re making, it’s that we need greater democratization, greater public access, more people involved, I think you said, in order to aim at constituting social order. And you expressed a concern, which is completely correct, that the Constitutional Court model – and this I think would also be true of an IRB model – tends to rely on a smaller, elite group of people to make the relevant decisions. And I think that’s a correct analysis of what’s going on in both of those contexts. So I want to start by just acknowledging how incredibly challenging this problem is in democracies – not in, you know, platforms, not in AI, not on social media, but just in democracy, right. How does one get genuine public participation in decision making?

It remains the central problem in most developed democracies: some have great turnout, where lots of people show up to vote, but many have relatively weak turnout, where not that many people show up to vote. Voting, as political science has repeatedly demonstrated, is subject to all kinds of strange problems of principal-agent control, and doesn’t always give ordinary people all the options that they would like to see represented. And there are tweaks for that, proportional representation tweaks, which have their own consequences, like the production of a parliament with so many parties in it that it becomes very difficult for anything to get done, even though a greater set of points of view are represented. There’s a set of complex trade-offs that arise there as well. You know, the strongest critics of contemporary liberal democracy would probably say that one of the worst things about it is that it purports to give the public opportunities to participate, and doesn’t actually give that to them, or gives them some simulacrum of participation. So that’s just to deepen the problem that you’re describing: even if we could borrow some of the features that come from democracy, that might not solve our problems, because democracy itself is struggling. Making it much harder is the problem that I like to sum up with an example that I’m sure most or all of you know about, the example that arose a few years ago, when Great Britain decided to use an online voting process to name a new warship. As you will recall, the eventual winner was not Intrepid, or Valor, or Harry and Kate, but Boaty McBoatface. And sometimes in our conversations at Facebook, around the difficulties of democratization, we just summed up this problem, which I’ll say a word about, in that phrase: Boaty McBoatface. And, you know, to formalize the Boaty McBoatface problem: so far, it seems that when voting techniques are used online, the fact that the end user is very far from internalizing the costs of his or her vote makes it appealing – or at least not unappealing, maybe more importantly – to cast votes that are silly, frivolous, or humorous. And so, you know, Facebook had actually experimented, more than a decade before I got involved with them, with a regulatory democratization approach, which is famous only in the small circles of people who care about online regulatory democratization. It was an utter disaster. They said, basically, that they wouldn’t make certain kinds of major changes on the platform without getting a certain number of votes from a certain percentage of users. They couldn’t get participation comparable to what they needed to get anything done. And it was also very subject to capture – again, a political science concept that’s very familiar here – by small groups of concentrated people who had an interest and could generate votes. And sometimes I wonder, like, how did it happen that, you know, a random constitutional law person made a suggestion and Facebook decided to do it? Of course, the reason was that when I came to Sheryl with this idea, and then she brought it to Mark, Mark had actually been thinking for years, for at least a decade, about potential ways to devolve power. But the problem that he and the very, very smart people around Facebook kept bumping into was that if you devolve power, you want to democratize it.
And if you democratize it, you run into cycling problems, and capture problems, and Boaty McBoatface problems. And just to finish the thought – then by all means jump in – when I look at it from the outside, I think: oh, no wonder they liked this solution. Because it was about devolution without democratization. It was devolution into an institutional structure like a court, which is not, technically, you know, a democratizing structure. So this is all by way of acknowledgement. And then I’ll say a word – you should speak, Gillian, and then I’ll say a word about what I think might be scary.

Gillian Hadfield  48:58

Yeah. I just want to jump in there and say: I think the challenge of developing regulatory models here that are democratically responsive is as big a challenge as building AI. And it’s also why I’m sort of focused on regulatory markets models, because I think we need to attract investment into this problem in the same way that we attract investment into building AI. So when I think about voting: I think that’s inevitably going to be a poor fit; that was a technology that worked at various times, but I don’t think it is going to work here, and you’ve said that it’s been tried. But being able to read the normative environment is something we now have tremendous tools for – like, what is the reaction to different kinds of content? I think we could be building machine learning models that are reading the rich, dense, massive volume of responses, and I think we should be figuring out how to do that and how to make it more legible. I don’t think it’s only voting; I think we just sort of say we want the idea of voting, but, as you say, that’s kind of broken in our current offline worlds, and I’m not surprised it doesn’t carry over. Anyway, I just wanted to jump in there with that thought as well.
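
As a purely illustrative aside, not part of the discussion itself, one very simple first step toward "reading the normative environment" would be to aggregate structured user feedback on moderation decisions and surface the rules where community opinion is most divided. The record fields and rule names below are assumptions made for the example; the richer machine-learning models Gillian envisions would go well beyond this kind of toy tally.

```python
# Toy sketch (illustrative only): tally agree/disagree feedback on moderation
# decisions per policy rule and rank rules by how contested they are.
# Field names ("rule", "agrees") are invented for this example, not a real schema.

from collections import defaultdict


def contested_rules(feedback, min_responses=100):
    """Return (rule, split_score) pairs, most contested first.

    `feedback` is an iterable of dicts like {"rule": "hate-speech", "agrees": True}.
    A split score of 0.5 means opinion is evenly divided; 0.0 means unanimity.
    Rules with fewer than `min_responses` responses are ignored.
    """
    tallies = defaultdict(lambda: [0, 0])  # rule -> [agree, total]
    for item in feedback:
        agree, total = tallies[item["rule"]]
        tallies[item["rule"]] = [agree + int(item["agrees"]), total + 1]
    scores = {
        rule: min(agree / total, 1 - agree / total)
        for rule, (agree, total) in tallies.items()
        if total >= min_responses
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    sample = [
        {"rule": "hate-speech", "agrees": True},
        {"rule": "hate-speech", "agrees": False},
        {"rule": "nudity", "agrees": True},
    ]
    print(contested_rules(sample, min_responses=1))
```

The point of the sketch is only that signals like these can be made legible at scale; deciding how much weight such aggregates should carry is exactly the legitimacy question discussed in the surrounding exchange.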

Noah Feldman  50:15

A couple of thoughts on that. First, I actually think it’s harder than AI, because we’re still at an early stage of AI, and yet the problem that you and I are talking about now, of giving the public legitimate access to democratic participation, was posed explicitly by Plato and Aristotle. For about 2,500 years, smart people have been thinking about it, and nobody’s really solved it. You could say that the most intense phase, of trying to mobilize a mass democratic public to make decisions effectively, probably goes back to the French Revolution; so let’s just say it’s been the last 200-plus years that people have been trying to do it, and a lot of really smart people have focused on it and haven’t really solved it. So I think it’s even harder. I also think it’s interesting when you say maybe we could use AI in order to solve it. And there will be hundreds of people out there – if there are hundreds of people listening; I’ve got a 222-people mark at the bottom of my screen, but I don’t know if that’s the number of people listening – but if there are, then there are 221 people better than I am at answering the technical question of whether current techniques of aggregation are promising for doing what I would call normative political theory: you know, substantive analysis of what people are saying out there, so as to glean a direction, maybe, but also so as to glean a set of arguments about legitimacy. That’s a hard problem. I don’t claim that it’s an insoluble problem, just that it’s a genuinely hard problem. And if we were over in the seminar room where we talk about administrative and regulatory law, and Gillian were to say, you know, we should improve our legitimacy by using machine learning tools to get a sense of what all the comments out there are, I would have said: interesting, doubtful, tell me more, I guess. And maybe it would work.

Just a last thought on this: I have a kind of approach to the problem that Gillian is talking about. And the approach is to say that we actually have a series of legitimating techniques that we use when mass voting doesn’t work very well, and those include transparent reason giving and subjection to intense public criticism. When a regulatory body is silent and behind closed doors, and is not easily open to analysis, it tends to lose legitimacy. And, you know, those of you who are in the UK and lived through the Brexit process probably know, wherever you were on that issue, that the perception – I’m not speaking of realities now, but the perception – that European regulation was insufficiently transparent, and therefore could not be subject to detailed criticism, played a crucial role, I would argue, in the delegitimation within the UK of the project of European regulation. I mean, it’s not a coincidence that one of the most powerful pro-Brexit arguments, one of the most powerful Leave arguments, was, rhetorically, a claim of illegitimate regulation: illegitimate because non-democratic, and illegitimate and non-democratic because non-transparent. So transparency can play an important role, because then we have other institutions – institutions like advocacy groups, institutions like the press – that can engage in criticism of what are perceived as bad regulatory outcomes. So to me, in the absence of a magic-bullet solution, I am interested in finding ways to use existing mechanisms of legitimation, what I would call democratic legitimation in the absence of mass voting, to improve participation and to improve access. Not that these are perfect solutions at all, they’re very far from perfect, but they’re definitely starts in that direction, and they’re identifiable and concrete. You can point to them and say: this regulatory process is good because people know what’s happening, they know the reasons, and it can be criticized and discussed; this one is bad because they don’t.

Allan Dafoe  54:26

Thanks. Sophie, over to you.

Sophie Fisher  54:37

Okay, can you hear me now? Perfect. Okay, so we’re already in the middle of this really interesting discussion. I just want to take a couple of steps back and talk about how we actually got to the point where we’re now talking about the Facebook oversight board, before offering some reflections on the limitations, but then also the strengths, of this approach, and maybe some of the lessons we can learn for other regulatory models, maybe for the case of AI. Now, we all know that Facebook has for a long time made important decisions about what kind of content it removes or leaves up on its platform, decisions that affect its 2.7 billion users around the world. And we’ve just heard from Noah that within Facebook there has also been a lot of thinking about how to make this process of content moderation more participatory. But I think what has really changed outside of Facebook over the last couple of years are the new challenges brought about by, for example, the interference in the 2016 US elections, in which Facebook played a prominent role, or even the persecution of targeted populations, most notably the Rohingya minority in Myanmar. And I think these cases have really shown that the stakes inherent in handling the kind of content we see on a platform like Facebook have changed. These incidents have not only emphasized the difficulty of balancing freedom of expression against removing harmful content from the platform in different national and cultural contexts. But, and I think this is important to stress again, they also created tangible economic costs for Facebook, due to a noticeable loss of consumer trust, which threatened Facebook’s business model and future growth.

So I think these developments have really emphasized again the need for new, participatory measures to evaluate content in a fair and transparent manner, in order to maintain the trust of Facebook users in the long term. And what we’re looking at now is the Facebook oversight board, which is certainly one of the most ambitious private governance experiments to date: a transnational platform’s mechanism to govern something that is vital to the public and an essential human right, speech. Now, the board hasn’t even started operations yet, and we’re still at a very early stage, but different facets of its design have already been criticized widely, for example by journalists, but also by nonprofit organizations. I very briefly want to get into one of the most fundamental criticisms, and that is the, at present, very limited mandate of the board. The limited mandate implies that the board most probably won’t be in a position to really solve some of the most critical issues related to the content on these platforms that does the most harm. For example, it probably won’t tackle the selection and amplification of certain content made visible to users by Facebook’s algorithm, including disinformation; it won’t necessarily minimize coordinated attacks on democracies around the world; and although there is an expedited procedure to bring issues more quickly to the attention of the board, the board won’t be able to offer a quick reaction to, and prevent the spread of, harmful content such as the live streaming of the Christchurch shooting a while ago. Now, some of these limitations are probably inherent in the function of a court-like body such as the board, which exerts influence by making clear how the law applies to cases. But the problem is that many of the most contentious incidents of the past few years, and I’ve named a couple before, the very incidents that have shown that the stakes in handling this kind of content have changed, won’t be tackled by an organization that was at least partially established in response to them, in order to regain user trust and, again, to safeguard Facebook’s future growth. So I would argue that there’s a risk that the board distracts regulators from addressing some of the fundamental and most harmful activities on the platform and by the company, which will remain. That said, I would also argue that when we judge the board based on its mandate and its court-like function, from what we know about it today, its design is very thoughtful and also very promising. Not only is it a clear improvement on the existing system that we currently have in place, but I would also argue that we can learn different lessons from the way the board was set up, especially with regard to one of the key challenges of industry self-governance: how to structure a private governance mechanism and establish legitimacy already in the process of setting up the institution, given that it originates with the organization it is supposed to check. And by legitimacy I mean here how to ensure meaningful transparency, impartiality and accountability.

And I briefly want to reflect on five of the lessons that I think we have learned from the process of how the board was established. The first is probably a very banal one, and that is power sharing. First of all, we need to reach a situation, when we look at these tech firms, where they’re actually willing to share power. And I think Facebook is a really extreme case here, because due to its dual-class stock structure, the exclusive power over contentious decisions rested for a very long time with the CEO, Mark Zuckerberg, and now the board has the power to actually overrule Zuckerberg on contentious decisions and also on previous decisions made by content moderators. The second aspect is public outreach. What I found very fascinating about the way in which this board was set up is that there was actually a month-long consultation process all around the world, with users and stakeholders in different countries, and that this feedback was actually published afterwards, and you can see that it has flowed into the design of the board. So developing a public process that incorporates listening to outside users and stakeholders, and showing as a company that you take this feedback seriously, is a really important thing to keep in mind. The third aspect is diversity. Facebook’s community standards have looked very American for a long time, and I think they’ve shifted more towards a European approach, but input from the global south has largely been absent. And while the composition of the board as it looks now is definitely not perfect, I think it reflects much better the diversity of the user base in the broadest sense, representing different cultural backgrounds, professional experiences, languages, and so on. The fourth aspect is independent judgment, a really fundamental one. If a private governance initiative is to be perceived as legitimate, it is of course important that the people working on these kinds of boards or outside organizations should not be working for the company. And of course there’s a chicken-and-egg problem that Facebook has also faced: how to select the first members of this kind of institution, who will then select other members. But I think the solution of using a non-charitable purpose trust to pay the members, and setting up a limited liability company to run the operations of the board, is actually quite an elegant one that we can learn from. And the last aspect is transparency. I think here, too, Facebook did quite a good job of making all the steps and key decisions taken on the design of the board transparent. And there are plans to make the decisions of the board transparent, including how they’re being implemented, and also to have the policy recommendations issued by the board explained to the public: why and how they’re being implemented, or, if not, why they’re not being implemented. And I think being transparent all along the way also really increases the cost for Facebook of just dropping the board or threatening its independence.

So these are basically the five lessons that I think we can really learn from this process. And to conclude: the oversight board, as it stands, is certainly no silver bullet to reform Facebook, and it shouldn’t distract regulators from tackling some of the remaining, and probably most harmful, activities that are happening on the platform and that are, to a certain extent, also promoted by it. However, within the scope of what an outside body with a limited mandate like the board can do, it is certainly a really important step towards more transparency, and also towards empowering users by providing them with a potential lever for accountability and a mechanism for due process. I also want to stress at the end that I think it is way too early to really say how meaningful and effective the board will eventually be, and whether its operations will be independent, before it has even started operating. And there are many other important unknowns outside the realm of the board and Facebook, including how exactly foreign or national governments will react to the board, how national courts will react to it, and how other platforms will perceive it. So for now, to close, we can just impatiently wait for the board to finally start its work and see how things unfold. Thank you very much.

Allan Dafoe  1:03:11

Thanks, Sophie. And Noah’s muted. There we go.

Noah Feldman  1:03:14

Let me make just a few responses. And in the process, I think I’ll also try to answer Allan’s question, which I didn’t answer before, about the global versus the regional. I agree with, you know, 95% of what Sophie said. And it’s important to note that experiments need to evolve in the real world, and that evolutionary experimentalism and incrementalism are sometimes the right thing. When you’re trying something radical, a radical experiment, you don’t necessarily want to roll it out giving it all of the power to do everything that it could possibly do, because it might not work well. Instead, a little incrementalism is appropriate. And in fact, every constitutional court in the world has only gradually and incrementally increased its power. You also have to realize that in the process of institutional design, the oversight board faced two opposite criticisms from within Facebook. One was: it will be much too powerful, it’s going to take over the core decision making that goes to our business function and shut us down, we can’t have this. The other was: this will be a big waste of time and money, it will be purely symbolic, it will have no impact, it won’t help us at all, it’s a waste of time and money to do it. And my response to both was to say: you’re both completely correct that these are risks, but they can’t both be correct. Either it will turn out to be so powerful that it threatens Facebook’s business model, or it will turn out to be purely symbolic. The history of constitutional courts is a history of gradually expanding powers, sometimes having to pull back after they’ve gotten too much power. But you also couldn’t possibly have convinced the board of directors of a major company, or the management of the company, or, in the case of Mark, the leading shareholder, to do something if you thought it was going to destroy the company. And in fact, that wouldn’t be responsible on his part. So I think we will see about the limited mandate. First of all, that mandate is already described in the documents as intended to expand. Second of all, there are many things the board can do to expand its mandate right out of the box. They can say to Facebook: we don’t like your rules, write new ones in light of these values. And they have the capacity to do that written into their mandate, which is a very, very great power. In the first instance, they’re supposed to decide if Facebook is following its own rules and whether those rules accord with its values; in the second instance, they can say: your rules don’t fit your values, write new rules. So I’m agreeing with Sophie that we’re at the beginning of the experiment, and we’ll see how it goes. And I hope that we remain patient rather than impatient, because it will take time for this experiment to play out. It’s not going to solve all of the problems at Facebook, and it’s not going to solve them all right away.

With respect to the global versus the local: that was a really interesting and important design question, Allan, from the beginning, and it may be relevant in the AI context as well. It was very relevant with respect to content moderation, because reasonable cultures, let’s say, could have different solutions to the question, right? There are real cultural value differences on the platform. You know, what is culturally appropriate to wear to the beach in San Jose is different from what is culturally appropriate to wear on Main Street in Jeddah at prayer time. I like being in both of those places, but there are very different cultural norms for what dress is appropriate. And I mention that because nudity policy is, you know, one of the most basic policies that a social media platform has to cope with. In all of the consultation that Facebook did, I didn’t encounter anybody who said Facebook should have such radical free expression that it’s open to pornography; I didn’t hear anybody say that. But there is such a view out there, one could imagine that view, and there has been a real fight on Instagram about the extent to which sex workers’ accounts should be constrained or limited, with organized sex workers in some places in Northern Europe arguing for a greater range of expression in order to facilitate their businesses. So this is a kind of everyday, day-in-day-out difficult thing to deal with. I think the difficulty of going down the every-culture-on-its-own route is basically a line-drawing one: where do you draw the line? What do you say is the definitive view within a given culture? You know, some women in Saudi Arabia really don’t want to wear the hijab, and some consider the hijab to be liberating and say so. Who’s right? That’s a very difficult social question, which couldn’t be answered without some independent base of [garbled]. As Sophie says, community standards have traditionally been very American in their orientation, and opening that up is risky. I was often asked in Facebook internal deliberations: well, what are the things that you imagine could happen in terms of interest-group politics? And I said, well, if you’re going to break groups down by interests, the single largest group of Facebook users is Muslims.

Noah Feldman  1:07:59

Right, and so, you know, not all Muslims agree on all things; many Muslims disagree on a wide range of things. But imagine that there were agreement among Muslims on some set of issues: would one then want the views held by Muslims to govern the platform? What about the views of Christians? What about the views of others? These are hard and genuine questions. And I think Facebook, in the end, decided that, hard as it is to have standards that fit the whole platform, it would be harder to divide the world up in a kind of quasi-map-making way, to create different Facebooks for different contexts and places. And I think that’s where they came down on that, coupled with Facebook’s ongoing vision of wanting to be a global community. And we could have a fascinating conversation about what a global community is. Can there be a global community with two and a half billion people? What does the word community even mean in that context? But that is also part of the aspirational picture. So, you know, there is much more to be said about all these topics, but I think our time is coming to its end, if I’m not mistaken. So I just want to thank all of you for great questions and comments. And if we have more time, I’m happy to keep talking; I’m leaving that up to you.

Allan Dafoe  1:09:06

Great. Well, I’m torn, because formally we said it would end in one minute, but of course I would love to keep talking. Why don’t we see if there are any burning last thoughts from our discussants? And maybe I’ll say something, and then, Noah, you can reflect again, and then we’ll close. Gillian, Sophie, do you have anything last you want to share?

Gillian Hadfield  1:09:24

So I think this question about the global and the local is really quite critical. And I think that’s how I’d frame the challenge: how do you have a global platform that nonetheless allows smaller subgroups to have different values, and to have, as somebody in the chat has picked up, competition between those different subgroups? The challenge of harmonizing standards globally is one we’ve been struggling with in many, many domains for decades, and I don’t think it’s reasonable to think we’ll get there. Allan, I’ve had a lot of conversations along these lines over time, so I think the real challenge is: how can you have a global community where people nonetheless feel that there are smaller communities to which they belong and in which they feel reflected and respected?

Sophie Fisher  1:10:17

I agree. And I think it’s also going to be very interesting to see what the support staff of the board will be able to contribute in terms of acquiring the local knowledge that may be necessary to really get into the culture of these individual cases. It’s not only about the diversity of the board members as such, but really also about the support staff and what they can contribute.

Allan Dafoe  1:10:40

Maybe I’ll just add to this. I find this decision fascinating politically, and I can completely believe that global is just the most viable solution, because, as you say, are you going to make them national, are you going to start drawing and defining the sort of cultural, social networks? Me, I’m imagining maybe there’s some clever social-network clustering algorithm that could allow subgroups to self-identify and self-select. And maybe this actually gets to a broader governance question about Facebook, which is the ability of users to define the mechanisms of their interaction; you know, maybe different users would like different weightings of what kind of media they’re provided with: news versus family updates versus political inputs. Maybe I’ll say one last thing, which is that I think your argument is right; it makes sense that in many ways you want to start with the lowest-hanging fruit. If we think this kind of governance initiative is promising, you want to start with something that ideally will succeed, that ideally is good for Facebook, good for Facebook shareholders, good for users, and good for the public, and then you can grow from there. I can imagine that speech moderation is in many ways the easiest of the governance issues facing a company like Facebook, because there are not as many trade-offs between Facebook’s profit and the decisions that are being made, compared with other decisions, like how to personalize advertising, or just anything around advertising, or perhaps, say, the addictiveness of the device, you know, to what extent you use various notification techniques or other techniques to keep people engaged. So maybe a worry is that it’s going to be much more difficult to have these sorts of solutions for domains where there is more of a trade-off between the profit motive and the legitimate decision. I’ll conclude there. So over to Noah if you have last thoughts.

Noah Feldman  1:12:40

Just briefly, again thanking everybody for great comments. I think it’s worth noting that the problems we’re talking about are the problems of human societies. They’re problems that we face at the local level, at the sub-state level, and they’re problems we face at the global level. One interesting thing about the social media platforms is that they’re not state problems, because this is a private corporation, not a state; Facebook doesn’t have an army, it can be shut down by states, and it is weaker in many ways than most states. But at the same time, they’re also super-state problems, because they’re about crossing borders and users globally. And these are problems that in international affairs, international relations, and international law we also haven’t solved. You know, we have the Universal Declaration of Human Rights, whose rights are defined at such a high level of generality that lots of countries can adopt them, but many of those countries don’t follow those principles, because that generality was the only way you could get the consensus. So you have both sub-state-level problems and super-state problems. And I think that carries through to the AI context as well, insofar as AI is deployed by platforms that have this kind of reach, and insofar as it is, to a certain degree, shaped and controlled at the highest end by corporations that are multinational and present in many different contexts. And I guess I would end with a plea to people who are listening in: remember that, in order for us to make good decisions about governance, whether in AI or other tech contexts, we need to be deeply aware of the body of social conflict, and the body of thought and debate, that exists around the deepest governance problems that we face as human beings. In the end, when Aristotle said that humans were political animals, he didn’t just mean that we do politics; he meant that we live in a polis and that we make a politeia, which is a constitution. Humans have the capacity, uniquely, not just to live socially, lots of animals are social, but to have a consciously thought-through set of publicly articulated values and norms by which we try to live together. That, to me, is the challenge of governance. And I’m all for doing that across the disciplines; the less we hive ourselves off, the better we’ll do. We also have to have the modesty of knowing that, unlike some problems in science, and unlike some problems in AI, which may actually be soluble by better work and faster processors and more sophisticated algorithmic design, some of the problems that we’re talking about here don’t admit of definitive solutions. If they did, we would have converged on one system of government sometime in the last 3,000 or, say, 10,000 years since we started making constitutions. But we haven’t converged, because there are a range of different possibilities, a range of different viewpoints, again, about which reasonable people can disagree.
So some degree of epistemological modesty, I mean, it’s always good in life to have epistemological modesty, but I’m not the one to tell anybody who works in the scientific domain to be epistemologically modest. What I can say is that in the domain of governance, that kind of modesty is very much called for. And for people like me, and like you, who want to contribute to doing better governance, it behooves us to be modest, and incremental, and cautious, and experimental. So thanks to all of you for a great conversation, and thanks to those who listened in for listening in.

Allan Dafoe  1:16:29

Fantastic, what a great conclusion. So yes, thank you again to our wonderful discussants and to Noah for this great conversation.

Gillian Hadfield  1:16:41

All right. Thanks, everybody. Bye bye.


