
How Many Steps Forward? – European Law Blog


Blogpost 30/2024

The history of EU institutions is marked by a long list of statements and political initiatives that endorse the legal claims of the LGBTIQA+ community (see, for instance, Kollman and Bell). Over the past decades, these have gradually been mainstreamed within different areas of EU law. In particular, the current EU legislative term (2019-2024) has witnessed an increased commitment of EU institutions towards the LGBTIQA+ community. This is not only shown by the numerous and recurrent Resolutions of the European Parliament on this topic (see EPRS). It is also evident from several political and legislative initiatives introduced over recent years, which (attempt to) intervene in diverse fields of EU law considered relevant to individuals who identify as LGBTIQA+.

Meanwhile, most EU law scholars focus their research on narrow areas, such as non-discrimination (mainly in the field of employment) and free movement (of same-sex couples and their children). In other words, LGBTIQA+ issues rarely appear as the starting point of the analysis but rather as an incidental reference in the context of other research topics (on this point, see Belavusau). This piece aims to provide a broader overview of the EU’s direct commitment towards the LGBTIQA+ community during the EU legislative term that is now coming to an end. It will thus retrace the political, legislative, and judicial developments that have occurred and that have been marked as relevant for, or targeted at, LGBTIQA+ persons. Some contextual challenges of EU law vis-à-vis LGBTIQA+ matters will also be highlighted.

An EU Strategy for LGBTIQA+ Equality

Looking back at the very beginning of this EU legislative term, on 12 November 2020 the European Commission adopted, by way of a Communication, the EU LGBTIQ Equality Strategy (hereinafter, ‘the Strategy’). Unsurprisingly, the adoption of the Strategy came during the EU legislative term in which the first-ever Commissioner for Equality was appointed. Likewise, a specific unit working on ‘non-discrimination and LGBTIQ’ matters has been established in the European Commission. Prior to the publication of the Strategy, some had argued that the EU was equipped with adequate legal bases to intervene in the fields of non-discrimination and equality for LGBTIQA+ persons. These are, for instance, the non-discrimination clause in Article 19 TFEU, or Article 81(3) TFEU as regards aspects of family law with cross-border implications. Yet, the potential of these provisions had been restrained by the absence of an overarching and coherent approach. The Strategy seems to have, at least in principle, addressed this gap.

Despite its non-binding nature, the Strategy has been considered a significant development for LGBTIQA+ persons in the EU for three main reasons. First, the Strategy has a strong symbolic value. It represents the first instrument in the history of EU integration that specifically targets the LGBTIQA+ community. Second, the Strategy provides a comprehensive approach, as it addresses the topic from different angles. Indeed, it is built on four major axes: i) tackling discrimination against LGBTIQ people; ii) ensuring LGBTIQ people’s safety; iii) building LGBTIQ inclusive societies; iv) leading the call for LGBTIQ equality around the world. Third, the Strategy is very detailed. It precisely identifies legislative and non-legislative initiatives to be achieved within a fixed timeline, thus serving as a planning instrument for the Commission’s action.

More recently, a survey conducted by the EU Fundamental Rights Agency shows that while there are signs of slow and gradual progress, discrimination against LGBTIQA+ persons remains dramatically high. This is also evident in ILGA-Europe’s annual rainbow map. As the end date of the Commission’s Strategy approaches and EU elections are coming up, the question remains whether the next European Commission will develop a new instrument for LGBTIQA+ equality or, as will be argued below, at least attempt to fulfil the unmet objectives of the current Strategy.

Recognition of same-sex parents and their children

On 7 December 2022, the European Commission proposed the Equality Package (hereinafter, ‘the Package’), a proposal for a Regulation to harmonise rules concerning parenthood in cross-border situations. One of the key aspects of the proposal is that once parental bonds are established in one Member State, these must be automatically recognised everywhere in the EU (for a deeper analysis of the Package, see Tryfonidou; see also Marcia).

The mutual recognition of same-sex parents and their children had also been addressed, just a year earlier, by the Court of Justice (CJEU) in the Pancharevo case (C-490/20). The dispute concerned a same-sex couple, a Bulgarian and a UK national. Their daughter, S.D.K.A., was born in Spain, where the couple had married and was legally residing. Spain thus issued a birth certificate, as Spanish law recognises same-sex parenthood. Yet, Bulgarian authorities refused to issue a passport or ID for S.D.K.A., since Bulgarian law does not recognise same-sex parenthood. This led to a preliminary question referred to the CJEU, namely whether such a refusal constituted a breach of EU free movement rights (notably, Articles 20 and 21 TFEU and Directive 2004/38). The Court ruled that the refusal to issue a passport or ID to S.D.K.A. would indeed impair the effectiveness of her right to move and reside freely within the Union. National authorities are thus required to recognise parental bonds legally established in another Member State. This obligation, however, applies only for the purposes of the exercise of the right to free movement, while Member States remain free (not) to recognise same-sex parenthood within their internal legal orders (for a full overview of the judgment, see Tryfonidou; see also De Groot).

Despite the obligation stemming from this judgment, in practice same-sex parents often face long and expensive proceedings before national authorities. Indeed, the Commission stated that the key objective of the Equality Package is to reduce the time, costs, and burdens of recognition proceedings for both families and national judicial systems. The proposed Regulation would, in other words, ‘automatise’ the requirements introduced by the Court in Pancharevo (for the purposes of the exercise of the right to free movement). However, one of the biggest challenges to the adoption of the Package is its legal basis: Article 81(3) TFEU. This requires the Council to act unanimously under a special legislative procedure, after obtaining the consent of the European Parliament. If reaching unanimity among the 27 Member States is generally challenging, it becomes even more complex when the file concerns a topic on which Member States’ sensibilities and approaches differ dramatically. Indeed, some national governments, such as the Italian one, have already declared their unwillingness to support the Commission’s initiative (see, for instance, Marcia).

Combatting hate crime and hate speech

Current EU law criminalises hate crime and hate speech only in relation to the grounds of race and ethnic origin. Yet, national laws differ significantly when it comes to such conduct in relation to sex, sexual orientation, age, and disability (see EPRS). To implement the Strategy’s objective of ‘ensuring LGBTIQ people’s safety’, on 9 December 2021 the Commission proposed to include hate crime and hate speech against LGBTIQA+ persons in the list of EU crimes. This initiative requires a two-step procedure. First, Article 83(1) TFEU contains a list of areas of ‘particularly serious crime’ with a ‘cross-border dimension’ that justify common action at EU level. This list can only be updated by a Council decision, taken by unanimity, after receiving the consent of the European Parliament. Second, once hate crime and hate speech have been included in this list, the Commission can follow up with a proposal for a directive to be adopted through the ordinary legislative procedure. This would establish minimum rules concerning the definition of criminal offences and sanctions (for a full analysis of the proposal, see Peršak).

The European Parliament has addressed the problem of hate crime and hate speech against LGBTIQA+ persons on different occasions. In a Resolution of 18 January 2024, for instance, the Parliament welcomed the Commission’s initiative and urged the Member States to make progress on it. The Justice and Home Affairs Council of 3-4 March 2022 had previously discussed the proposal, concluding that ‘a very broad majority was in favour of this initiative’. Yet, the file has never been scheduled for further discussion or vote since then. Significantly, not even the Belgian Presidency of the Council managed to make any progress, despite its declared intention to make LGBTIQA+ equality a priority during the country’s six-month lead of the institution. The Commission’s proposal is therefore far from being accomplished, with unanimity being – once again – the greatest challenge to overcome.

The return to EU values

In December 2022, the European Commission referred Hungary to the Court of Justice in the context of an infringement procedure (C-769/22). The contested legislation, approved by the Hungarian Parliament in June 2021, was depicted as a tool to combat paedophilia. As highlighted by the Commission and several NGOs, however, the law directly targets the LGBTIQA+ community. Indeed, it limits minors’ access to content that ‘promote(s) divergence from self-identity corresponding to sex at birth, sex change or homosexuality’ and bans or limits media content that concerns homosexuality or gender identity. It also introduces a set of penalties for organisations that breach these rules (see Bonelli and Claes).

During the past decade, Viktor Orbán made Hungary very (un)popular for multiple violations of the rule of law and fundamental rights, including attacks on the LGBTIQA+ community. The introduction of – another – infringement procedure against Hungary thus seems business as usual. However, EU law scholars immediately pointed out that this could be a landmark case. For the first time, the Commission has directly relied on Article 2 TEU, proposing a direct link between LGBTIQA+ equality and the ‘founding values’ of the EU. While there is no doubt that this is of high symbolic and political importance, questions have been raised as regards the ‘added legal value’ of Article 2 TEU. In other words, the judicial mobilisation of Article 2 TEU does not seem to bring more legal benefits than an infringement procedure based only on the Charter of Fundamental Rights and other provisions of EU law.

It must be noted that the Commission’s reliance on EU values has encouraged significant political and judicial mobilisation. In an unprecedented move, the European Parliament and fifteen Member States have asked to intervene before the CJEU. This is the first time in the history of EU integration that so many Member States have asked to intervene in support of the Commission’s action against another Member State. For some of them, including France and Germany, this is the first-ever intervention in a case related to fundamental rights protection (see Chopin and Leclerc). However, it should also be underlined that the group of countries participating in the lawsuit has a markedly Western component. This clearly shows the existence (and persistence) of an East-West divide when it comes to the controversial topic of LGBTIQA+ rights protection. Therefore, considering the unanimity requirements mentioned above, even the high participation of Member States in the infringement procedure seems insufficient to advance coherent action at EU level.

Conclusions

EU institutions, in particular the Commission and the Parliament, seem increasingly committed to offering more robust protection to LGBTIQA+ persons. This is shown by the first-ever comprehensive EU Strategy and the related legislative proposals, as well as the numerous calls of the European Parliament. While this is clearly positive for the visibility and legal claims of the LGBTIQA+ community, the legal outcome nevertheless appears limited. All legislative proposals are blocked by the failure to reach unanimity in the Council. Indeed, the only changes in terms of legal obligations seem to stem from the CJEU ruling in Pancharevo (and other minor developments related to anti-discrimination case-law). Even if, in principle, the EU is equipped with adequate legal bases to legislate in the fields of non-discrimination and equality for LGBTIQA+ persons, the feasibility of EU intervention is challenged by the type of legislative procedure provided and the unanimity requirement. Therefore, further research is needed to identify the actual potential of EU competences to address the legal claims advanced by the LGBTIQA+ community.

The pending ‘EU values case’ (C-769/22 Commission v Hungary) shows the existence of highly divergent cultural and political views between the Member States, especially on issues such as LGBTIQA+ equality, which seemingly continues to be controversial. At the end of this week (6-9 June 2024), EU citizens will be called to elect the new Members of the European Parliament (MEPs). As current polls show, far-right parties are likely to gain an increased number of seats. This could, in turn, lead to a more conservative composition of the next European Commission. These dynamics may mark a significant shift in the commitment of these institutions to enhancing LGBTIQA+ rights protection. Indeed, the European Parliament and the European Commission are considered two early [LGBTIQA+] movement allies, as they have supported the claims of this community on numerous occasions before and during this term. The question, therefore, is whether these potential political changes will result in a softening of their commitment. If so, the CJEU may remain the last and only resort for LGBTIQA+ individuals at EU level.





Does the EU’s MiFIR Review make single-name credit default swaps transparent enough?


Blogpost 29/2024

Regulation 2024/791 (“MiFIR Review”) was published in the Official Journal of the European Union on 8 March 2024. This newly adopted legislation makes single-name credit default swaps (CDSs) subject to transparency rules, but only if they reference global systemically important banks (G-SIBs) or an index comprised of such banks.

In this blog post, I discuss the suitability of the MiFIR Review’s revised transparency requirements for single-name CDSs. On the one hand, the new requirements are limited in scope, as any reference entity that is not a G-SIB will not be majorly impacted (see, in more detail, my recent working paper). Indeed, CDSs referencing G-SIBs represent only a small fraction of the market: 8.36% based on the total notional amount traded and 5.68% based on the number of transactions (source: DTCC). It follows that a substantial percentage of the single-name CDS market will not be captured. On the other hand, this post cautions against creating even more far-reaching transparency requirements than those provided for in the MiFIR Review: more transparency could, in practice, be detrimental to financial markets, as it could result in higher trade execution costs and volatility and could even discourage dealers from providing liquidity.

 

Single-name credit default swaps and why they are opaque

CDSs are financial derivative contracts between two counterparties to ‘swap’ or transfer the risk of default of a borrowing reference entity (i.e., a corporation, bank, or sovereign entity). The buyer of the CDS – also called the ‘protection buyer’ – makes a series of payments to the protection seller until the maturity date of the financial instrument, while the seller of the CDS is contractually bound to pay the buyer compensation in the event of, for example, a debt default of the reference entity. Single-name CDSs are mostly traded in the over-the-counter derivatives markets, typically on confidential, decentralized systems. A disadvantage of over-the-counter derivative markets, however, is that they are generally opaque, in contrast with, for example, markets for listed financial instruments.

In over-the-counter derivative markets, there is very limited access to pre-trade information (i.e., information such as bid-ask quotes and order book data before buy or sell orders are executed) and post-trade information (i.e., data such as prices, volumes, and the notional amount after the trade has taken place).

In March 2023, three small-to-mid-size US banks (Silicon Valley Bank, Silvergate Bank, and Signature Bank) ran into financial difficulties, with spillovers to Europe, where Credit Suisse needed to be taken over by UBS. During this financial turmoil, the CDSs of EU banks rose considerably in terms of price and volume. For Deutsche Bank, there were more than 270 CDS transactions for a total of USD 1.1 billion in the week following UBS’s takeover of Credit Suisse. This represented a more than four-fold increase in trade count and a doubling in notional value compared with the average volumes of the first ten weeks of the year. The CDS market is notably illiquid, with only a few transactions a day for a particular reference entity, so this increase in trading volumes was exceptional. On 28 March 2023, the press reported that regulators had identified that a single CDS transaction referencing Deutsche Bank’s debt of roughly 5 million EUR, conducted on 23 March 2023, could have fuelled the dramatic sell-off of equity on 24 March 2023, causing Deutsche Bank’s share price to drop by more than 14 percent.

One of the conclusions drawn by regulators, such as the European Securities and Markets Authority (ESMA), on the 24 March event was that the single-name CDS market is opaque (i.e., very limited pre-trade and post-trade market information), and consequently, subject to a high degree of uncertainty and speculation as to the actual trading activity and its drivers.

The Depository Trust and Clearing Corporation (DTCC) does provide post-trade CDS information, but the level of transparency is not very high, given that only aggregated weekly volumes are provided rather than individual prices. Furthermore, only information for the most active instruments is disclosed rather than for all traded instruments. Regarding pre-trade information, trading is conducted mostly through bilateral communication between dealers, who might directly contact a broker to trade or use a trading platform to enter non-firm quotes anonymously. However, even when screen prices are available, they are only indicative, and most dealers will not stand behind their pre-trade indicated price, because the actual transaction price is entirely subject to bilateral negotiations conducted over the phone or via some electronic exchange. Dealers are free to change the price until the moment the trade is mutually closed. End-users are thus dependent on their dealers and sometimes do not even have direct access to pre-trade information, as they have to rely on third-party vendors and services that aggregate data. End-users do not know before the trade which dealer’s price is the best one, do not know at what prices other parties are willing to buy or sell, and have no comparable real-time prices against which to benchmark their particular trade.

 

New transparency requirements in the MiFIR Review

On 25 November 2021, the European Commission published a proposal to amend Regulation No 600/2014 on markets in financial instruments (MiFIR) as regards enhancing market data transparency, removing obstacles to the emergence of a consolidated tape, optimizing trading obligations, and prohibiting receiving payments for forwarding client orders. This initiative was one of a series of measures to implement the Capital Markets Union (CMU) in Europe to empower investors – in particular, smaller and retail investors – by enabling them to better access market data and by making EU market infrastructures more robust. To foster a true and efficient single market for trading, the Commission was of the view that the transparency and availability of market data had to be improved.

The proposal implemented ESMA’s view that the transparency regime previously in place was too complicated and not always effective in ensuring transparency for market participants. Indeed, the large majority of single-name CDSs are traded over the counter, where the level of pre-trade transparency is low. This is because pre-trade requirements only apply to market operators and investment firms operating trading venues. Even for CDSs traded on a trading venue, a waiver can be obtained, as they do not fall under the trading obligation and are considered illiquid financial instruments. Because of their illiquidity, the large majority of listed single-name CDSs can also benefit from post-trade deferrals, under which information may be disclosed only after four weeks.

Regulation (EU) 2024/791 (“MiFIR Review”) was finally approved on 28 February 2024 and entered into force on 28 March 2024. Article 8a of the MiFIR Review now requires, as a pre-trade transparency requirement, that market operators and investment firms operating a multilateral trading facility or organized trading facility that applies a central limit order book or a periodic auction trading system make public the current bid and offer prices, and the depth of trading interest at those prices, for single-name CDSs that reference a G-SIB and are centrally cleared. A similar requirement now applies to CDSs that reference an index comprising global systemically important banks and that are centrally cleared. Hence, under the new MiFIR Review, CDSs referencing G-SIBs are subject to transparency requirements only when they are centrally cleared. Such CDSs are, however, not subject to any clearing obligation provided for in the European Market Infrastructure Regulation (Regulation No 648/2012, EMIR). This means that data on single-name CDSs referencing G-SIBs that are not cleared, or on CDSs referencing other entities, do not need to be made transparent.

Regarding post-trade transparency, Article 10 of the MiFIR Review requires that market operators and investment firms operating a trading venue make public the price, volume, and time of transactions executed in respect of bonds, structured finance products, and emission allowances traded on a trading venue. For transactions executed in respect of exchange-traded derivatives and the over-the-counter derivatives referred to in the pre-trade transparency requirements (see above), the information has to be made available as close to real-time as technically possible. The EU co-legislators are further of the view that the duration of deferrals has to be determined through regulatory technical standards, based on the size of the transaction and the liquidity of the class of derivatives. Article 11 of the MiFIR Review states that the arrangements for deferred publication will have to be organized in five categories of transactions related to a class of exchange-traded derivatives or of over-the-counter derivatives referred to in the pre-trade transparency requirements. ESMA will thus need to determine which classes are considered liquid or illiquid, and above which transaction size and for which duration it should be possible to defer the publication of the details of a transaction.

Besides the pre- and post-trade transparency requirements for market operators and investment firms operating a trading venue, the MiFIR Review also focuses on the design and implementation of a consolidated tape. This consolidated tape is a centralized database meant to provide a comprehensive overview of market data, namely of prices and volumes of securities traded throughout the Union across a multitude of trading venues. According to Article 22a, trade repositories and Approved Publication Arrangements (APAs) will need to provide data to the consolidated tape provider (CTP). The MiFIR Review is also more specific on the information that has to be made public by an APA concerning over-the-counter derivatives, which will flow into the consolidated tape. Where Articles 8, 10 and 11 of MiFIR previously referred to ‘derivatives traded on a trading venue’, the MiFIR Review no longer uses this wording with respect to derivatives and instead refers to ‘OTC derivatives as referred to in Article 8a’, being those subject to the pre-trade transparency requirements. This again covers single-name CDSs that reference a G-SIB and are centrally cleared, and CDSs that reference an index comprising G-SIBs and are centrally cleared. As with the pre-trade and post-trade transparency requirements, data on single-name CDSs referencing G-SIBs that are not cleared, or on CDSs referencing other reference entities, do not need to be made transparent.

 

Do we want even more transparency?

The MiFIR Review’s revised transparency requirements for single-name CDSs are not very far-reaching, given that CDSs referencing entities that are not G-SIBs are not majorly impacted. Since CDSs referencing G-SIBs represent only a small fraction of the market (see the introduction above), a substantial percentage of CDSs is not captured by the MiFIR Review. In addition, single-name CDSs referencing G-SIBs that are not centrally cleared are also not affected. As there is no clearing obligation on CDSs because they are not sufficiently liquid, a large fraction will not be impacted or can continue to benefit from pre-trade transparency waivers or post-trade deferrals. A large fraction of the entire CDS market will thus not be affected by the MiFIR Review.

Nevertheless, I argue that even more severe transparency requirements than those foreseen by the MiFIR Review might not necessarily be beneficial for financial markets. Too much transparency can be detrimental to financial markets, as it might result in higher trade execution costs and volatility and could even discourage dealers from providing liquidity. In a market in which there are few buyers and sellers ready and willing to trade continuously, demanding more transparency could lead to even less liquidity, as the limited number of liquidity providers would be obliged to reveal their trading strategies, giving them incentives to trade even less. A total lack of transparency is thus undesirable from a market-manipulation and investor-protection point of view, but full transparency in an illiquid CDS market might dissuade traders even further from trading. The EU’s newly adopted MiFIR Review thus seems to strike an appropriate balance between reducing the level of opaqueness and not harming liquidity.





Treaty Reform in the Scales of History


Blogpost 28/2024

The European Parliament’s recent proposal to remove the unanimity requirement from Article 19 TFEU (non-discrimination legislation) echoes a centuries-old US debate on voting and minority rights. James Madison, the ‘father’ of the US Constitution, defended majority voting as a necessary condition for impartial law-making and minority protection in multi-state unions. Conversely, John C. Calhoun, the then US Vice President and a key advocate of slavery, sought to maintain the racial status quo by advocating for a unanimity-based structure.

The purpose of this blog is twofold. First, it utilises US constitutional history to show how unanimity voting can function as a tool to perpetuate an unjust status quo to the detriment of minority rights. In this regard, partial similarities are drawn between the current Article 19 TFEU and Calhoun’s voting model. Second, it contrasts the pragmatic nature of the travaux préparatoires of Article 19 TFEU with the principled approach of the US debate. This juxtaposition underscores the importance of anchoring the proposed treaty changes in the foundational principles of Western constitutionalism. Specifically, it highlights the nemo judex rule – ‘not being the judge in one’s own cause’ – a principle that shaped the US constitutional debate but was surprisingly absent in the drafting history of Article 19 TFEU. The blog shows why this particular principle should be considered in the debate on the European Parliament’s proposal to amend Article 19 TFEU. Addressing the foundational role of this principle in Western constitutional theory provides additional support for removing the unanimity requirement from Article 19 TFEU. This change could help ensure better protection of minority rights in line with the values enshrined in Article 2 TEU. It is important to clarify that the term ‘minorities’ here refers to underprivileged segments of society based on racial or ethnic origin, religion, or belief, among the main grounds protected under Article 19 TFEU – aligning with the EU Commission’s definition.

 

 

The Madison (majority) vs. Calhoun (unanimity) debate

In making the case for a union over unitary states, Madison argued that in unitary states, the majority’s control of legislative bodies enables them to effectively ‘be the judge of their own case’ and legislate in a manner that serves their interests, often to the detriment of minorities. The best means to counteract majoritarian biases is for states to integrate within a larger union, where diverse majorities can balance each other, compelling agreement on common principles that are more likely to lean towards egalitarianism. Madison’s argument from the nemo judex rule is complex and rests on certain assumptions, but the chart below visualizes the essence of his argument.

Assume there are five similarly populated states, each dominated by a racial majority with other dispersed racial minorities. The states then come together into an integrative union. For simple arithmetical reasons, strong state majorities get diluted at the union level (for instance, group A in the chart shifts from 90% domestically to 18% at the union level). With majority-based voting, no single group can dominate independently. Rather, groups must compromise to address common interests. This mutual check on state majorities can provide some protection for minorities by ensuring that no single domestic group unilaterally decides matters for the whole union.
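The dilution arithmetic above can be sketched in a few lines (a minimal illustration assuming, as the text does, five equally populated states; the `union_share` helper is introduced here purely for illustration, not drawn from any source):

```python
# Illustrative sketch of Madison's dilution arithmetic, assuming five
# equally populated states (as in the text); `union_share` is a
# hypothetical helper introduced for this example only.

def union_share(domestic_share: float, n_states: int) -> float:
    """Share of the union population held by a group that forms
    `domestic_share` of one of `n_states` equally populated states."""
    return domestic_share / n_states

# Group A dominates its home state with 90%, yet holds only 18% of a
# five-state union, so no single domestic majority can carry a
# union-level majority vote on its own.
print(round(union_share(0.90, 5), 2))  # 0.18
```

Under these admittedly stylised assumptions, even the strongest domestic majority falls well short of 50% at the union level, which is the arithmetical core of Madison’s case for majority voting.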

This Madisonian argument has been tested in many cases, as shown by Halberstam, among others. Minority rights in America improved significantly in the so-called ‘Civil Rights Era’, when these rights were decided at the federal level rather than left to the majorities of states. Other examples in the US include various fiscal and economic legislation, where voting at the union level broke the abusive control of local majorities and provided more balanced outcomes.

The most (in)famous challenge to Madison’s argument came from Calhoun, the twice US Vice President and the American South’s ‘evil genius’. Calhoun was known for shifting the slavery debate from slavery being a ‘necessary evil’ to being a ‘morally good’ practice, and his theory on voting is closely related to his position on slavery. While accepting the advantages of a multi-state union, he feared that majority voting would lead to the emancipation of slaves and disturb the ‘racial hierarchy’. He thus offered a competing voting mechanism rooted in unanimity, or what he termed ‘concurring majorities’. To challenge Madison’s reasoning, he employed two arguments which may resonate with EU lawyers: the indivisibility of sovereignty and its concomitant ‘no demos’ thesis. Calhoun noted that sovereignty is ‘an entire thing;—to divide, is,—to destroy it’. To him, this indivisible sovereignty lies with ‘the people of several states’ because there is ‘no other people’ at the union level. Therefore, his concurring majority model means that majority voting is only acceptable within states (because the people there are sovereign) but not at the union level (where there is no demos and no sovereignty), and thus the union must function on the basis of unanimity.

Space precludes a full discussion of Madison’s reply to Calhoun (which is discussed elsewhere). It is worth noting here that unanimity voting undermines the ‘nemo judex’ rule by allowing one state majority to judge its own case and block legislation favourable to minorities across the entire union. In this sense, it amounts to the tyranny of the few. The Madison-Calhoun debate over majority versus unanimity voting was ultimately resolved in Madison’s favour in two ways. First, the outcome of the civil war relegated Calhoun, with ‘odium’, to the dustbin of history.

Second, many comparative case studies attest to the effectiveness of Madison’s argument that majority voting in a multi-state union tends, subject to some conditions, to produce more egalitarian outcomes. Extensive literature covers this issue, citing examples such as the improvement of minority rights in the US when regulation shifted to the federal level, as previously discussed. In the EU, some highlight how the regulation of sex equality in the workplace became more egalitarian through membership of the European Community than it would have been had the matter been left to domestic law. Other examples abound, as discussed by Halberstam among others.

With this comparative and historical background in mind, we can now explore how this debate bears on Article 19 TFEU and the proposed treaty revision.

 

Calhoun vs Article 19 TFEU’s Present

While the issue of slavery has receded into the annals of history, the rationale behind Calhoun’s unanimity theory has found echoes, albeit inadvertently, in the EU’s Article 19. That article mandates unanimity among Member States in the Council to ‘combat discrimination based on sex, racial or ethnic origin’, among other grounds. It must be noted that the similarity between the EU’s approach and Calhoun’s is only partial, given the divergent socio-political circumstances under which he laboured compared to today.

Nonetheless, this partiality does not exclude some similarity in essence and consequence. In essence, his mechanism aimed to ensure that the union would act only through consensus; this is comparable to Article 19 TFEU’s requirement of consensus to ‘combat discrimination’. In terms of consequence, the similarity lies in perpetuating the status quo. At the heart of Calhoun’s theory is the desire to insulate the status quo from change as much as possible. Yet the status quo, as Sunstein notes, is often ‘neither neutral nor just’. To insulate the status quo from change is to perpetuate the injustices befalling many underrepresented parts of society. Article 19 TFEU insulates the status quo of EU minorities and its concomitant injustice. And while Calhoun’s model was never applied, Article 19 TFEU has been.

Since its adoption, legislative reliance on Article 19 TFEU has been exceedingly rare. The only two measures enacted under the article date back to 2000 and were induced by the Haider Affair in an ‘unusual twist of political fate’. Nonetheless, after more than two decades, the consequence of Article 19 TFEU, as many have noted, has been to render the EU ‘minority agnostic’ and to limit its contribution to ‘all but the most anodyne of actions’, leaving minorities at the mercy of the ‘tyranny of veto’.

An example of the impact of unanimity in perpetuating inaction is highlighted in the recent report of the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs. It laments the sixteen-year failure to pass the EU Horizontal Directive on equal treatment across different grounds in respect of goods and services, which has remained unadopted since the 2008 Commission proposal due to a ‘blockage’ at Council level. The Council’s approach stands in stark contrast to that of the Parliament, which, unshackled by unanimity, approved the proposal as early as 2009.

The impact of unanimity is also shown by comparing Article 19 TFEU to areas or institutions where unanimity is not required. Most obviously, sex equality, generally unshackled by unanimity, remains the most protected ground: nine directives have been successfully enacted and transposed.

While space precludes a full analysis of the substance of EU non-discrimination law beyond gender, it suffices to say that unanimity has been criticised for slowing the development of this area of law to the detriment of racial, ethnic and religious minorities. For instance, the Commission blamed Article 19 TFEU’s unanimity requirement for leading to ‘an inconsistent legal framework and an incoherent impact of Union law on people’s lives’. Moreover, de Búrca remarked that the Race Equality Directive is ‘more genuine framework in nature, in so far as it contains a general prescription … to which States must commit themselves, but without prescribing in detail how this is to be achieved’. Relatedly, the existing directives, as Bell argues, rely almost exclusively on ‘passive’ protection through ‘complaints-based’ enforcement, which is particularly insufficient to rectify the historical inequalities of racism. By the Commission’s own reckoning, the existing legislative framework ‘is not enough to resolve the deep-rooted social exclusion’. Many have pointed to the failure to prevent the ill-treatment of Roma minorities in several Member States. Kornezov has shown that the dangers of unanimity for minority rights extend even beyond inaction, as it can make things worse for minorities domestically by disincentivising states from granting any special advantages to their local minorities. He remarked that ‘virtually any right reserved for a special group of citizens of a particular Member State who belong to a minority must be opened up to any EU citizen from other Member States’. Thus, others have lamented the lack of an EU legislative response to fix these hurdles, as well as matters such as affirmative action and other proactive measures needed to combat discrimination.

Another example relates to how the inability to pass further legislative measures hinders jurisprudential development. Given the failure to pass the 2008 horizontal directive, as EU law currently stands it would be ‘lawful’ to deny services to someone manifesting a religious symbol, be it a Sikh turban, a Jewish yarmulke, or a Muslim headscarf. The Court cannot simply extend the protection to those minorities. As Advocate General Mazák noted, ‘Article 19 TFEU is simply an empowering provision’ and as such ‘it cannot have direct effect’. He cautioned that any judicial activism in this area ‘[n]ot only would … raise serious concerns in relation to legal certainty, it would also call into question the distribution of competence between the Community and the Member States, and the attribution of powers under the Treaty in general’. The circularity, a ‘constitutional catch-22’, is obvious here: unanimity cannot be interpreted away, and the Council with its current 27 Member States cannot easily agree to expand legislation beyond the existing measures.

Overall, the negative impact of the unanimity requirement of Article 19 TFEU is too well documented, in the Commission’s communications as well as in scholarly work, to warrant further summary here. This dissatisfaction lies at the core of the proposed amendment of Article 19, to which we now turn.

 

Travaux préparatoires and Article 19 TFEU’s Future

Following the Conference on the Future of Europe, which gathered input from European citizens and resulted in forty-nine proposals, the European Parliament tasked the Committee on Constitutional Affairs (AFCO) with finalising a report on the draft proposed amendments. In November 2023, the Parliament voted in favour of a wide range of amendments and called for a convention to revise the treaty.

The vote included approving a draft proposal to amend Article 19 TFEU by introducing majority voting instead of unanimity, as well as expanding ‘non-discrimination protections to gender, social origin, language, political opinion and membership of a national minority’. While this is a commendable step, the absence of reasoning from first principles in the accompanying Parliamentary reports echoes a worrying pattern from the travaux préparatoires of Article 19 TFEU (ex Article 13 TEC). The drafting history of the article channelled Calhoun (unanimity as a concomitant of indivisible sovereignty) but not Madison and his use of European sources invoking the nemo judex rule.

Archives show that the original draft of Article 19 (ex Article 13 TEC) in the Amsterdam Treaty provided for qualified majority voting, but pressure from a few Member States, led by the UK, weakened the article by requiring unanimity for its use. The UK Parliament’s archives demonstrate that the British view, behind which concurring Member States hid, held, much like Calhoun, that ‘the defence of sovereignty is bound up with the concept of veto’.

While certain parallels can be drawn between Calhoun’s argument from sovereignty and the position of the UK-led faction, it is essential to underscore an important distinction between the position of the Member States endorsing majority voting and that of Madison. Whereas Madison made clear recourse to first constitutional principles, representatives of European states supporting majority voting relied only on pragmatic arguments, which were described as lacking a clear ‘direction’. Commentators noted that the Irish Presidency ‘failed to push the negotiations along’ and to articulate compelling criteria for determining which matters should be subject to qualified majority voting.

What is surprising is that Madison directly engaged with sources of European constitutional theory through the nemo judex rule, yet that very principle was overlooked when the decision-making procedure was allocated during the article’s negotiation. This oversight is striking considering that the principle was a leitmotiv of many foundational texts of European constitutional theory (e.g. in Locke and Hobbes). More recently, the maxim has been invoked before the CJEU and underpins the right to an impartial tribunal enshrined in Article 47 of the Charter. The absence of foundational principles allowed the unanimity side to prevail on pragmatic grounds, without fostering the constitutionally enriching debate witnessed in the US.

The nemo judex argument and its history show that unanimity carries a particularly disproportionate cost for racial, religious and ethnic minorities. Opting for unanimity for non-discrimination legislation speaks volumes about the priority accorded to this domain. It demonstrates either complete disregard for foundational constitutional theory or intentional disregard of minorities. Whilst Article 2 TEU elevates minority rights to an EU value, the choice of unanimity relegates their protection to the lowest level.

Advocates of reform should not be discouraged by opponents wielding the sovereignty argument to defend unanimity. That argument might have been convincing had Article 19 not been directly preceded by Article 18 (discrimination on grounds of nationality), which requires majority voting, as does Article 157 TFEU (equal opportunities for men and women). Moreover, recourse to majority voting does not threaten states, and there are safeguards for states, which I detail here. Even in sovereignty-guarding states like the UK, since the Factortame II judgment, courts have reconciled EU powers with sovereignty on the premise that Member States have voluntarily transferred some powers to the EU and that sovereignty is preserved through retaining the ultimate power to exit. Additionally, as Triantafyllou noted, despite the EU’s claim to being a ‘new legal order’, it lags behind many international organisations, which now use majority voting rather than unanimity even to amend their own charters.

To be clear, while the nemo judex rule is crucial for minority rights, it does not necessitate the removal of unanimity in areas such as the Common Foreign and Security Policy (CFSP). That area, for instance, does not necessarily involve a direct conflict between racial majorities and minorities to which the nemo judex in causa sua rule applies. Determining which voting procedure suits that area may therefore require balancing various competing factors, a task extending beyond the scope of the current blog and explained elsewhere.

To conclude, this blog uses insights from comparative constitutional history to show how unanimity can function as a tool to perpetuate an unjust status quo to the detriment of minority rights. The analysis supports the European Parliament’s proposal to move Article 19 to majority voting, akin to Articles 18 and 157 TFEU. This would allow the EU to strengthen its much-needed role in this area and to avoid the pitfalls that befell Calhoun’s racially motivated model. It would also enable the EU to uphold the values outlined in Article 2 TEU, which explicitly include minority rights, and to respect the centuries-long history of the nemo judex in causa sua principle in Western constitutional theory. Overall, understanding the interlinkages between the constitutional principle of nemo judex and the unanimity-versus-majority debate is of timely relevance to larger debates within the EU.

Admittedly, treaty amendment is complex and difficult to secure, but history may counsel against despair. The introduction of the EU’s competence over non-discrimination beyond gender was itself only made possible after relentless activism, contributions from the Kahn Commission Report, and the political efforts of the European Parliament. Now, revising the treaty seems to be ‘gradually gaining ground’, possibly in anticipation of the EU’s further enlargement. If the Parliament’s call for a convention materialises, heeding lessons from comparative history and reasoning from the first principles of western constitutionalism can provide intellectual ammunition to reform endeavours against Calhoun-like thinking.




22May

Bruce Schneier and Gillian Hadfield on Securing a World of Physically Capable Computers


Computer security is no longer about data; it’s about life and property. This change makes an enormous difference and will inevitably disrupt technology industries. First, data authentication and integrity will become more important than confidentiality. Second, our largely regulation-free Internet will become a thing of the past. Soon we will no longer have a choice between government regulation and no government regulation; our choice will be between smart government regulation and stupid government regulation.

Given this future, Bruce Schneier makes a case for why it is vital that we look back at what we’ve learned from past attempts to secure these systems, and forward at what technologies, laws, regulations, economic incentives, and social norms we need to secure them in the future. Bruce will also discuss how AI could be used to benefit cybersecurity, and how government regulation in the cybersecurity realm could suggest ways forward for government regulation for AI.




22May

David Autor, Katya Klinova & Ioana Marinescu on the Work of the Future: Building Better Jobs in an Age of Intelligent Machines


David Autor is Ford Professor of Economics and associate department head of the Massachusetts Institute of Technology Department of Economics. He is also Faculty Research Associate of the National Bureau of Economic Research, Research Affiliate of the Abdul Latif Jameel Poverty Action Lab, Co-director of the MIT School Effectiveness and Inequality Initiative, Director of the NBER Disability Research Center and former editor-in-chief of the Journal of Economic Perspectives. He is an elected officer of the American Economic Association and the Society of Labor Economists and a fellow of the Econometric Society.

Katya Klinova directs the strategy and execution of the AI, Labor, and the Economy Research Programs at the Partnership on AI, focusing on studying the mechanisms for steering AI progress towards greater equality of opportunity and improving the working conditions along the AI supply chain. In this role, she oversees multiple programs including the AI and Shared Prosperity Initiative.

Ioana Marinescu is assistant professor at the University of Pennsylvania School of Social Policy & Practice, and a Faculty Research Fellow at the National Bureau of Economic Research. She studies the labor market to craft policies that can enhance employment, productivity, and economic security. Her research expertise includes wage determination and monopsony power, antitrust law for the labor market, the universal basic income, unemployment insurance, the minimum wage, and employment contracts.

You can watch a recording of the event here or read the transcript below. Slides:  David Autor –  Katya Klinova

Anton Korinek:

Welcome to all the human and artificial intelligences around the globe, who have joined us for today’s webinar on the governance and economics of AI. I’m Anton Korinek. I’m an economist at the University of Virginia and a Research Affiliate at the Centre for the Governance of AI, which is part of the Oxford Future of Humanity Institute and is hosting the event. Let me thank the Centre and in particular Markus Anderljung and Anne le Roux, for their support.

Our presenter today is David Autor, Ford Professor of Economics at MIT. David has earned so many honors and awards that I could not list them if I used the entire webinar, so let me just say that he is the world’s top authority when it comes to analyzing the effects of automation on the labor market. As discussants, we have Katya Klinova from the Partnership on AI and Ioana Marinescu from the School of Social Policy at Penn who will share their comments on David’s presentation. I will tell you a little more about them when they take the stage to give us their comments.

The topic of our webinar is “The Work of the Future: Building Better Jobs in an Age of Intelligent Machines” – and the title in fact reflects the title of a report that David released late last year with the MIT Task Force on the Work of the Future, which he is co-chairing. The task force was formed by the President of MIT three years ago to analyze the relationship between emerging technologies and work, to help shape the public discourse around realistic expectations of technology, and to explore strategies to enable a future of shared prosperity. I will leave it to David to tell you more about the task force and the report.

But before handing over the mic to David, let me perhaps emphasize a point that we are particularly interested in at the Future of Humanity Institute and that I hope you can speak to, David:

Your report reflects a really comprehensive reading of the labor market developments that were triggered by automation in recent decades and, in some sense, you are suggesting that the coming decades will kind of continue that trend.  

And indeed, with the narrow AI systems that we have in today’s world, that view may be well-justified and based on that, I believe that your report will be really an excellent guide for policy for perhaps the next decade.

But now let me do something that today’s narrow AI system cannot do, but we humans are very good at: to speculate about the future using just a tiny bit of data and a highly abstract meta model of the world.

So if we extrapolate based on our technological trajectory, our currently narrow and brittle AI systems are becoming broader and more robust, and in the next few decades, they may well reach a point where they surpass human capabilities in substantially all domains. So economically speaking, employing humans may be a dominated technology then, just like we don’t use horses for transportation anymore. Now this would be a marked rupture from the past, but it would not be the first time that human technology has fundamentally altered the course of history.  

I am afraid that we may lull ourselves into a little bit of a false sense of security if we do not consider this possibility when we speak about the Future of Work. And so I would really appreciate it if you can include your thoughts on this possibility in your presentation.

And now without further ado, the floor is yours, David!

David Autor:

Thank you. So the question that you just asked is not one I exactly was set up to answer. So it’s not in my slides, but let me sort of circle back to it later, rather than taking on the sort of the ITF horse scenario. Let me start with the report itself, and I’m going to share my screen to do that. And thank you all. It’s a pleasure to be here. Thank you for inviting me. Thank you to Ioana and to Katya for agreeing to discuss, and thanks for all of your attention. And if you guys all want to now tune me out and watch the Biden inauguration, I don’t fault you for that.

The title of the report is Building Better Jobs in an Age of Intelligent Machines, and this was a task force set up by President Rafael Reif of MIT. It was co-led by Professor David Mindell, who’s both a historian and an engineer; he’s in AeroAstro [Aeronautics and Astronautics] and the School of Humanities, Arts and Social Sciences at MIT. And Elizabeth Reynolds, who’s the head of the MIT Industrial Performance Center.

So since Anton said some of this, let me not linger. The purpose of the taskforce was constructive, to sort of survey the landscape, ask what’s changing, and then ask how can we design and leverage innovations and institutions to maximise the benefits and minimise the harms of changes that are underway. Lots of people were involved. And I won’t say them by name. And let me just kind of start, I’m going to start with the economic context. Why? One I’m an economist, first and foremost. And second of all, I think it’s, it’s the thing that frames the debate and motivates the entire discussion. And then I’ll step back and talk about technological and institutional forces. But I want to start with the economics.

As many of you will be aware, we’ve been talking about the obsolescence of human labour for quite a long time, for a couple of centuries. And there have been periods, periodic moments of great concern. So you’re all familiar with the Luddites. And that’s not the only time. Even prior to the Great Depression, the US Secretary of Labor was talking about the notion that machines would be used for, quote, “scrapping men” – as opposed to scrapping machines, we would be scrapping people. In the 1960s President Johnson of the United States set up a Blue Ribbon Commission on automation and employment. And the concern at that time was that in the post-war period, productivity was rising so fast, that it would threaten to outstrip demand, or at least that was the concern. And so the outcome would be mass unemployment, because there’ll be insufficient demand to keep up with all of the supply.

Of course, none of those scenarios, those dreaded, dreaded scenarios have come to pass, at least in the form that people envisioned them. This figure just reminds you that the fraction of the adult population here in the United States – this has to be true in most advanced economies – has risen: the fraction [of the population] working has risen over the last 100 years. It’s risen because women have left often highly constrictive, unpaid domestic employment to enter the paid labour force. The fraction of males working has come down over time, but generally, that’s a positive feature, reflecting the fact that people don’t have to work until the point at which they expire. So, despite decades, you know, centuries of concern about the possibility of [running out of] work, we haven’t seen it. That doesn’t mean we can’t ever see it. But there’s no current evidence that we’re anywhere close to running out of jobs. In fact, prior to the pandemic, we were in the United States as close to full employment as we have been in quite a long time. And that’s true, as the Economist has reminded us, throughout the developed world, throughout the industrialised world.

So let me say, Why haven’t we run out of work so far in any case? So really several answers to that question. They go from the kind of the most prosaic to the most interesting. The most prosaic answer is that when we automate, we become more productive, that makes us wealthier, you consume more, and that creates work. So that’s one answer.

The second answer, which I think is less obvious but, in fact, more important, is about automation itself. Many of us think of automation as eliminating tasks; that would be true for artificial intelligence, that will be true for many forms of machinery, and so on. And automation does do that. It absolutely does eliminate certain types of work. Simultaneously, it makes people more productive in the work that remains – often because what automation does is give us better tools. You can see this at all levels: roofers use pneumatic nail guns to hang shingles; doctors deploy batteries of tests to make diagnoses; architects rapidly render designs on the computers; teachers deliver lessons through telepresence; long haul truckers use route planning software to make sure they never carry an empty load. So a second reason that automation doesn’t simply eliminate work is that it makes us much more productive with the work that we do, and that increases our marginal product, it lowers the price of goods and services, again, boosts demand. And so it makes people more valuable. There’s no way that we could command the wages we do – have such high marginal products – if we didn’t have vastly improved tools that come from our machines and our computers and artificial intelligence.

And the final reason that automation has not eliminated work, in addition to creating wealth and complementing the work that we do, is that it leads to a lot of new work. This is a figure from an ongoing working paper, actually a working project, not a paper yet, that I’m doing with Anna Salomons at Utrecht University and Bryan Seegmiller, who’s an MIT PhD student. And what we do here is we look at employment across 12 categories of jobs, covering all of US employment. In 1940, for example – the blue bar is the fraction of all employment – in 1940, more than 25% of work was in production, was in manufacturing, almost 20% was in farming and mining, and everything else was much smaller. So those two categories comprised about half of all jobs. If you look in 2018, the height of the maroon and the teal bar together shows you that in 2018, about 22% of employment was in professional work, about 14% in clerical administrative work, and a good amount in personal services.

So the composition of employment has changed enormously. Now I want to draw your attention just to the maroon or the pink, depending on how it looks on your monitor. This shows you the fraction of work in 2018 that exists in occupations that had not yet been invented in 1940. I’ll define what that term means in one second, but let me just say, if you add these up, what you see is that more than two thirds of all the jobs that people are doing in 2018 are jobs that did not exist in 1940. Let me give you some examples of that and then explain where that comes from. So here are jobs that didn’t exist, that were added to the US Census by decade. So in 1940 automatic welding machine operators were added. In 1950, airplane designers – let me just read through the list – textile chemists, computer application engineers, controllers of remotely piloted vehicles, certified medical technicians, artificial intelligence specialists, wind turbine technicians, pediatric vascular surgeons. So in this list, what you can see is these are primarily jobs where new expertise is demanded by the introduction of new technologies, right? You didn’t need airplane designers before we had the Wright brothers, you didn’t need computer application engineers before we had computers. And all these medical specialties, of course, come from deepening knowledge. And so when we create new technologies, we create new demands for human expertise to service, to implement, to design, to advance, to apply those technologies.

Part of where new work comes from is that as we make the world more complicated and interesting, we create new work for ourselves within it. Now let me turn to the other side of this figure that I was covering up. Here are some other jobs that were added by decade: gambling dealers, beauticians, pageant directors, hypnotherapists, chat room hosts, sommeliers, drama therapists. These are all jobs that do not obviously have a technological component, but instead reflect rising incomes, creating demand for new luxury goods and services. Right, so many of these things, mental health counsellors, sommeliers, drama therapists would not have been perceived as needed some decades ago, but now are obviously demanded and paid for, supplied by people, primarily. And so what this suggests is that new work doesn’t just emerge, per se, from technology: rising incomes and market scale themselves create these new opportunities that people jump into.

Let me just say, from where did we get this list? To do this, we’re building on work by Jeff Lin, who’s an economist at the Federal Reserve Bank of Philadelphia. And what we’ve done is we’ve taken historical census documents and used these kind of micro lists of occupations and industries and catalogued their changes over decades. And then we’re analysing where those come from. I’d be happy to talk more about that. But let me know.

The point I want to take away here is, as we change the world, we eliminate work, we create work, and we also increase wealth. And those processes have roughly tended to keep in balance over the course of decades. There’s not a law that says that they have to do that. But they have tended to do that. Okay, so if that’s the case, what’s to worry about? My goal here is not just to tell you not to worry about anything. I think there’s plenty to worry about. It’s just not obvious to me we’re worried about the right thing.

The concern that, really, I find very focal is the disjuncture between productivity and compensation growth, and many people call this the Great Divergence. This just shows that in the US, after the mid-1970s, the trajectory of productivity growth remains pretty steep, though not as fast as in the immediate post-war period. That’s this purple line, right. So this is the post-war period, and then it slows down in the 70s, picks back up again, not as fast. This is average compensation growth, what the average worker receives; it keeps pace with productivity until the early 2000s, when they diverge here, and that’s the falling labour share. But then the real cause of concern, of course, is the flattening of median compensation after this period. And so what it says is, productivity is rising, average earnings are rising, but inequality is increasing so fast, that the median person is seeing almost none, the median worker is seeing almost none of this productivity growth. Now, let me say you can quibble, it’s reasonable to quibble: Well, is the median really rising by zero? Or is it rising by more? Are we understating the growth of real incomes? And it’s possible that we are, but that would also cause us to understate productivity growth and mean compensation growth. So in other words, that gap is real, even if you think the levels are more dramatic than they should be. So this is a really important phenomenon. And I think if you ask why are people so concerned about automation, if we know that automation or technological progress raises productivity and raises national incomes? Well, this figure shows you a very good reason for concern, because the median person could correctly say, looking at the last four decades of economic history, I see that the country has gotten a lot richer, and yet the typical worker has really not gotten richer. And so it’s quite possible to have a lot of productivity growth without a lot of shared prosperity. And I think that is a huge concern.

Now, let me not be just US-focused here. You can ask, "well, what if I drew this graph for other countries?" I don't have a version of this graph for other countries. But it is the case that in most advanced economies, the median has not grown as fast as the average and has not kept pace with productivity, reflecting the growth in inequality as well. The US, though, is an outlier, as in many things, in the degree of this gap. And if you look across countries – and the OECD does put together statistics on that – the US has done extraordinarily badly in this respect, if you consider this gap a bad thing, as I do.

Okay, simultaneously, as you'll be aware, there's been enormous growth of earnings differentials. You can see that a lot of that failure of the median reflects the failure of incomes to rise for people without four-year college degrees. But what's going on here? What are the causes? What is the cause of this disjuncture? If we had this much productivity growth, and yet there's so little compensation growth for the typical worker, what is going wrong?

I would say there are three different things that are going wrong. The first is technology itself. Although I don't think technology has eliminated work per se, the digitalization of work has made highly educated workers much more productive and made less educated workers easier to replace with machinery. One way we see that is in this kind of barbell shape of occupational growth. This figure, again using US data, looks at these 11 occupations and ranks them from low pay to high pay. The high-paid jobs are the professional, technical, and managerial jobs that require lots of education; at present, they are highly complemented by information technology. On the left-hand side, we see the in-person service occupations – personal services, including protective services – which are growing rapidly in numbers but are not becoming much more productive, and the supply of people who can do that work is highly abundant because it is not specialised. Those are growing. And many of the medium-skill occupations in production, administrative support, and sales are contracting as a share of employment. I don't think you have to introspect too deeply to see how that is related to computerization – not AI, really – which has made many of those codifiable tasks much more subject to automation. So technology itself has played a big role in changing the structure of occupations.

One interesting way you can also see this – and this goes back to my work with Anna Salomons and Brian Seegmiller – is to look at the growth of new work over time. This figure shows you the addition of new work between 1940 and 1950. What I want to draw your attention to is that a lot of the fastest-growing categories of occupations – those getting new titles, new types of work, not just more people – are found in the middle: construction, transportation, production, clerical and administrative support, and sales work. If I plot that same figure for 2000 to 2018, what you see is that all of the new work being added is found at the tails: on the one hand, in the highly paid, specialised occupations, and on the other, in personal services. So the direction of technological change has actually shifted, in a way that is not just displacing things in the middle but creating new activities at the edges.

And let me just link that to one other phenomenon. In the work with Anna and Brian, we look at the relationship of new work creation to innovation, where innovation is measured by patents. And we show that you can predict where new work is appearing according to where innovation is appearing. This figure divides patents across the 20th century into broad industry categories, and over time there is an amazing shift. In the first part of the 20th century, from around 1900 to 1930, the largest categories of patenting are in manufacturing and transportation – that's the dark blue and the kind of maroon initially below it. In the next several decades that moves to chemicals and electricity – the brown to purple. And if you look at the last 40 years, the majority of patenting has been in just two categories: instruments and information, which is the mustard colour, and electricity and electronics. So as the locus of innovation has shifted, the locus of new work creation has also shifted. And that's very strongly tied to the growth of occupations at the top.

Now, let me say, technology is not the only factor that matters. Globalisation has been a huge positive for world welfare, but it has placed a lot of pressure on manufacturing jobs and manufacturing-intensive communities. In the United States, for example, we have what some would call the China Trade Shock: when China joined the World Trade Organisation in 2001, its import penetration into the United States accelerated remarkably, and US manufacturing employment fell pretty steeply. And although that only amounts to a couple of million jobs in a labour market of 150 million workers, it was very strongly regionally concentrated and strongly felt. So a second factor that contributes to these changes in work and the divergence between productivity and incomes is trade pressure, or the way trade pressure has been managed.

The third factor, and I would say, what makes the US distinctive from other countries, is institutions. Weakened labour unions, historically low minimum wages and outdated employment regulations have been extremely harmful to the rank and file workers, to the median worker. You can see this in a variety of ways. For example, this just shows you the purchasing power adjusted hourly earnings of low-educated workers in 2015, according to the OECD. The US is here at $10.33. If you want to do a little better, just head to Canada, a little bit to the north, where wages for similar work are about a third higher. If you want to go lower, you would have to go to Portugal or Greece or the Czech Republic, to find low paid workers who are paid as little – and that cannot be a function of skills or differences in jobs. You would find McDonald’s workers in all of these countries doing basically the same work and yet wages vary dramatically among them. And I view that, especially when we look across these high income countries, as a function of institutional choice rather than underlying technology.

You can also see this in terms of collective bargaining. The US is an outlier in having extremely low and falling collective bargaining coverage. The UK actually has a lot in common with the US labour market and has seen large declines as well, and a big opening up at the bottom. Throughout the OECD, collective bargaining has been falling, but to a much greater extent in some countries than in others. Germany is another great example, and Germany is also a country that has seen a very rapid, very sizeable increase in inequality from the 1990s to the present. Finally – I think I've already mentioned this, and you will all be aware of it – the US minimum wage is remarkably low, almost meaningless. In real terms, depending on how you deflate it, it is at the same level at present as it was around 1950, despite massive productivity growth. Now, US states have taken the lead on this; we'll see what happens in the Biden administration that is being sworn in as we speak.
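The claim about the real value of the minimum wage rests on simple deflation arithmetic. A minimal sketch, with rough illustrative price-index values rather than official BLS figures:

```python
# Minimal sketch of the deflation arithmetic behind "the real minimum wage
# is about where it was around 1950." The index values below are rough
# illustrative figures, not official CPI data.

def real_wage(nominal: float, cpi_then: float, cpi_base: float) -> float:
    """Convert a nominal wage into base-year dollars using a price index."""
    return nominal * cpi_base / cpi_then

# Hypothetical inputs: a $0.75 hourly wage when the price index stood at 25,
# expressed in dollars of a year when the index stands at 260.
print(f"${real_wage(0.75, cpi_then=25.0, cpi_base=260.0):.2f} in today's dollars")
# → $7.80 in today's dollars
```

Under these assumed index values, the 1950-era wage comes out close to today's $7.25 federal minimum – which is the sense in which the real minimum wage has gone nowhere in seven decades.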

I want to make sure that I leave enough time, so I'm going to stop speaking within 12 minutes, and I want to leave time especially for Anton's questions. I had a section prepared on "are we getting a positive return on inequality?" Are we getting anything out of this? Let me just say: if you look across countries, you will not find labour force participation rates, economic mobility, and economic growth to be positively correlated with high levels of inequality or high levels of divergence between productivity and wages. In other words, an argument was frequently made in the 1980s and 1990s that you could take your inequality in one of two forms: you could either have dispersed wages or you could have low labour force participation rates at the bottom, but you couldn't have both high wages at the bottom and high employment. Another way of saying that is the equity-efficiency trade-off: if you want more equity, you have to give up efficiency. So if you want a more egalitarian social system, you'd have to give up higher growth, give up dynamism, and so on. There's really no evidence, at least on a correlational basis, that those patterns are visible.

Just to give you a couple of examples, and not spend a lot of time on them: if you look across countries, it is certainly not the case that more unequal countries have higher employment rates, which you might hope they would if they have very low wages – the US is a good example of having relatively low employment rates, especially among men. If you look at economic mobility, this is the famous graph by Miles Corak, which Alan Krueger called the Great Gatsby curve, showing the relationship between cross-sectional inequality and intergenerational mobility: more unequal countries have lower, not higher, intergenerational mobility. You might have hoped that high inequality would create a lot of rags-to-riches stories, but in fact it seems to create a lot of permanent social stratification.
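The Great Gatsby curve is, at bottom, a cross-country correlation between a cross-sectional inequality measure and a measure of intergenerational persistence. A toy sketch with invented numbers (not Corak's actual data) shows the computation:

```python
# Toy illustration of the Great Gatsby curve logic: across hypothetical
# countries, pair cross-sectional inequality (Gini coefficient) with the
# intergenerational earnings elasticity (IGE; higher = less mobility),
# then compute the Pearson correlation. All numbers are made up.

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

gini = [0.25, 0.27, 0.30, 0.33, 0.36, 0.41]  # more unequal →
ige  = [0.15, 0.19, 0.26, 0.32, 0.40, 0.47]  # ← less mobile

print(f"correlation = {pearson(gini, ige):.3f}")  # strongly positive
```

In Corak's data the relationship runs the same way as in this toy example: higher inequality goes with a higher intergenerational earnings elasticity, i.e. lower mobility, not the rags-to-riches pattern one might have hoped for.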

In the interest of time, that's the economic structure or foundation I wanted to lay out. And again, the main takeaway is that there has been an enormous divergence. It is not associated with falling employment or the lack of creation of work; it is associated with a divergence of incomes. And institutional factors, I believe, are at least as important in explaining the diversity of experience across countries as are differences in technology and globalisation. But so far I have not been talking about technology at all.

So let me spend a few minutes on that. And I can summarise the conclusions of our report on this in really a single sentence: the momentous impacts of technological change are unfolding gradually. What I mean is that new technologies themselves are often astounding, but it can take decades to go from the birth of an invention to its commercialization, its assimilation into business processes, standardisation, widespread adoption, and broader impacts on the workplace.

There is often a line – a kind of headline – drawn from a laboratory invention to mass displacement of labour. And we have rarely, if ever, seen that; I'm not aware of examples where we do. Looking at the technologies we surveyed in our task force – autonomous vehicles, industrial robotics, intelligent supply chains, additive manufacturing, and artificial intelligence – we find a pattern consistent with this observation. In all cases, we came away saying they are remarkably important: over the long run, they will do a great deal; in the short and medium run, they are often extremely limited. Autonomous vehicles are one example. In the long run, it's estimated that autonomous vehicles will displace 1.3 to 2.3 million workers out of transportation jobs. This has strong regional implications – a lot of the people who drive for a living are located in the South, although they drive all over the country – and potentially important consequences. But as all of you who've been reading the news will be aware, this is happening much more slowly than was predicted five years ago. For example, here's a headline I really like, from the Washington Post: "Shaken by hype, self-driving leaders adopt new strategy: Shutting up."

And why is that the case? Well, there are two reasons. One is, of course, the technology itself was overhyped. Autonomous vehicles are just not as competent or reliable as they were initially claimed to be, although I think they eventually will be – I don’t mean to say they’re not amazing, they are. And eventually they’ll be much safer drivers than people and many good things will come from that.

Second, adopting that technology at a very large scale doesn’t just mean people buying a new car, it means changing infrastructure. A lot of vehicle miles are driven by heavy machines, long haul trucks that have a service life of a couple of decades. And they work in a complex web of roads and warehouses and so on. And they will not overnight be replaced. Even if tomorrow someone introduced a truck with really great autonomous capabilities, it will take decades for the infrastructure to turn over. So that’s why it’s important to recognise that there’s an enormous gap between what is possible, and what is occurring at scale.

Another great example that we looked at – and I enjoyed learning about at the task force – was additive manufacturing. Additive manufacturing will be quite revolutionary. These are just some examples of things that were additively manufactured: a very early example, the MIT dome, made just out of plastic; a custom metal hip implant made for a patient; an aircraft fuel nozzle; a faucet – you'll notice the faucet is hollow, the water travels in those little channels on the side; and an orthodontic retainer.

Additive manufacturing is what some people call 3D printing, but additive manufacturing is probably the better term. Most manufacturing is subtractive: you start with a raw piece of material and remove parts of it until you get what you want. Sometimes it's formative, where you put things in a mould. But additive means you put the material on layer by layer, and it has the potential to transform how products are developed and realized. It can eliminate the need for product-specific tooling. It can make highly complex parts. It can consolidate multiple materials in ways that were previously impossible. And it makes it possible to envision manufacturing as a mostly digital process, where the actual turning of materials into final objects occurs only at the very end of the supply chain, and most of manufacturing is in the design, engineering, and prototyping – itself enabled by additive manufacturing. So this is a very big deal. But at the moment it's a very tiny part of the market, and it will take a very long time. In the long run, I do think it will reduce the employment of people making things and increase the employment of people designing things. Like many of these technologies, it will slowly devalue a lot of the physical skills and make the cognitive skills more consequential.

Just to summarise what I've said here: these AI and robotic applications take time, often decades, to develop and deploy, especially if you're talking about safety- and production-critical applications. For example, it's noteworthy that airplanes had three pilots 50 years ago; they're down to two, but they've been at that for quite a long time, despite incredible advances in autonomy. The largest labour market effects of information technology that we're seeing at present still stem from maturing technologies of two decades ago, like the Internet, mobile computing, electronic health records, and e-commerce. We can see lots of glimpses of what's coming, but it's going to take a long time to fully roll out. And this time window offers an opportunity for investment – investment in skills in particular. Let me wrap up, because I want to leave time for discussion.

The main argument of our report is that institutional innovation must complement technological innovation. In particular, we argue that if the wave of technology that we're seeing now deploys into the institutions we have in place, we will have bad results, as we have arguably had for the last four decades. Those institutions have not done enough to translate rising productivity into anything like shared prosperity, and that has had real social and political costs. But that was not necessary. And we can see, from the diversity of experiences across countries using the same technologies and facing the same force of globalisation, that they have done very differently, and not obviously at great cost; they haven't given up a lot to get better results. So we talk about three places where innovation is most needed. One is, of course, investing and innovating in skills and training – and any economist woken up in their sleep would tell you that's necessary. And that's true: in the long run, human capital is critical. If we did not continue to raise our skills to keep pace with the technologies that demand those skills, we would have a problem. We have done that successfully over a century, but we must keep doing it. The second is ensuring that productivity gains translate into better-quality jobs. And the third is expanding and shaping innovation itself. I'll say just a minute on each of these and then I will stop.

Instead of walking through this long laundry list – I know Katya will say more about this in her remarks – I will just say that we talk about a variety of ways of investing and innovating in skills and training at scale. And one positive development: you know, it's boring to talk about education; it's one of the least sexy things you can do with technology. But it is the case that we are at a moment where the technology for education is suddenly much better, or at least potentially much better. Although we have not found the secret sauce for making online learning in every way as good as in-person learning, in the long run it will be better. Not only will it be better, it will be cheaper, more accessible, and much more immersive, when people can use tools like augmented reality and virtual reality and do hands-on learning via simulation. When more of learning looks like video games and Netflix, it will be much more appealing to many more people and more broadly used. Although it is hard to improve education, and it's especially hard to retrain adults – lots of research demonstrates that – I think the technology for doing this is going to help us a great deal, both in terms of cost and even more so in terms of efficacy.

A second thing is innovating institutions to go along with technological change. To use US examples: we talk about modernising the unemployment insurance system; we talk about making sure health care provision is not intimately dependent upon employment; and certainly restoring the real value of the US federal minimum wage and indexing it to inflation – all evidence suggests that this has benefits at low cost. A lot of what we talk about in the report, which I won't cover here, is strengthening and adapting labour laws: enforcing existing protections, but also allowing for innovation in worker representation. The US has a highly atrophied form of collective bargaining, but bringing it back at full strength in its current form is not necessarily desirable – it's highly adversarial, and arguably not a good way to go about it. We talk about alternative models, but this is an area where experimentation is desperately needed. In addition, we need to build legal protections for workers to organise without retaliation in non-traditional realms. Currently, US domestic and home-care workers cannot legally organise, nor can farm workers, and the status of independent contractors is unclear. So finding room for collective bargaining in an innovative way is critical.

Finally, we talk about expanding and shaping innovation itself. As you may be aware, the US has really slacked off on public investment in innovation. R&D in the US economy has stayed roughly stable as a share of GDP, but that's because the public-sector share has fallen enormously while the private-sector share has risen. Now, you might say: isn't that good enough? Are those substitutes – if the public sector doesn't do it, the private sector does? Shouldn't we be happy about that? And I would argue no: these things are complementary. The public sector does a different type of innovation, earlier in the pipeline, more focused on public goods; it provides a lot of the fundamental science. So these things go together. In fact, there's nice research by my colleagues Pierre Azoulay and Danielle Li showing that public-sector R&D investment leads to private-sector patents, among other things.

So how do we do that? Again, when I talk about policy I'm constrained to speak about the US – or at least, it's hard to speak about many countries simultaneously. One way is not only to increase federal R&D spending but to use it to set the agenda. We forget, or many people forget, how important the government has been, not just in paying for things but in deciding what things should be paid for – whether that was the telecommunications revolution or NASA; at one point our space agency was consuming more than 50% of all integrated circuits available in the United States. It's the same with the internet, and DARPA provided the initial catalytic moments for the self-driving car. So the federal government can set the agenda on innovation towards valuable things: towards education, towards things that increase worker productivity, and many others. A second is that innovation has a geographic element, and it can be used to bring prosperity to other places, not just the coastal regions of the United States. And a third thing we talk about in the report, which is quite controversial – here we're building on the work of Daron Acemoglu, Andrea Manera, and Pascual Restrepo – is rebalancing our tax system, which at the margin subsidizes firms to eliminate workers and replace them with machines, which is not something we think is desirable. Capital investment in general is good, but at the margin it could be counterproductive, depending on where it goes.

So let me conclude my canned remarks and then open this up. I believe the work of the future is ours to invent. There is a palpable fear, and at least as we perceive it – this is a hypothesis, not in fact proved – a lot of that fear comes from the conjunction of advancing innovation and fairly stagnant labour market opportunity. And that will get worse if we don't take countermeasures. But we do not see a trade-off between economic growth and strong labour markets, as far as we can tell; these things are either complementary or orthogonal, not at odds with one another. As I stressed at the beginning, the majority of today's jobs had yet to be invented a century ago, and a lot more are to come. The job of the present, as we understand it, is to build the work of the future, for a world that we all want to live in. And that is not a technological inevitability; it's a matter, to an important extent, of social choice.

Let me speak very briefly to the question that Anton asked me. There is a scenario, the "horses" scenario, but of course that has been with us for a long time. Think of the things we used to do: we used to dig ditches by hand, we used to pound tools out of wrought iron, we used to do bookkeeping using books, right? We have automated ourselves out of all kinds of work. If we were limited to the things we did 100 years ago, with employment in agriculture, we would be in big trouble. We have continued to complement ourselves with the machines we create, and with this appearance of new work, which I'm arguing has been extremely important. Now, we don't know that this will stay in balance. There's a nice theory paper, again by Acemoglu and Restrepo, called The Race Between Man and Machine, that talks about the forces that potentially counterbalance this: what you need in equilibrium for labour to stay valuable, and what you need for innovations that complement labour. Those are obviously hard to check. But there's no evidence yet that we're seeing what people are worried about – vast displacement. What we're seeing is some devaluation of skills. We're not seeing an excess supply of labour, per se.

I perceive the challenge, for at least the next decade and maybe the next few decades, as distributional rather than one of pure human obsolescence. If we reach the point of pure human obsolescence, what we then have is a much bigger distributional problem. Because it's not that we're poor – at that point we're incredibly rich, labour is no longer scarce, and we own the machines; they don't work for themselves. So we have a fabulously wealthy society with no obvious means of distributing income, in the sense that most of our income distribution systems are based on the scarcity of labour. Where labour is no longer scarce, we would have income without it really being clear who gets to make a claim on it. I think that's a huge social organisational problem, and I don't look forward to it. I like the problem of scarce labour; I think it has many, many virtues. So I don't see any early indications of that scenario, but I know that many will disagree with me. Let me stop there. And thank you very much for your attention – I look forward to the conversation that follows.

Anton Korinek:

Thank you so much, David, for your deep insights! I find it very impressive how you managed to distill this huge wealth of data into high-level insights that are both really intuitive and also so policy-relevant.

Let me invite everybody in the webinar to submit questions through the Q&A field and to upvote the questions that have already been submitted and that you find particularly interesting. So we have two discussants now, and after that we will open the floor to the questions that you are all posing.

Our first discussant is Katya Klinova. Katya directs the AI, Labor, and the Economy Research Programs at the Partnership on AI. She focuses on studying the mechanisms for steering AI progress towards greater equality of opportunity and improving the working conditions along the AI supply chain. And in the interest of full disclosure, I have had the pleasure of working with Katya as part of the AI and Shared Prosperity Initiative that is dedicated precisely to this cause of sharing the economic benefits of AI broadly across society.

Katya, the floor is yours.

Katya Klinova

Thank you very much, Anton. I want to start by thanking David and his colleagues from the task force for creating this encyclopaedia of a report. You know, in the last three to four years there's been a real flurry of future-of-work reports, and a number of them offer valuable insights. But really few come even close to this one in its comprehensiveness and level-headedness. So really, my biggest problem with it is that it's the final one from the task force. I wish they kept going, because it's been a really great service to the future-of-work community, shaping and directing this conversation. I want to use my time to invite a conversation about implementing the report's recommendations, which I think are excellent. But we as a society, I guess, still have work to do figuring out how exactly to implement them and what guardrails need to be put in place. And because our time is limited, I picked two of my favourites, which I think are incredibly important. The first one is about allowing innovation in new forms of worker representation in the workplace and in corporate decision-making. And the second one is about committing to an innovation agenda that is targeted towards augmenting rather than replacing workers. So let's take these two in turn.

Firstly, about worker representation. Let me begin by showing you this graph that David already shared, which is slightly depressing: it shows a decline in union membership across the OECD, and the US has declined the most proportionally and has come to the lowest point of all the countries. I think it's instructive to look at this chart back to back with the graphs that showed the forking of growth between the highest wage earners and the lowest wage earners in the OECD countries. And I just want to draw your attention to the Nordics, which have not experienced the fork – and the growth there has just been much higher, approaching 60%, or almost 80% in Sweden. Of course, not all of that has to do with union participation alone. And yet I think it's insightful to look at these graphs side by side and regain our appreciation for unions, and appreciate that the report talks a lot about this and calls for innovation there.

So one thing that unions can do when it comes to technology – and this is, again, a story that the report tells as well: UNITE HERE, a few years ago, was the first union that was able to put clauses around technological development into their union contracts. Now Marriott is obliged to give 165 days' notice if they're about to bring automation to the workplace, and workers are entitled to retraining. Which is a great provision, but of course we know that technological disruption to workers' welfare and lives can come not only from their employer deploying technology; it can come from Silicon Valley. This is what happened in the hotel industry as well – the biggest players in the hotel industry are being blown out of the water by Airbnb. And it's not only that industry: you can put clauses around technological adoption into contracts at brick-and-mortar retail stores, but the main disruption can come from the rise of e-commerce, for example. That is not necessarily bad – disruption can be a sign of a very healthy and dynamic economy. It's also not new or unique to the digital age, so it's definitely not something new for unions to deal with.

And yet, if we are thinking of AI as a force that can dramatically expand the variety and range of human tasks that can be automated, then we may be entering an age in which, for most workers around the globe, some faraway Silicon Valley company can be as relevant – in its ability to influence their well-being – as their own employer. And of course, that relationship is not covered by traditional union contracts, and cannot be, because Silicon Valley does not employ all of these people, neither directly nor indirectly. That's why, in addition to all the reasons that the report lists, such innovation is so badly needed in unions.

So you can ask who will advocate for aggregate labour demand to stay high and for human labour to stay relevant. We know that it can take a hit in aggregate. These are graphs from a seminal paper by Professors Acemoglu and Restrepo showing that while automation kept pace with the creation of new tasks for humans in the four decades following World War Two, in the last three decades that has really changed, and automation is now outpacing new task creation by far. You could, of course, think of the government and the (benevolent) policymaker as the one who would be thinking about this aggregate picture. And this is what brings us to the second recommendation, about shaping technology to augment, as opposed to replace, workers.

And the graphs that David already showed – again, slightly depressing, or actually quite a bit depressing – that showed the erosion of both employment and earnings for the median worker, make us think about the barriers: why does the median worker clearly not seem to be sharing in the productivity growth that the economy has been experiencing? Even if productivity hasn't been growing as quickly as previously in history, it has been growing, and the gains have not spread evenly across education and wage levels. Clearly, technologies of late have been hugely complementary to knowledge workers, but not necessarily to everyone else. So how can we commit to boosting the productivity of the typical worker, to boost the demand for their labour and their wages in turn? When we think about productivity-boosting technology, recent examples do raise concerns, because we need guardrails – not yet in place – between making workers more productive and really exploiting them. In one example from a few years ago, technology was patented for wristbands for warehouse workers that give haptic feedback: they basically buzz if you're putting an item into the wrong bin. And this sounds like a great idea – something that can tell workers about a mistake they're making and overall raise their productivity. But of course, that same wristband can track every single movement of a worker and can tell their employer how many times they took a break or went to the bathroom, any details of that sort. And people are obviously raising concerns about that.

That’s not the only example. There are now startups that literally advertise by offering to tell apart efficient and inefficient workers and to build a map of who is doing what kind of activity, where and when. That might sound wonderful if you’re an employer (I don’t know if it’s compelling to many of them), but if you are a worker, you might be quite concerned, because you are human and there are days on which you need more breaks or fewer breaks. And there are obvious concerns around that information becoming fully transparent.

Really, the labour market is a market in which employers have far more power than workers, and what little power workers have comes from the information asymmetry in their principal-agent relationship with their employer. So when that information asymmetry goes away, all bets are off, in some sense. Employers no longer need to worry about offering incentives, bonuses, and carrots to induce higher performance; they can limit their tools to sticks and punitive measures if someone doesn’t meet their sometimes really exaggerated performance thresholds. And then, lastly, there is a lot of uncertainty when we try to anticipate the impact of new technology on labour demand. If you’re a central planner, or just a policymaker, thinking about how much you want to incentivise or disincentivise the development of a certain technology, you are dealing with a lot of uncertainty, and your calculation would look very different if you’re making it for a single country, especially a technology-making country that might have an ageing workforce and be in need of robots, than if you’re making it for the world as a whole, in which millions of young people enter the workforce every year in need of formal-sector jobs. And the very same technology that can be used to automate the jobs of truck drivers can also be used to automate going to the grocery store, which in many countries is still unpaid labour done by household members themselves; automating it would create new jobs for people who pack those groceries and deliver them to people’s homes. The very same technology has many different applications, which creates additional uncertainty when we try to predict its impact on labour demand.
And that is just when we think about the labour demand, but there are of course, all kinds of different effects that we might or might not want to encourage with self driving cars. There’s, of course, first of all the considerations of safety and saving lives on the road.

So, I want to wrap to give it back over to Anton but just by saying that, we are trying to think about some of these questions. If you have advice for us or just want to get involved, please get in touch. My Twitter is here. And you can also sign up at the Shared Prosperity website, partnershiponai.org/shared-prosperity. Thank you very much.

Anton Korinek:

Thank you so much, Katya. Our second discussant is Ioana Marinescu. Ioana is an assistant professor at the School of Social Policy & Practice at the University of Pennsylvania, and a Faculty Research Fellow at the NBER. She studies the labor market to craft policies that can enhance employment, productivity, and economic security. Her research expertise includes wage determination and monopsony power, antitrust law for the labor market, the universal basic income, unemployment insurance, the minimum wage, and employment contracts.

Ioana Marinescu:

Hello, everybody. David, I really enjoyed your presentation, and it touched on many policy issues that my work and my thinking have also been concerned with. I want to make two points here. The first one is very much in the perspective of what David was talking about, but I want to point out for our audience the paradox, in a way, of institutions and technology. In the US, as David already mentioned, we have one of the lowest minimum wages among OECD countries. And you might think that if technology is skill-biased in favour of more skilled workers and puts downward pressure on the wages of the less skilled, then in a free market it is good to have a low minimum wage, because that allows the economy to create more jobs. So therefore the US should be in a better position to weather these issues compared to other countries, like the country I was raised in, France, where the minimum wage is much higher.

And paradoxically, that has not been the case: employment rates are higher for prime-age workers in France than in the US. The same holds in many other OECD countries, which also have higher minimum wages. So what gives? Why is it that by making workers more expensive, we get higher employment and certainly not lower employment? Here I want to introduce the idea that this is possible because in many cases workers are underpaid relative to their productivity, which we call monopsony power. By analogy with monopoly power in the product market, where firms with market power overcharge consumers relative to cost, in the labour market employers with market power underpay workers relative to those workers’ productivity. And if that is the case, then rather than decreasing employment, as the basic model would predict (if you increase the price of something, demand for it decreases), an increase in the minimum wage can in fact increase employment under monopsony: firms can afford to pay more, and by paying more they attract additional workers. Basically, if at baseline workers are underpaid, there is a margin to increase the minimum wage without destroying employment, and as we increase wages we make jobs more attractive, which draws more workers into these jobs. In fact, my work in the US shows that this is exactly what happened across US states: where there was less competition for workers, increasing the minimum wage tended to increase employment. That somewhat resolves the paradox of the cross-country pattern, where countries with institutions that make labour more expensive seem to do better in the face of technological developments that work against the demand for low-skilled labour in particular.
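The mechanism can be seen in a minimal textbook-style sketch. This is my own illustration with invented numbers, not a model from the speaker’s research: labour supply slopes upward in the wage, each worker produces a fixed revenue, and a monopsonist sets the wage below that revenue. A minimum wage set between the monopsony wage and worker productivity then raises employment.

```python
# Minimal monopsony sketch: supply L(w) = a*w, each worker yields revenue p.
# An unconstrained monopsonist maximises (p - w) * a * w, giving w* = p/2.

def employment(p, a, w_min=0.0):
    """Employment under an (optionally binding) minimum wage."""
    w_star = p / 2            # profit-maximising monopsony wage
    w = max(w_star, w_min)    # the firm must pay at least the minimum
    if w > p:                 # wage above productivity: hiring is unprofitable
        return 0.0
    return a * w              # firm hires everyone willing to work at wage w

p, a = 20.0, 1.0
base = employment(p, a)               # monopsony outcome: wage 10, employment 10
raised = employment(p, a, w_min=15)   # a minimum wage of 15 binds
print(base, raised)  # employment rises from 10.0 to 15.0
```

Note the sketch also captures the other half of the argument: push the minimum wage above productivity (`w_min > p`) and employment collapses, so the pro-employment effect only holds over the range where workers were underpaid to begin with.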

So how do we address this? As David said, by increasing the minimum wage, and also by helping worker unionisation as well as other forms of worker bargaining power, because recent research has shown that unionisation can counteract firms’ monopsony power and thereby resist pressure by firms to underpay workers. And also, of course, through law; some of my work looks into antitrust law in particular. If we already have issues with competition in the labour market, we want to act to prevent behaviour by firms that could further diminish competition, including mergers, by which firms become bigger and more powerful, as well as no-poaching agreements and wage-fixing agreements, where employers collude among themselves to keep wages low.

So that’s my first point, explaining for the audience here some of what I think is behind the paradox that countries with seemingly more expensive labour have often been doing better than the US in facing these technological changes. The second point I want to talk about is universal basic income. As Anton said in his question to David, it is true that in the past the labour market has been able to adapt, through all the mechanisms that David explained so well. But obviously we cannot be sure what will happen in 20 or 30 years; there is a huge amount of uncertainty. And it seems to me entirely possible that at least some workers will go the way of the horse, so that their labour is simply no longer valuable in the face of new technologies. If that’s the case, then something like a universal basic income is one interesting policy innovation to think about. First of all, there is already growing interest in something like it, given what we’ve seen during this crisis with the stimulus checks, which went to almost everybody, up to 90% of US households, without any conditions, and allowed people to weather the crisis. More generally, the US social welfare system is much less protective than in other OECD countries: there are a lot of holes, and many benefits are available only if you work. Now imagine that technology were in fact massively killing jobs; we’re not there yet, but if that were the case, then without a reform of our social protection system these people would no longer be able to make ends meet. So particularly in the US, something like a universal basic income, cash for all, no questions asked, could be quite an interesting solution.

Now, many will say this is not targeted, because in the pure system everybody gets the same amount no matter their circumstances and income. But that feature is what ensures that nobody falls through the cracks: everybody gets it, with no need to apply, or only a minimal one. That is the big advantage. And even though it seems untargeted, consider the financing side. A basic income needs a source of financing, which could be many things: a carbon tax, sales taxes, additional income tax, a wealth tax, you name it. Financed by almost any tax you can think of, rich people end up paying more in absolute terms, so that on net, even though everybody receives the same basic income, rich people pay far more into the system through taxes than they get out of it. And depending on the tax you choose, you can make the scheme as progressive as you want by using a more progressive tax.
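The net-incidence arithmetic behind this point is easy to check. This is a hedged sketch with invented numbers, using the simplest possible financing (a proportional income tax): even then, a uniform payment is progressive on net, because the net transfer falls as income rises.

```python
# Net incidence of a flat UBI financed by a proportional income tax.
# All figures are illustrative, not a policy proposal.

def net_transfer(income, ubi, tax_rate):
    """UBI received minus the tax paid towards financing it."""
    return ubi - tax_rate * income

ubi, tax_rate = 12_000, 0.10
for income in (20_000, 120_000, 500_000):
    print(income, net_transfer(income, ubi, tax_rate))
# the 20,000 earner gains about 10,000 net; the 500,000 earner pays about 38,000 net
```

With a progressive tax schedule in place of the flat rate, the same structure delivers a more steeply progressive net result, which is the "as progressive as you want" point above.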

So therefore, if labour was going to go the way of the horse, I think that the universal basic income is one of the innovative ideas that’s very much worth thinking about in that context, and it already has many advantages today. So this is, you know, what I had to say for now, and I’m looking forward to the discussion.

Anton Korinek:

Let me perhaps also start with a follow-up on our discussion on human replacement, and that also weaves in two of the questions that were posed by members of the audience:

David, you observed that the main problem, if we automate away all labor at some point in the future, will be distributional. Ioana has spoken about one potential solution, the universal basic income. And you said that it is a distributional problem, but that at that point we will be incredibly wealthy, except of course, I suppose, if the automation mainly took the form of what Acemoglu and Restrepo call so-so technologies.

Now, one interesting observation is that the distributional problem we would face in that future is in some ways just a continuation of the distributional problems between high- and low-skilled workers that you emphasised at the beginning of your talk. So there really is no fundamental difference between what we may face in the future and what we have already been facing over the past decades; it is just going to be more extreme, a difference in degree.

Now, let’s say that we may, indeed, in a couple of decades, be at that point where humans are no longer economically useful. Let’s say it is cheaper to pay robots and AI than to pay humans enough to buy the basic food and amenities that we need to live. So in that world, it’s not that there is no work; it’s just that your competitive wage, your marginal product, is worth very little, say two cents an hour, while it costs $1 a day to stay alive.

Now, there will, of course, be a transition between today and that future. And some may argue that we are already in that transition in the US, based on precisely your work on skill premia, and the growing inequality in the labour market. But of course, many may also disagree. But, David, my question to you is, if we want to prepare for the possibility of that future, what would be concrete policy measures that would make sense anyways, that would help us for the transition? And then let me also invite you to respond to all the broader points that were brought up by Katya and Ioana.

David Autor:

Okay, there’s so much to talk about here, and I really appreciate the observations from Katya and Ioana. I agree that the policies are extremely hard to implement. How do we bargain over these things? What form of collective bargaining is not too restrictive and doesn’t have too many loopholes at the same time? And then, how do we shape technology in the direction that we want? Those are both very important and difficult questions to answer. And of course, I very much agree with what you were saying, Ioana: for too long we sort of assumed the labour market was perfectly competitive, like markets for toothbrushes or cereal. Of course it’s not, and now people are re-examining that presupposition. But let me try to tie this together a bit.

You know, Anton, you’re talking about a future which I view as relatively distant, but I agree that it connects; you’re right. What we have done so far is make labour less scarce in some domains and more scarce in others; you’re talking about a future where labour is not scarce at all, where there’s no such thing as labour scarcity. I don’t think we are in that world at the moment; we’re in a world with extreme inequality in labour scarcity.

And so the policies that I’ve been advocating all do some form of redistribution, but not through post-market redistribution via tax and transfer; rather, through what you would call pre-market, or really within-market, redistribution, which means changing the quality of jobs. And I don’t think that skilling alone is going to be sufficient.

This is a point that Rodrik and Blanchard made at their very nice conference a year ago: there are three ways you can do this. You can work on the supply side by building better workers, you can do post-market tax and transfer, or you can directly intervene to affect the quality of work. And I think the last of these is the least exploited and the most direct.

And so how do we do that? One way is, of course, through minimum wages. Another is through collective bargaining. And another is a focus on labour standards. I agree that the technological headwinds are against us in a way that they were not before. In the post-war period up to the 1970s, it’s clear that new work creation was very much in the middle, and so technology was helping create the middle class, even as regulation and norms and so on were complementing that. I don’t think that’s occurring now; the technology is creating new work at the very top and some at the bottom, so we have to push harder against it. But on the other hand, if you think about the figures that were brought up earlier about productivity and economic growth, countries have had remarkably different trajectories facing the same set of forces, and it doesn’t seem they paid a high price for that. In fact, I would say the US has paid a very high price for not stepping in; the price of non-intervention has been much higher than the price of intervention.

So if you ask, how do we prepare for the future? Well, one way is to take action now on whatever manifestations of it we’re already seeing: basically, start to build a social compact that invests in people, cares about the quality of jobs, and uses rising productivity to create rising prosperity. That’s the only way we’re going to be able to do this. If we wait until that day comes and all of a sudden Mark Zuckerberg owns everything, and then we all come after Mark Zuckerberg (nothing against him personally), that’s not going to be a good system of distribution. So we have to build it now, and it’s politically hard, right? It’s not economically hard, it’s politically hard.

And that’s the problem; it needs to be palatable in some way. So I’m not a big fan of UBI myself. First of all, I think it’s the answer, at the moment, to a problem we don’t have, which is a lack of work. I also think it’s not very politically palatable, certainly in the United States: people want other people to work if they’re getting money; they don’t want to give money to people whom they don’t perceive as working for it in some way. Now, that could change, but norms and perceptions of what is fair and reasonable will affect what types of tax policies you can make.

And finally, I would say, and then I’ll stop: I think work is an intrinsic good. The whole economic model in which people do work, which causes disutility, to get income to afford consumption, which is the only thing they enjoy, is completely backward. Work is incredibly important because it gives people identity, structure, social esteem, a set of relationships, and a way of life, and many people enjoy the tasks they do. I think people prefer to earn income from their work rather than having a lousy, low-paid job and getting a supplementary check; I don’t think most people find that as appealing. So in my mind, there’s still plenty of room for improving work rather than preparing for its demise. As long as work is a viable system, and as long as there’s a lot of it (and I think there is a lot of it), then working on improving equality, so that we get better distribution through employment for people who are capable of working, is much more socially palatable, much more psychologically healthy, and moves us toward a kind of social compact that is more robust. I’ll pause there.

Anton Korinek:

Thank you, David. Let me maybe give Katya and Ioana an opportunity to jump in here. I think Ioana, you looked like you were about to contribute something?

Ioana Marinescu:

Yes. So, you know, I don’t think the two are contradictory, and that was the sense of my remarks. First, I said, let’s raise the minimum wage, improve collective bargaining, and so on and so forth. So I think there should be policies out there that improve the quality of jobs as well as workers’ position in this bargain. I don’t see the two as substitutes.

But at the same time, as I mentioned before, the social protection system in the US is quite a bit less generous than in other countries, and so it’s worth thinking about how to improve it. It’s not that basic income is highly popular yet, but I think it’s becoming more popular. And one of the things it has going for it is the universal part, which means it can no longer be perceived as a handout for someone: why are they getting it and I’m not? We’re all getting it anyway. That potentially changes the perception and can expand, a little bit, what is politically possible budget-wise. If people really want this, they might be willing to spend what it takes to get something like it. The final quick point I wanted to make is that, yes, work is very important to many people, and it’s not just about the disutility of work. But it doesn’t always have to be classic paid market work. First of all, as I said, we should make more good, classic market work available. But something like UBI also enables people to do other types of non-market work: caring, volunteering, artistic activities, whatever else they want to do that is not remunerated. Having something like a basic income could economically enable people to engage in those sorts of activities, which are work-like but not necessarily remunerated by the market.

Anton Korinek:

Thank you, Ioana. Katya?

Katya Klinova:

Yeah, I think, Anton, your question is very important: is there a discontinuity between the distribution problem we’re facing now and the distribution problem we might face if labour is not scarce anymore? And I am with David all the way in that I don’t want to face that problem. It seems like a qualitatively different problem, because it’s a much more fragile setup than the one we’re facing right now. Even if, for example, UBI provided enough for people to live on and pursue their artistic and other interests, which I would be all in for, it relies on whoever possesses all the productive forces being willing, generation after generation, to continue to redistribute, while they no longer need people to keep producing and people don’t have political power. This political setup is what I worry about the most in the hypothetical scenario that you laid out for us.

David Autor:

It would be like the resource curse, right, only for everything. We know that countries with basically one source of income tend to be terribly governed, because it’s so easy to monopolise that, whether it’s oil or diamonds or something else; here it would be the machines that we would worry about.

Anton Korinek:

Thank you, Katya. And David, let me bring up one more question from the Q&A that relates to education, and let me broaden it a bit. David, what types of education would you advocate most? And how much should the public sector be involved versus the private sector? And let me perhaps also ask: is there a limit to the human capacity for education? Let’s say I’ve worked very hard to get a PhD; how much more will I have to educate myself to still be relevant in the labour market three decades from now? Will I be mentally able to process that?

David Autor:

Yeah, excellent question. I really think the fundamental needs of education have not changed. It’s not about learning specific skills; it’s about being able to read, to think logically, analytically, and quantitatively (and that doesn’t just mean math), to present and communicate (people’s writing skills have actually gotten worse over time, by the way), to communicate with a group, and to lead and work in a team, and so on. These things are incredibly foundational. Now, you might ask, what will people do for work beyond that? I don’t know, but I think it’s very likely that the sort of realm in which humans will maintain competitive advantage is in things that continue to require flexibility and interaction with others, but draw on a base of expertise that interacts with technology. You can’t make a living just being empathetic, but you also can’t make a living just adding up columns of numbers: one of those is not scarce because of human capacity, and the other is not scarce because of machine capacity. You need to be in the place where those things are complements, not substitutes.

And now, on the finite capacity of humans to learn: certainly there has to be a limit, but it’s not clear how close we are to it. During the high school movement at the turn of the 20th century, there were concerns about sending kids to high school. One, isn’t it expensive, with all these teachers and all these books? Two, the opportunity costs are really high; the kids can’t work on the farm. And three, is it really reasonable to think that all these "cretins" could actually achieve that level of education, a high school diploma, which was considered elite? There was a belief that we had already hit the capacity of most people; that era was, after all, the heyday of eugenics, when people thought they knew who wasn’t going to be able to do it.

And we’ve gone a lot beyond that. More than that, I would argue we’re getting more efficient at learning. So I do think we’ll eventually hit a limit, but one way we deal with that limit is that we specialise. There are hundreds of types of doctors now, whereas a century ago there were a dozen. They’re more specialised because the technology has deepened and expertise has deepened, but people’s capacity for expertise is finite, so they specialise. You see this in our field as well: there was a time when someone like Paul Samuelson could do all of economics. Now only Daron Acemoglu does all of economics; everyone else has to specialise. So it’s possible that we will specialise and remain complementary to the tools we create through this specialisation.

Anton Korinek:

Katya, Ioana, would you like to add anything?

Katya Klinova:

I am excited about the potential of AI to facilitate and make more scalable teaching at the right level. I’m sure my fellow panellists have never been in this situation, but I’ve definitely sat in a class where I was not able to follow as well as some of my classmates. If there were a system that could catch me up on what I was missing, I think that could really improve the quality of education and scale it to a lot of the countries where it’s scarce.

Ioana Marinescu:

Actually, I did have a comment about Katya’s potential political dystopia. Of course, this is a bit distant; I agree with David that even if all this is a threat, it’s quite a few decades away. But I think this also speaks to thinking about modes of property: who owns what, and how. I’m very committed to a market economy, but that doesn’t necessarily mean we have to have a few people owning these machines. So it’s worth thinking about a potential transition from here to there, perhaps through wealth taxes, which over time, if they’re high enough, would draw down wealth inequality. Things like that are very much worth thinking about in connection with technology, if we take seriously the concern that in the longer run a few people will own the productive capacity for everything, which indeed I think would be extremely politically dangerous for society.

Anton Korinek:

We are already towards the end of our webinar. David, would you like to make any concluding statement before we wrap it up?

David Autor:

Well, first of all, thank you. Thanks to all of you. It’s been a great conversation; it could go on for hours, and it was a lot of fun, at least for us, I don’t know about the audience. I think something Katya raised is the question: is it a discontinuity or a continuity? And that’s a really important question. I view the singularity view, which has been around for a while, as not realistic: the notion that there just comes a day, a crossing point, and boom. I actually think we hit diminishing returns on most things, not increasing returns. But I do think it’s useful to say, well, maybe we are seeing some manifestations of that. And if that’s true, in some sense that’s good, because it means there’s a transition path we can work with, as opposed to it arriving on Monday. The last thing we want is the whole economic system collapsing on Monday; much better that this occurs over a long period of time. So I think it’s a really important and very focal question for thinking about AI and the future of humanity: is this going to be a big bang, or is it going to be a continuation that creates these specific challenges? Maybe it’s more the latter, and hopefully we can shape that as well. And I think something we overlook is our capacity, not only our capacity to shape where the technology goes, but the degree to which we have already done that: the degree to which the world we live in is the one we created, not the one that technology created. And in many ways, we did it intentionally.

Anton Korinek:

Thank you, David, for that uplifting conclusion that I hope we can all strongly agree with. And thank you all for your contributions. I also hope that the new administration that has been sworn in while we were having this webinar will listen carefully to everything that we have learned from David, Katya, and Ioana during this webinar, and I hope to welcome you all soon to the next GovAI webinar. Thank you.




22May

Audrey Tang and Hélène Landemore on Taiwan’s Digital Democracy, Collaborative Civic Technologies, and Beneficial Information Flows


The webinar conversation involved the following participants: 

Audrey Tang is Taiwan’s Digital Minister in charge of social innovation, open governance, and youth engagement. They are Taiwan’s first transgender cabinet member and became the youngest minister in the country’s history at the age of 35. Tang is known for civic hacking and strengthening democracy using technology. They served on the Taiwanese National Development Council’s Open Data Committee and are an active contributor to g0v, a community focused on creating tools for civil society. Audrey plays a key role in combating foreign disinformation campaigns and in formulating Taiwan’s COVID-19 response.

Hélène Landemore is an Associate Professor of Political Science at Yale University. Her research and teaching interests include democratic theory, political epistemology, theories of justice, the philosophy of social sciences (particularly economics), constitutional processes and theories, and workplace democracy.

Ben Garfinkel is a Research Fellow at the Future of Humanity Institute. His research interests include the security and privacy implications of artificial intelligence, the causes of interstate war, and the methodological challenge of forecasting and reducing technological risks.

You can watch a recording of the event here.




21May

‘Economics of AI’ Open Online Course


GovAI’s Economics and AI Lead, Anton Korinek, recently released an open online course on the Economics of AI. The course introduces participants to cutting-edge research in the economics of transformative AI, including its implications for growth, labour markets, inequality, and AI control.

The course is free, supported by a grant from the Long-Term Future Fund. The structure involves six distinct modules, the first of which is accessible to anyone with a social science background, while the other modules are aimed at people with an economics background at the graduate or advanced undergraduate level. 

The course proceeds to analyse how modes of production and technological change are affected by AI, investigate how technological change drives aggregate economic growth, and examine how AI-driven technological change will impact labour markets and workers. The course closes by looking at several key questions for the future in AI governance: How can progress in AI be steered in a direction that benefits humanity, and what lessons does economics offer for how humans can control highly intelligent AI algorithms?




21May

Joseph Stiglitz & Anton Korinek on AI and Inequality


Joseph Stiglitz is University Professor at Columbia University. He is also the co-chair of the High-Level Expert Group on the Measurement of Economic Performance and Social Progress at the OECD, and the Chief Economist of the Roosevelt Institute. A recipient of the Nobel Memorial Prize in Economic Sciences (2001) and the John Bates Clark Medal (1979), he is a former senior vice president and chief economist of the World Bank and a former member and chairman of the US President’s Council of Economic Advisers. Known for his pioneering work on asymmetric information, Stiglitz’s research focuses on income distribution, risk, corporate governance, public policy, macroeconomics and globalization.

Anton Korinek is an Associate Professor at the University of Virginia, Department of Economics and Darden School of Business as well as a Research Associate at the NBER, a Research Fellow at the CEPR and a Research Affiliate at the AI Governance Research Group. His areas of expertise include macroeconomics, international finance, and inequality. His most recent research investigates the effects of progress in automation and artificial intelligence for macroeconomic dynamics and inequality.

You can watch a recording of the event here or read the transcript below:

Joslyn Barnhart [0:00]

Welcome, I’m Joslyn Barnhart, a Visiting Senior Research Fellow at the Centre for the Governance of AI (GovAI), which is organizing this series. We are part of the Future of Humanity Institute at the University of Oxford. We research the opportunities and challenges brought by advances in AI and related technologies, so as to advise policy to maximise the benefits and minimise the risks from advanced AI. Governance, this key term in our name, refers [not only] descriptively to the ways that decisions are made about the development and deployment of AI, but also to the normative aspiration that those decisions emerge from institutions that are effective, equitable, and legitimate. If you want to learn more about our work, you can go to http://www.governance.ai.

I’m delighted today to introduce our conversation featuring Joseph Stiglitz in discussion with Anton Korinek. Professor Joseph Stiglitz is University Professor at Columbia University. He’s also the co-chair of the high-level expert group on the measurement of economic performance and social progress at the OECD, and the chief economist of the Roosevelt Institute, a recipient of the Nobel Memorial Prize in Economic Sciences in 2001, and the John Bates Clark Medal in 1979. He is a former senior vice president and chief economist of the World Bank, and a former member and chairman of the US President’s Council of Economic Advisers, known for his pioneering work on asymmetric information. Professor Stiglitz’s research focuses on income distribution, risk, corporate governance, public policy, macroeconomics and globalisation.

Professor Korinek is an associate professor at the University of Virginia, Department of Economics and Darden School of Business, as well as a research associate at the NBER, research fellow at the CEPR, and a research affiliate at the Centre for the Governance of AI. His areas of expertise include macroeconomics, international finance, and inequality. His most recent research investigates the effects of progress in automation and artificial intelligence [on] macroeconomic dynamics and inequality.

Over the next decades, AI will dramatically change the economic landscape and may also magnify inequality both within and across countries. Anton and Joe will be discussing the relationship between technology and inequality, the potential impact of AI on the global economy, and the economic policy and governance challenges that may arise in an age of transformative AI. We will aim for a conversational format between Professor Korinek and Professor Stiglitz. I also want to encourage all audience members to type your questions using the box below. We can’t promise that [your questions] will be answered but we will see them and try to integrate them into the conversation. With that, Anton and Joe, we look forward to learning from you and the floor is yours.

Anton Korinek [3:09]

Thank you so much, Joslyn, for the kind introduction. Inequality has been growing for decades now and has been further exacerbated by the K-shaped recovery from COVID-19. In some ways, this has catapulted the question of how we can engineer a fairer economy and society to the top of the policy agenda all around the world. As Joslyn has emphasised, what is of particular concern for us at the Centre for the Governance of AI is that modern technologies, and to a growing extent artificial intelligence, are often said to play a central role in increasing inequality. There are concerns that future advances in AI may in fact further turbo-charge inequality.

I’m extremely pleased and honoured that Joe Stiglitz is joining us for today’s GovAI webinar to discuss AI and inequality with us. Joe has made some of the most pathbreaking contributions to economics in the 20th century. As we have already heard, his work was recognised by the Nobel Prize in Economics in 2001. I should say that he has also been the formative intellectual force behind my education as an economist. What I have always really admired in Joe — and I still admire every time we interact — is that he combines a razor-sharp intellect with a big heart, and that he is always optimistic about the ability of ideas to improve the world.

We will start this webinar with a broader conversation on emerging technologies and inequality. Over the course of the webinar, we will move more and more towards AI and ultimately the potential for transformative AI to reshape our economy and our society.

Let me welcome you again, Joe. Let’s start with the following question: Can you explain what we mean by inequality? What are the dimensions of inequality that we should be most concerned about?

Joseph Stiglitz [5:33]

[Inequality is the] disparities in the circumstances of individuals. One is always going to have some disparities, but not of the magnitude and not of the multiplicity of dimensions [that we see today]. When economists talk about inequality, they first talk about inequalities of income, wealth, labour income, and other sources of income. [These inequalities] have grown enormously over the last 40 years. In the mid-1950s, Simon Kuznets, a great economist who got a Nobel Prize, had thought that in the early stages of development, inequality would increase, but then [in later stages of development], inequality would decrease. And the historical record was not inconsistent with that [model] at the time he was writing. But then beginning in the mid-1970s and the beginning of the 1980s, [inequality] started to soar. [Inequality] has continued to increase until today, and the pandemic’s K-shaped recovery has exposed and exacerbated this inequality.

Now beyond that, there are many other dimensions of inequality, like access to health[care], especially in countries like the United States [without] a national health service. As a result, the US has the largest disparities in health among advanced countries and, even before 2019, had seen declining life expectancy and overall health standards. There are disparities in access to justice and other dimensions that make for a decent life. One of the concerns that has been highlighted in the last year is the extent to which those disparities are associated with race and gender. That has given rise to the huge [movement], “Black Lives Matter.” [This movement] has reminded us of things that we knew, but were not always conscious of, [including] the tremendous inequalities across different groups in our society.

Anton Korinek [8:23]

Thank you. Can you tell us about what motivated you personally to dedicate so much of your work to inequality in recent decades? I’ve heard you speak of your experience growing up in Gary, Indiana. I have heard a lot about your role as a policymaker, as a chair of the President’s Council of Economic Advisors, and as a chief economist of the World Bank in the 1990s. How has all of this shaped your thinking on inequality?

Joseph Stiglitz [8:55]

I grew up, as you said, in Gary, Indiana, which was emblematic of industrial America, though of course I didn’t realise that as I was growing up. [In Gary], I looked at my surroundings, and I saw enormous inequalities in income and across races; [I saw] discrimination. That was really hard to reconcile with what I was being taught about the American Dream: that everybody has the same opportunity and that all people are created equal. All those things that we were told about America, which I believed on one level, seemed inconsistent with what [I saw].

That was why I had planned [to study economics]. Maybe it seems strange, but I had wanted to be a theoretical physicist. [But with all] the problems that I had seen growing up around inequality, suddenly, at the end of my third year in college, I wanted to devote my life to understanding and doing something about inequality. I entered economics with that very much on my mind, and I wrote my thesis on inequality. But life takes its turn, [so I spent] much of the time [from then until] about 10 years ago on issues of imperfect information and imperfect markets. This was related, in some sense, to inequalities because the inequalities in access to information were very much at the core of some of the inequalities in our society. [For example,] inequalities in education played a very important role in the perpetuation of inequalities. So, the two were not [part of] a totally disparate agenda.

From the very beginning, I also spent a lot of time thinking about development, which interacted with my other work on theoretical economics. It may seem strange, but I did go to Africa in 1969: [I went to] Kenya not long after it got its independence. I’m almost proud to say that some people in Africa claim me to be the first African Nobel Prize winner: [Africa] had such an important role in shaping my own research. That strand of thinking about inequality between the developing countries and the developed countries was also very important [to my understanding of inequality].

Finally, to answer your question, when I was in the Clinton administration, we had a lot of, you might say, fights about inequality. Everybody was concerned about inequality, but some were more concerned than others. Some wanted to put it at the top of the agenda, and [others] said, “We should worry about it, but we don’t have the money to deal with it.” It was a question of prioritisation. On one side Bob Reich, who was the Secretary of Labour, and I were very much concerned about this inequality. We were concerned about corporate welfare: giving benefits to rich corporations meant that we had less money to help those who really needed it. Our war against corporate welfare actually led to huge internal conflicts between us and some of the more corporatist or financial members of the Clinton team.

Anton Korinek [13:53]

That brings us perhaps directly to a more philosophical question. What would you say is the ethical case for [being concerned with] inequality? In particular, why should we care about inequality in itself and not just about absolute levels of income and wealth?

Joseph Stiglitz [14:17]

The latter [question] you can answer more easily from an economic point of view. There is now a considerable body of theory and empirical evidence that societies that are marked by large disparities and large inequalities behave differently and overall perform more poorly than societies with fewer inequalities. Your own work has highlighted the term “macroeconomic externalities,” which [describes when a system’s functioning] is adversely affected by the presence of inequality. An example, for instance, is that when there are a lot of inequalities, those at the bottom engage in “keeping up with the Joneses,” as we say, and that leads them to be more in debt. That higher level of debt introduces a kind of financial fragility to the economy which makes it more prone to economic downturns.

There are a number of other channels through which economic inequality adversely affects macroeconomic performance. The argument can be made that even those at the top can be worse off if there’s too much inequality. I reflected this view in my book, The Price of Inequality, where I said that our society and our economy pay a high price for inequality. This view has moved into the mainstream, which is why the IMF has put concerns about inequality [at] the fore of their agenda. And as Strauss-Kahn, who was the Managing Director of the IMF at the time, said, [inequality] is an issue of concern to the IMF because the IMF is concerned about macroeconomic stability and growth, and the evidence is overwhelming that [inequality] does affect macroeconomic performance and growth.

[There is a] moral issue which economists are perhaps less well-qualified to talk about rigorously. Economists and philosophers have used utilitarian models and equality-preferring social welfare functions. [These models build on] a whole literature of [philosophy], of which Rawls is an example. [Rawls] provides a philosophical basis [for] why, behind the veil of ignorance, you would prefer to be born into a society with greater equality.

Anton Korinek [17:40]

So that means there is both a moral and an economic efficiency reason to engage in measures that mitigate inequality. Now, this brings us to a broader debate: what are the drivers of inequality? Is inequality driven by technology, or by institutions [and] policies, broadly defined? There is a neoclassical caricature of the free market as the natural state of the world. In this caricatured description of the world, everything is driven by technology, and technology may naturally give rise to inequality, and everything we would do [to mitigate inequality] would be bad for economic efficiency. Can you explain the interplay of technology and institutions more broadly and tell us what is wrong with this caricature?

Joseph Stiglitz [18:46]

[To put it in another way:] is inequality the result of the laws of nature, or the laws of man? And I’m very much of the view that [inequality] is a result, overwhelmingly, of the laws of men and our institutions. One way of thinking about this, which I think [provides] compelling evidence for my perspective, is that the laws of nature are universal: globalization and [technological advancement] apply to every country. Yet, in different countries, we see markedly different levels of inequality in market incomes and even more so in after-tax and transfer incomes.

It is clear that countries that should be relatively similar have been shaped in different ways by the laws. What are some of those laws? Well, some of them are pretty obvious. If you have labour laws that undermine the ability of workers to engage in collective bargaining, workers are going to get shortchanged; they’re not going to be treated well. You see that in the United States: one of the main [reasons for] the weakening of the share of labour in the United States is, I believe, the weakening of labour laws and the power to unionise.

At the other extreme, more corporate market power [allows companies to raise prices, which] is equivalent to lowering wages, because [people] care about what [they] can purchase. The proceeds of [higher prices] go to those who own the monopolies, who are disproportionately those at the top. During Covid-19 we saw Jeff Bezos do a fantastic job of making billions of dollars while the bottom 40% of Americans suffered a great deal. The laws governing antitrust competition policy are critical.

But actually, a host of other details and institutional arrangements that we sometimes don’t notice [drive inequality]. United States [policy] illustrates that we do things so much worse than other countries. Bankruptcy laws, which deal with what happens if a debtor can’t pay [back] all of the money, give first priority [to banks]. In the United States, the first claimant is the banks, who sell derivatives – those risky products that led to the financial crisis of 2008. On the other hand, if you borrow money, to get ahead in life or to finance education, you cannot discharge your debt. So, students are at the bottom, and banks are at the top. So that’s another example [of how laws drive inequality].

Corporate governance laws, that give the CEOs enormous scope for setting their salaries in any way they want, result, in the United States, in the CEOs getting 300 times the compensation of average workers. That’s another example [of how laws create inequality].

But there are a whole host of things that we often don’t even think of as institutions, but [they] really are. When we make public investments in infrastructure, do we provide for public transportation systems, which are very important for poor people? When we have public transportation systems, do we connect poor people with jobs? In Washington D.C. they made a deliberate effort not to do that. When we’re running monetary policy, are we focusing on making sure that there’s [as] close to full employment as possible, which increases workers’ bargaining power? Or do we focus on inflation, which might be bad for bondholders?

Monetary policy, in the aftermath of the 2008 crisis, led to unprecedented wealth inequality, but didn’t succeed very well in creating jobs. 91% of the gains that occurred in the first three years of that recovery went to the top 1% in the United States. So, [inequality stems from] an amalgam of an enormous number of decisions.

Now, even when [considering] the issue of technology, we forget that [it is] man-made to a large extent — [it is] not like the laws of quantum mechanics! Technology [itself], and where we direct our attention [within technology], is man-made, and the extent to which we make access to technology available to all is our decision. Whether we steer technology to save the planet, or to save unskilled jobs, we can determine whether we’re going to have a high level of unemployment of low-skilled people or whether we’re going to have a healthier planet. [We witnessed] fantastic success in quickly developing COVID-19 vaccines. But now the big debate is, should those vaccines be available only to rich countries? Or should we waive the intellectual property rights in order to allow poor countries to produce these vaccines? That’s an issue being discussed right now at the WTO. Unfortunately, although a hundred countries want a waiver, the US and a few European countries say “no”. We put the profits of our drug companies over [peoples’] lives, not only over [lives] in developing countries, but possibly over the lives of people in our own country. As long as the disease rages [in developing countries], a mutation may come that is vaccine resistant, and our own lives are at risk. It’s very clear that this is a battle between institutions, and that right now, unfortunately, drug companies are winning.

Anton Korinek [26:04]

It’s a battle of institutions within the realm of a new technology.

If we now turn to another new technology, AI, you hear a lot of concern about AI increasing inequality. What are the potential channels that you see that we should be concerned about? To what extent could AI be different from other new technologies when it comes to [AI’s] impact on inequality?

Joseph Stiglitz [26:36]

AI is often lumped together with other kinds of innovations. People look historically, and they say “Look, innovations are always going to be disturbing, but over the long run, ordinary people gain.” [For example,] the makers of buggy whips lost out when automobiles came along, but the number of new jobs created in auto repair far exceeded the old jobs, and overall, workers were better off. In fact, [automobiles] created the wonderful middle class era of the mid-20th century.

I think this time may be different. There’s every reason to believe that it is different. First, these new technologies are labour replacing and labour saving, rather than increasing the productivity of labour. And so [these technologies are] substituting for labour, which drives down wages. There’s no a priori theory that says that an innovation [must] be of one form or the other. Historically, [innovations] were labour augmenting and labour enhancing; [historically, innovations] were intelligence-assisting innovations, rather than labour-replacing. But the evidence now is that [new innovations] may be more labour replacing. Secondly, the new technologies have a winner-take-all characteristic associated with them: [these new technologies] have augmented the potential of monopoly power. Both characteristics mean there will be a less competitive market and greater inequality resulting from this increased market power, and almost everybody may lose.

In the case of developing countries, the problems are even more severe for two reasons. The first [reason] is that the strategy that has worked so well to close the gap between developing and developed countries, which was manufacturing export-led growth, may be coming to an end. Globally, employment in manufacturing is declining. Even if all the jobs in manufacturing shifted, say, from China to Africa, [this shift] would [hardly] increase the labour force in Africa. I and some others have been trying to understand: why was manufacturing export-led growth so successful? And what [strategies] can African countries [employ] today if [manufacturing export-led growth] doesn’t work? Are there other strategies that will [be effective]? The conclusion is that there are other things that work, but they’re going to be much more difficult [to implement]. And there won’t likely be the kind of success that East Asia had beginning 50 years ago.

The second point [concerns] inequalities that occur within our country [as a result of AI]. [For example,] when Jeff Bezos [becomes] richer or Bill Gates [becomes] richer, we always have the potential to tax these gainers and redistribute some of their gains to the losers. The result, [which] you and I wrote about in one of our papers, shows that in a wide class of cases we can make sure that everybody could be better off [via redistributive taxation]. While [implementation is] a matter of politics, at least in principle, everybody could be made better off. However, [AI] innovations across countries [drive down] the value of unskilled labour and certain natural resources, which are the main assets of many developing countries. [Therefore, developing countries are] going to be worse off. Our international arrangements for redistribution are [very limited]. In fact, our trade agreements, our tax provisions, and our international [arrangements] work to the disadvantage of developing countries. We don’t have the instruments to engage in redistribution, and the current instruments actually disfavour developing countries.

Anton Korinek [32:44]

Let me turn to a longer-term question now. Many technologists predict that AI will have the potential to be really transformative if it reaches the ability to perform substantially everything that human workers can do. This [degree of capacity] is sometimes labelled as “transformative AI,” though people have also [described] closely-related concepts like Artificial General Intelligence and human-level machine intelligence. There are quite a few AI experts who predict that such transformative advances in AI may happen within the next few decades. This could lead to a revolution that is of similar magnitude to or greater magnitude than the agrarian or industrial revolution, which could make all human labour redundant. This would make human labour, in economic speak, a “dominated technology.”

[When we consider inequality,] the dilemma is that in our present world labour is the main source of income. Are you willing to speculate, as a social scientist, and not as a technologist, [about] the likelihood and timeframe of transformative AI happening? What do you see as the main reasons why it may not be happening soon? [Alternatively,] what would be the main arguments in favour of transformative AI happening soon? And how should we think about the potential impacts of transformative AI, from your perspective?

Joseph Stiglitz [34:36]

There is a famous quip by Yogi Berra, who is viewed as one of the great thinkers in America. I’m not sure everybody in the UK knows about him. He was a famous baseball player who had simple perspectives on life and one of them was “forecasting is really difficult, especially about the future.”

The point is that we don’t know. But we certainly could contemplate this happening, and we ought to think about that possibility. So as social scientists, we ought to be thinking about all the possible contingencies, but obviously devote more of our work to those [scenarios] that are going to be most stressful for our society. Now, you don’t think that people should train to be a doctor to deal just with colds. You want your doctor to be able to respond to serious maladies.  I don’t want to call [transformative AI] a malady – it could be a great thing. But it would certainly be a transformative moment that would put very large stresses on our economic, social [and] political system.

The important point is that […] these advances in technologies make our society as a whole wealthier. These [advances] move out what we could do, and in principle, everyone could be made better off. So the question is: can we undertake the social, economic, [and] political arrangements to ensure that everyone, or at least a vast majority, will be made better off [by advances in AI]? When we engage in this sort of speculative reasoning, one could also imagine [a world in which] a few people [are] controlling these technologies, and that our society [may be] entering into a new era of unprecedented inequality – with a few people having all the wealth, and everybody else just struggling to get along and [effectively] becoming serfs. This would be a new kind of serfdom, a 21st century or 22nd century serfdom that is different from that of 13th and 12th century [serfdom]. For the vast majority [of people, this serfdom would not be] a good thing.

Anton Korinek [37:59]

For the sake of argument, let’s take it as a given that this type of transformative AI will arrive by, say, 2100. What would you expect to be the effects of [transformative AI] on economic growth, on the labour share, and in particular, on inequality? What would be the [impact] on inequality in non-pecuniary, non-monetary terms?

Joseph Stiglitz [38:36]

The effect [of transformative AI] on inequality, income, wealth, and monetary aspects will depend critically on the institutions that we described earlier in two key [ways]. If we move beyond hoarding knowledge via patents and other means, and gain wide[spread] and meaningful access to intellectual property, then competition can lower prices and the benefits of [transformative AI] can be widely shared.

This was what we experienced in the 19th and 20th century [during the Industrial Revolution]. Eventually, when competition got ideas out into the marketplace, profits eroded. While the earlier years of the [Industrial] Revolution were not great for ordinary workers, eventually, [ordinary workers] did benefit and competition [served to ensure] that the benefit of the technological advances were widely shared. There is a concern about whether our legal and institutional framework can ensure that that will happen with artificial intelligence. That’s one aspect of our institutional structure.

Even if we fail to do the right thing in that area, we have another set of instruments, which are redistributive taxes. We could tax multibillionaires like Jeff Bezos or Bill Gates. From the point of view of incentives, most economists would agree that if multibillionaires were rewarded with 16 billion dollars, rather than 160 billion [dollars], they would probably still work hard. They probably wouldn’t say “I’m going to take my marbles and not play with you anymore.” They are creative people who want to be at the top, but you can be at the top with 16 [billion dollars], rather than 160 billion [dollars]. You take that [extra tax revenue] and use it [for] more shared prosperity. Then, obviously, the nature of our society would be markedly different.

If we think more broadly, right now, President Biden is talking a lot about the “caring economy.” Jobs are being created in education, health, care for the aged, [and] care for the sick. Wages in those jobs are relatively low, because of the legacy of discrimination against women and people of colour who have [worked] in these areas. Our society has been willing to take advantage of that history of discrimination and pay [these workers] low wages. Now, we might say, why do that? Why not let the wages reflect our value of how important it is to care for these parts of our society? [We can] tax the very top, and use that [tax revenue] to create new jobs that are decently paid, [which would create] a very different outcome [for the economy]. I think, optimistically, this new era could create shared prosperity. There would still be some inequality, but not the nightmare scenario of the new serfdom that I talked about before.

Anton Korinek [42:49]

Let’s turn to economic policy. You have already foreshadowed a number of interesting points on this theme. But let’s talk about economic policy to combat inequality more generally. People often refer to redistribution and pre-distribution as the main categories of economic policy to combat inequality. Can you explain what these two [policy categories] mean? What are the main instruments of redistribution and of pre-distribution? And how do [these policies] relate to our discussion on inequality?

Joseph Stiglitz [43:37]

Pre-distribution [looks at] the factors that determine the distribution of market income. If we create a more equal distribution of market income, then we have less burden on redistribution to create a fair society. There are two factors that go into the market distribution of income. [The first factor] is the distribution, or the ownership, of assets. [The second factor] is how much you pay each of those assets. For instance, if you have a lot of market power, and weak labour power, you [end] up with capital getting a high return relative to workers and [high] monopoly profits relative to workers’ [incomes] — that’s an example of the exercise of market power leading to greater inequality. The progressive agenda in the United States emphasises increasing the power of unions and curbing the power of big tech giants to create factor prices that are conducive to more market equality.

We can [also consider] the ownership of two types of assets: human capital and financial capital. The general issue here is: how do we prevent the intergenerational transmission of advantage and disadvantage? Throughout the ages, there have always been parents who want to help their children, which is not an issue. [Rather, the issue is] the magnitude of that [helping]. In the United States, for instance, we have an education system which is locally-based. We have more and more economic segregation, which means that rich people live with rich [people] and poor [people] with poor [people.] If schools in [rich] neighbourhoods give kids a really good education and conversely [in poor neighbourhoods, then even] public education perpetuates inequality.

The most important provisions affecting the intergenerational transmission of financial wealth are inheritance tax and capital taxation. Under Trump, [Congress] eviscerated the inheritance taxes. So the question is how to [reinstate these taxes] to create a more equal market distribution, called pre-distribution.

Anton Korinek [47:34]

[You began to address] taxation in the context of estate taxation. For the non-economists in the room, I should emphasise that among the many contributions that Joe has made to economics is a 1980 textbook with Tony Atkinson that is frequently referred to as the “Bible of Public Finance,” which lays out the basic theory of taxation and still underlies basically all theoretical economic work on taxes.

In recent decades, the main focus of this debate has been on taxing labour versus capital. A lot of economists argue that we should not tax capital, because it’s self-defeating: [taxation of capital] will just discourage the accumulation of capital and ultimately hurt workers. My question to you is: do you agree? [If not,] what is wrong with this standard argument?

Joseph Stiglitz [48:43]

It is an argument that one has to take seriously: that a tax on capital could lead to less capital accumulation, which would in turn lead to lower wages, and even if the proceeds of the tax were redistributed to workers, workers could be worse off. You can write down theoretical models in which that happens. The problem is that this is not the world we live in. In fact, [there are] other instruments at [our] disposal. For instance, as the government taxes [private] capital, [the government] can invest in public capital, education, and infrastructure. [These investments lead to an increase in] wages. Workers can be doubly benefited: not only [do workers benefit] from direct redistribution, but [they also benefit from a greater] equality of market income caused by allocating capital to education and infrastructure.
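The textbook mechanism being taken seriously here can be written down in one line. In a standard neoclassical sketch (an illustrative assumption, not a model endorsed in the interview), output is $Y = K^{\alpha} L^{1-\alpha}$ and the competitive wage equals the marginal product of labour:

```latex
w \;=\; \frac{\partial Y}{\partial L} \;=\; (1-\alpha)\left(\frac{K}{L}\right)^{\alpha}
```

So anything that shrinks the private capital stock $K$ mechanically lowers $w$; the counterargument in the text is that the tax proceeds, invested in public capital, education, and infrastructure, raise effective capital per worker instead of lowering it.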

Many earlier theories were predicated on the assumption that we were able to tax away all rents and all pure profit. We know that’s not true: the corporate profit tax rate is now 21% in the United States, and the amount of wealth that the people at the top are accumulating [provides evidence that] we are not taxing away all pure profits. Taxing away [these pure profits] would not lead to less capital accumulation, [but instead] could lead to more capital accumulation.

[Let’s] look broadly at the nature of capitalism in the late 20th and early 21st century. We used to talk about the financial sector intermediating, which meant [connecting] households and firms by bringing [households’] savings into corporations. [This process] helped savings and helped capital accumulation. [However,] evidence is that over the last 30 or 40 years, the financial sector has been disintermediating. The financial sector, [rather than] investing monopoly profits, has been redistributing [these profits] to the very wealthy, [to facilitate] the wealthy’s consumption or increase the value of their assets, [including their international assets], and their land. [Ultimately], this simple model [of financial intermediation] doesn’t describe [late] 20th and [early] 21st century capitalism.

Anton Korinek [52:17]

Should we think of AI as [the same kind of] capital described in theories of capital taxation in economics, or is AI somehow inherently different? Should we impose what Bill Gates calls a “robot tax” [on AI]?

Joseph Stiglitz [52:36]

That’s a really good question. [If we had had more time, I would have] distinguished between intangible capital, like R&D, and [tangible capital, like] buildings and equipment. 21st century capital is mostly intangible capital, which is the result of investment in R&D. [Intangible capital] is more productive in many ways than buildings, and so in that sense, it is real capital, and is [well-described by the word] “intangible.” [Intangible capital is also the] result of investment: people make decisions to hire workers to think about [certain] issues, or individuals decide themselves to think about these issues, when [employers or individuals otherwise] could have done something else. [In this way, intangible capital] is capital: it requires resources, which could have been put to other uses, [and these alternative uses are foregone] for future-oriented returns.

The question is: is this [intangible] capital getting excess returns? Are there social consequences of those investments, that [the investors] don’t take into account? We call [these social consequences] externalities. People who invest in coal-fired power plants may make a lot of money, but [their investment] destroys the planet. If we don’t tax carbon, then society — rather than the investor — bears these costs. Gates’s robot tax is based on the same [concept]. If we replace workers, and [these workers] go on the unemployment roll, then we as a society bear the cost of [these workers’] unemployment. [Gates argues that] we ought to think about those costs, [though] how we balance the tax and appropriate its excess returns is another matter. Clearly, [the robot tax] is an example of steering innovation. You and I, [in our research,] have [also argued that we must] steer innovation to save the planet [rather than] create more unemployment.

Anton Korinek [55:32]

How would you recommend that we should reform our present system of taxation to be ready for not only [our present time in the] 21st century but also for a future in which human labour plays less of a role? How should we tax to make sure that we can still support an equitable society?

Joseph Stiglitz [56:02]

Let me first emphasise that not just taxation, but also investment, is important. [Much of the economy’s direction is determined by] the basic research decisions of the National Science Foundation and science foundations in other countries. [These decisions inform which] technologies are accessible to those in the private sector. Monetary policy [is also important]. We don’t think the central bank [affects] innovation, but it actually does. [At a] zero interest rate, the cost of capital is going to be low relative to the cost of labour, [which will] encourage investors to think about saving labour rather than saving capital. So monetary policy is partly to blame for distortions in the direction of innovation. The most important thing is to be sensitive to how every aspect of policy, including tax policy, shapes our innovative efforts and [directs where we] devote our research. Are we devoting our research to saving unskilled labour or to augmenting the power of labour? We talked before about intelligence-assisting innovations like microscopes and telescopes which make us more productive as human beings. We can replace labour, or we can make labour more productive. [While this distinction can be] hard to specify, it’s very clear that we have tools to think about these various forms of innovation.

Anton Korinek [58:16]

On the expenditure side, one policy solution that a lot of technologists are big fans of is a universal basic income. What is your perspective on a UBI: do you advocate it or do you believe there are other types of expenditure policy that are more desirable? Do you think [UBI] may be a good solution if we arrive at a far-future – or perhaps near-future – [scenario] in which labour is displaced?

Joseph Stiglitz [58:53]

I am quite against the UBI [being implemented] in the next 30 or 40 years. The reason is very simple: for the next 30 years, the major challenge of our society is the Green Transition, which will take a lot of resources and a lot of labour. Some people ask if we can afford it, and [I argue that] if we redirect our resources, labour, and capital [toward the Green Transition] then we can afford it. Ben Bernanke [describes] a surplus of capital and a savings glut. However, if [we look] at the challenges facing the world, [we understand Bernanke’s assertion] is nonsense. Our financial system isn’t [developing] the [solutions] our society needs [like] the Green Transition.

I also see deficiencies in infrastructure and in education in so many parts of the world. I see a huge need for investments over the next 30 to 40 years such that everybody who wants a job will be fully employed. It is our responsibility [to ensure] that everybody who wants a job should be able to get one. We must have policies to make sure that [workers] are decently paid. This should be our objective now.

[If] in the far-future [we don’t need] labour, we have the infrastructure that we need, we’ve made the Green Transition, and we have wonderful robots that produce other robots and all of the goods, food, and services that we need, then we will have to consider the UBI. We would [then] be engaged in a discussion of what makes life meaningful. While work has been part of that story of meaningfulness, there are ways of serving other people that don’t have to be monetised and can be very meaningful. While I’m willing to speculate about [this scenario,] it’s a long way off, and [is] well after my time here on this earth.

Anton Korinek [1:01:46]

Would you be willing to revise your timelines if progress in AI occurs faster than what we are currently anticipating?

Joseph Stiglitz [1:01:59]

I cannot see [a scenario where we] have excess labour and capital [over] the next 30 or 40 years, even if [AI] proceeds very rapidly, given the needs that we have in public investment and the Green Transition. We could have miracles, but if that happens, we would face the emergency of this unintended manna from heaven, and we would step up to it.

Anton Korinek [1:02:51]

We are already [nearing] the end of our time. Let me ask you one more question, and then I would like to bring in a few questions posed by the audience. My question is: what are the other dimensions of AI that matter for inequality, independent of purely economic [considerations]? What is your perspective [on these dimensions of inequality] and how we can combat them?

Joseph Stiglitz

We’ve talked about meaning in life and meaningful work. If AI takes away work, we will have to find meaning in other places. In the shorter term, AI will take away routine jobs, which will mean that we as a society will be able to devote more labour to non-routine jobs. This should open up possibilities [for people to be] more creative. Many people [have] thought the flourishing of our society is based on creativity. It would be great for our society if we could devote more of our talents to doing non-routine, creative things.

Anton Korinek

The audience had a question about workplace surveillance, which is one element of [AI] that could potentially greatly reduce the well-being of workers. What are your thoughts on [workplace surveillance]?

Joseph Stiglitz [1:05:06]

I agree [that AI could reduce the well-being of workers]. There are many [adverse effects of AI] we haven’t talked about. We are in an early stage [of AI policy], and our inadequate regulation allows for a whole set of societal harms from AI. Surveillance is one [example of these harms]. Economists talk about corporations’ ability to acquire information in order to appropriate consumer surplus for themselves, or in other words, to engage in discriminatory pricing. Anybody who wants to buy an airline ticket knows what I’m talking about: firms are able to judge whether you really want to [fly] or not. Companies are using AI now to charge different prices for different people by judging how much [each consumer] wants a good. The basis for market efficiency is that everybody faces the same price. In a new world, where Amazon — or the internet — uses AI, everybody [faces] a different price. This discrimination is very invidious: it has a racial, gender, and vocational component.

Information targeting has other adverse [implications], like manipulation. [AI] can sense if somebody has a predilection to be a gambler and can encourage those worst attributes by getting [the person] to gamble. [AI] can target misinformation at somebody who is more likely to be anti-vax and give [them] the information to reinforce that [belief]. [AI] has already been used for political manipulation, and political manipulation is really important because [it impacts] institutions. The institutions — the rules of the game — are set by a political process, so if you can manipulate that political process, you can manipulate our whole economic system. In the absence of guardrails, good rules, and regulations, AI can be extraordinarily dangerous for our society.

Anton Korinek [1:08:25]

That relates closely to another question from the audience: do you think there is a self-correcting force within democracy against high inequality and in particular against the inequality that AI may lead to?

Joseph Stiglitz [1:08:47]

I wish I felt convinced that there were a self-correcting force. [Instead], I see a force that [works] in the [opposite] direction. This [perception] may be [informed] by my experience as an American: [in the US], a high level of inequality [causes] distortions and [gives] money a role in the political system. This has changed the rules in the political and economic system. Money’s [increasing] power in both the political system and the economic system has reinforced the creation of that kind of plutocracy that I talked about [earlier].

[The changes] we’ve seen in the last few years in the United States are shocking, but in some ways are what I predicted in my 2012 book The Price of Inequality. The Republican Party has openly said, “We don’t believe in democracy. We want to suppress voters and their right to vote. [We want to] make it more difficult for them to vote.” [They’ve said this without] any evidence of voter fraud. It’s almost blatant voter suppression. In some sense, this [scenario] is what Nancy MacLean [described] in her book Democracy in Chains, though it has come faster [than she predicted].

I’ve become concerned that what many had hoped would be a self-correcting mechanism isn’t working. We hope we are at a moment when we can turn back the tide. As more and more Americans see the extremes of inequality, they will turn to vote before it’s too late, before they lose the right to vote. This will be a watershed moment in which we will go in a different direction. I feel we’re at the precipice, and while I’m willing to bet that we’re going to go the right way, I would give [this path] just over 50% odds.

Anton Korinek [1:11:37]

I think, fortunately, Joe and all his work on the topic are part of the self-correcting force.

The top question in terms of Q&A box votes is whether AI will be a driver for long run convergence or divergence in global inequalities. Do you believe that current laggards, or poor countries, will be able to catch up with the front runners more easily or less easily [because of AI]?

Joseph Stiglitz [1:12:12]

I’m afraid that we may be at the end of the era of convergence that we saw over the last 50 years. There was widespread convergence in China and India, and though some countries in Africa did not converge, we broadly saw a convergence [occurring]. I think [that now] there is a great risk of divergence: AI is going to decrease the value of unskilled labour and many natural resources, which are the main assets of poor countries. There will be [complexity]: oil countries will find that oil is not worth as much if we make the Green Transition. A few countries, like Bolivia, that have large deposits of lithium are going to be better off, but that will be more the exception than the rule. Access to [AI] technology may be more restricted. A larger fraction of the research is [occurring] inside corporations. The model of innovation [used to be] that universities were at the centre, and [innovators received a] patent with a disclosure, which means that the information was public and others built on that [information]. However, AI [innovation] so far has been within companies that have hoarded information. [Companies can’t protect all information]: one non-obvious [path forward] is that [members of the public] could still access the underlying mathematical theorems that are in the public domain. While that’s an open possibility, I [still] worry that we will be seeing an era of divergence.

Anton Korinek [1:14:41]

Thank you so much, Joe for sharing your thoughts on AI and inequality with us. We are almost at time for our event. I am wondering if I may ask you a parting question that comes in two parts. What would be your message to, on the one hand, young AI engineers and, on the other hand, young social scientists and economists, who are beginning their careers and who are interested in contributing to make the world a better and more equitable place?

Joseph Stiglitz [1:15:30]

Engineers are working for companies, and a company consists of people. Talented people are the most important factors of production in these companies. In the end, the voice of these workers is very important. We [must] conduct ourselves in ways that mitigate the extent to which we contribute to increases in inequality. There are many people, understandably, within Facebook and other tech giants who are using all their talents to increase the profits of, say, Facebook, regardless of the social consequences and regardless of whether it results in a genocide in Myanmar. These things do not just happen, but rather are a result of the decisions that people make.

To give another example, I often go to conferences out in Silicon Valley. When we discuss these issues, they say, “there is no way we can determine if our algorithms engage in discrimination.” [However], the evidence overwhelmingly is that we can. While the algorithms are always changing, taking in new information, and evolving, at any moment in time we can assess precisely whether [algorithms] are engaging in discrimination. Now, there are groups that are trying — at great cost — to see who is getting [certain] ads. You can create sampling spaces to see how [ads] are working.

I think it is nihilistic to say [that gauging discrimination is] beyond our ability and that we have created a monster out of our control. These companies’ workers need to take a sense of responsibility, because the companies’ actions are a consequence of their workers’ actions. When working for these companies, one has to take a moral position and a responsibility for what the companies do. One can’t just say, “Oh, that’s other people that are doing this.” One has to take some responsibility.

For social scientists, I think this is a very exciting time because AI and new technologies are changing our society. They may even be changing who we are as individuals. There is a lot of discussion about what [new technologies] are doing to attention span and how we spend our time. [These technologies] have profound effects on the way that individuals interact with each other.

Of course, social science is about society and how we interact with each other. [It is about] how we act as individuals. [It is about] market power [and] how we curb that market power. The basic business model of many tech giants [relies on] information about individuals. Policy [determines] what we allow those corporations to do with our [personal] information and whether [these corporations] can store [our information] and use it for other purposes. It is clear that AI has opened up a whole new set of policy issues that we had not even begun to think about 20 years ago. My Nobel Prize was in the economics of information, but when I did my work, I had not thought about the issue of disinformation and misinformation. [At the time], we thought we had laws dealing with [misinformation], which [are called] fraud laws and libel laws. We put [misinformation] aside because we thought it was not a problem. Today, [misinformation] is a problem. I mention that because we are going to have to deal with a whole new set of problems that AI is presenting to our society.

Anton Korinek [1:21:46]

Thank you, Joe. Thank you for this really inspiring call to action. Let me invite everybody to give a round of virtual applause. Have a good rest of the day.




21May

Margaret Roberts & Jeffrey Ding on Censorship’s Implications for Artificial Intelligence


Molly Roberts is an Associate Professor in the Department of Political Science and the Halıcıoğlu Data Science Institute at the University of California, San Diego. She co-directs the China Data Lab at the 21st Century China Center. She is also part of the Omni-Methods Group. Her research interests lie in the intersection of political methodology and the politics of information, with a specific focus on methods of automated content analysis and the politics of censorship and propaganda in China.

Jeffrey Ding is the China lead for the AI Governance Research Group. Jeff researches China’s development of AI at the Future of Humanity Institute, University of Oxford. His work has been cited in the Washington Post, South China Morning Post, MIT Technology Review, Bloomberg News, Quartz, and other outlets. A fluent Mandarin speaker, he has worked at the U.S. Department of State and the Hong Kong Legislative Council. He is also reading for a D.Phil. in International Relations as a Rhodes Scholar at the University of Oxford.

You can watch a recording of the event here or read the transcript below.

Allan Dafoe  00:00

Welcome, I’m Allan Dafoe, the director of the Center for the Governance of AI, which is organizing this series. We are based at the Future of Humanity Institute at the University of Oxford. For those of you who don’t know about our work, we study the opportunities and challenges brought by advances in AI, so as to advise policy to maximize the benefits and minimize the risks from advanced AI. It’s worth clarifying that governance, this key term, refers descriptively to the ways that decisions are made about the development and deployment of AI, but also to the normative aspiration: that those decisions emerge from institutions that are effective, equitable, and legitimate. If you want to learn more about our work, you can go to governance.ai. I’m pleased today to welcome our speaker Molly Roberts, and our discussant Jeffrey Ding. Molly is an Associate Professor of Political Science at the University of California, San Diego. She’s a scholar of political methodology, the politics of information, and specifically the politics of censorship and propaganda in China. She has produced a number of fascinating papers, including some employing truly innovative experimental designs probing the logic of Chinese web censorship. Molly will present today some of her work, co-authored with Eddie Yang, on the relationship between AI and Chinese censorship. I was delighted to learn that Molly was turning her research attention to some issues in AI politics. After Molly’s presentation we will be joined by Jeffrey Ding in the role of discussant. Jeff is a researcher at FHI, a DPhil student at Oxford, and a pre-doctoral fellow at CSAC at Stanford. I’ve worked with Jeffrey for the past three years now, and during that time have seen him really flourish into one of the premier scholars on China’s AI ecosystem and politics. So now Molly, the floor is yours.

Molly Roberts  01:57

Thanks, Allan, and thanks so much for having me. I’m really excited to hear Jeffrey’s thoughts on this, since I’m a follower of his newsletter and also of his work on AI in China. This is a new project to try to understand the relationship between censorship and artificial intelligence, and I see it as the beginning of a larger body of work on that relationship, so I’m really looking forward to this discussion. This is joint work with Eddie Yang, who’s also at UC San Diego. You might have heard, probably on this webinar series, that a lot of people think that data is the new oil: data is the input to a lot of products. It can be used to make financial predictions that can then be used to trade stocks or to predict the future of investments. At the same time that data might be the new oil, we also worry a little bit about the quality of this data. How good is the data that’s inputted into the products and applications that we’re now using a lot in our AI world? There’s a really interesting new literature in AI about politics and bias within artificial intelligence. The idea behind this is that the huge data that powers AI applications is affected by human biases that are then encoded in that training data, which then impacts the algorithms used within user-facing interfaces or products, which replicate or enhance that bias. There’s been a lot of great work looking at how racial and gender biases can be encoded within these training datasets that then are put into these algorithms and user-facing platforms. For example, Latanya Sweeney has some great work on ad delivery, and there’s also been some great work on speech recognition, word embeddings, and image labeling.

Sweeney, Latanya. “Discrimination in online ad delivery.” Communications of the ACM 56.5 (2013): 44-54.

Koenecke, Allison, Andrew Nam, Emily Lake, Joe Nudell, Minnie Quartey, Zion Mengesha, Connor Toups, John R. Rickford, Dan Jurafsky, and Sharad Goel. “Racial disparities in automated speech recognition.” Proceedings of the National Academy of Sciences 117, no. 14 (2020): 7684-7689.

Davidson, Thomas, Debasmita Bhattacharya, and Ingmar Weber. “Racial Bias in Hate Speech and Abusive Language Detection Datasets.” Proceedings of the Third Workshop on Abusive Language Online. 2019.

Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. “Semantics derived automatically from language corpora contain human-like biases.” Science 356.6334 (2017): 183-186.

Zhao, Jieyu, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. “Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints.” In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2017.

Li, Shen, Zhe Zhao, Renfen Hu, Wensi Li, Tao Liu, and Xiaoyong Du. “Analogical Reasoning on Chinese Morphological and Semantic Relations.” In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 138-143. 2018.

So in this talk, we’re going to explore another institution that impacts AI, which is censorship. Censorship impacts the training data, which then impacts NLP models and the applications that use them. Instead of looking at institutional or human biases that might impact training data, here we’re going to look at how censorship policies on behalf of governments impact training data, and how this might have a downstream impact on applications. We know that large user-generated datasets are the building blocks for AI. This could be anything from Wikipedia corpuses to social media datasets to government-curated data: more and more data is being put online, and it is being used in downstream AI applications. But we also know that governments around the world influence these datasets, and have political incentives to influence these datasets, which are then used downstream. They can influence these datasets through fear: threats or laws that create self-censorship, so that people won’t put things on social media, or so that their activities are not reflected in government-curated data. They can influence these datasets through what I call friction: deletion or blocking of social media posts, preventing certain types of posts on Wikipedia, or preventing some sort of data from being uploaded to a government website, for example. And they can also influence these datasets through flooding, or the coordinated addition of information: think of coordinated internet armies or other types of government-organized groups trying to put information on Wikipedia or on social media to influence the information environment.

Molly Roberts  06:05

So this data is then used in other user-facing applications. Increasingly, AI is taking data available on the internet, through Common Crawl, through Wikipedia, through social media, and then using it as a base for algorithms in entertainment applications, productivity applications, algorithmic governance, and a lot of other downstream applications. So our question is: how does censorship, how does this government influence on these datasets, affect the politics of downstream applications? It could be that even if some of these applications are not in themselves political, because of this political censorship they could have some political implications. Deciding which corpus to use, for example, could have political implications for downstream applications. This paper looks particularly at censorship of Wikipedia corpuses. We study censorship of Chinese online encyclopedias, and we look at how these different online encyclopedias have different implications for Chinese language NLP (natural language processing). We use word embeddings trained on two Chinese online encyclopedia corpuses. These are trained by Li et al., in the same way, on the Baidu Baike encyclopedia corpus and on Chinese language Wikipedia. So we look at Chinese language Wikipedia, which is blocked within China but uncensored, and Baidu Baike, which is not blocked within China but has pre-publication censorship restrictions on it. We look at how using each of these different corpuses, which have different censorship controls, can have different implications for downstream applications.
We measure political word associations between these two corpuses, and we find that word embeddings (which I’ll go over in a second) trained on Baidu Baike associate more negative adjectives with democracy, in comparison to Chinese language Wikipedia, and have more positive associations with the CCP and other types of social control words. And we find with a survey that Baidu Baike word embeddings are not actually more reflective of the views of people within China; therefore, we don’t think that this is coming simply from people’s contributions to these platforms, but rather from censorship. We also identify a tangible effect of the decision to use word embeddings pre-trained on Baidu Baike versus word embeddings pre-trained on Chinese language Wikipedia in downstream NLP applications. And we’ll talk a little bit at the end about what strategic implications this might have for politics and AI. Some of you may be familiar with pre-trained word embeddings, but just by way of introduction in case you’re not: natural language processing algorithms, which are used on text, rely on numerical representations of text. We have to figure out how to represent text numerically in order to use it in downstream applications. Anything that is doing AI on social media data, Wikipedia data, encyclopedias, or predictive text is relying on a numerical representation of that text. One way to represent text that is common within the social sciences is to simply give each word a number, essentially, and ask: is this word included within this document or not? This is called the bag-of-words representation: a one or zero for whether or not a particular word appears within a text. But another way to represent text, which has become very popular in computer science and increasingly also in the social sciences, is to use word embeddings.
So the idea behind word embeddings is that they estimate a K-dimensional vector, often a 200- or 300-length vector, for each word within a huge dictionary of words. This vector encodes the similarity between words: words that are likely to be used as substitutes for each other, words that are often used in the same context, will be in more similar areas of this K-dimensional space than other words. Using pre-trained word embeddings, which already have K-dimensional vectors trained on a large corpus, allows an NLP application to know at the start how words might be similar to each other. So often these word embeddings are pre-trained on very large corpuses and then used as inputs in smaller NLP tasks: I already know that two words are more similar to each other than to another word, even before starting to train my model.
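To make the contrast concrete, here is a minimal sketch (with invented two-dimensional vectors; real pre-trained embeddings have hundreds of dimensions) of how the bag-of-words representation treats all distinct words as unrelated, while embeddings capture graded similarity:

```python
import math

def cosine(u, v):
    # Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Bag of words: each word is its own dimension (a one-hot vector).
vocab = ["talk", "lecture", "boston"]
def one_hot(word):
    return [1.0 if w == word else 0.0 for w in vocab]

# Invented 2-D "embeddings": words used in similar contexts get nearby vectors.
embedding = {
    "talk":    [0.90, 0.10],
    "lecture": [0.85, 0.20],  # substitutable with "talk", so placed close to it
    "boston":  [0.10, 0.95],  # used in different contexts, so placed far away
}

# Under bag of words, any two distinct words are orthogonal: similarity 0.
print(cosine(one_hot("talk"), one_hot("lecture")))      # 0.0

# Under embeddings, substitutable words are close, unrelated words are not.
print(cosine(embedding["talk"], embedding["lecture"]))  # ~0.99
print(cosine(embedding["talk"], embedding["boston"]))   # ~0.21
```

A model initialized with such pre-trained vectors "knows" before any task-specific training that talk and lecture are near-synonyms, which is exactly what the bag-of-words representation cannot express.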

Molly Roberts  11:13

So often, pre-trained word embeddings are made available by companies and by academics. So this is just one example, a screenshot from fastText: Facebook makes available a lot of different pre-trained word vectors; these ones are trained on Common Crawl and Wikipedia, so really large corpuses, in 157 languages. You can then go download them and use them as inputs into your NLP model. So here’s just an example, sort of to fix ideas about what word embeddings are doing. Say that I have two documents. This is from an IBM Research blog. Document x on the left here says "I gave a research talk in Boston" and document y on the right says "this is a data science lecture in Seattle." These actually don’t share any words. But if you have word embeddings as a representation of the text, you would know (so this is a very simple two-dimensional word embedding, but imagine a 300-dimensional space, right?) that actually these two documents are quite similar in content to each other. Because Seattle and Boston are often used in the same context as each other, research and science are similar, and talk and lecture are similar. So the place that these words sit within the space would be pre-trained on a large corpus, and then you could use this word embedding as an input, which would give you more information about those documents. So here we come to censorship of training data. Pre-trained word embeddings are often trained on very large data sets, like Wikipedia: because they’re user generated, they cover lots and lots of different topics, so we think that they’re sort of representative of how people talk about many different things. In China, or in the Chinese language, however, this is complicated by the fact that the Chinese government has blocked Chinese language Wikipedia.
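The two-document example can be sketched like this: represent each document as the average of its word vectors, and documents that share no words can still come out as similar. All vectors here are invented toy values in 2 dimensions, in the spirit of the IBM example:

```python
import numpy as np

# Toy 2-D embeddings: word pairs that appear in similar contexts get
# nearby vectors. All values are invented for illustration.
emb = {
    "research": np.array([0.9, 0.1]),  "science": np.array([0.85, 0.15]),
    "talk":     np.array([0.2, 0.9]),  "lecture": np.array([0.25, 0.85]),
    "boston":   np.array([0.6, 0.6]),  "seattle": np.array([0.65, 0.55]),
    "banana":   np.array([-0.9, 0.2]), "peel":    np.array([-0.85, 0.3]),
}

def doc_vector(words):
    """Represent a document as the average of its word vectors."""
    return np.mean([emb[w] for w in words if w in emb], axis=0)

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

doc_x = doc_vector(["research", "talk", "boston"])
doc_y = doc_vector(["science", "lecture", "seattle"])
doc_z = doc_vector(["banana", "peel"])

# doc_x and doc_y share no words, but their embedding averages are close;
# doc_z is far from both.
print(cos(doc_x, doc_y))  # high
print(cos(doc_x, doc_z))  # low
```

With real pre-trained vectors you would load them from a file (for example, a downloaded fastText model) instead of hand-writing the dictionary; averaging is just the simplest way to pool word vectors into a document representation.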
And therefore there’s also been the development of another encyclopedia corpus, Baidu Baike, which is unblocked within China but is censored. So both of these are online encyclopedias in China, they’re both commonly used as training data for NLP, and if you look at the CS literature, you’ll see both of them used as training data. Chinese language Wikipedia is uncensored, as I said, but it is blocked; and Baidu Baike is censored, in that there are a lot of regulations on what can be written on Baidu Baike, but it is unblocked, in that it is available within mainland China. So for example, if you want to create an entry on Baidu Baike about the June 4th movement, it automatically tells you that you cannot create this post. Also, there are a lot of regulations saying that political topics have to follow Chinese government official news sources, so there’s a lot of pre-censorship of these entries, unlike Chinese language Wikipedia, where you can contribute without pre-censorship. There’s been some great work by Zhang and Zhu, in 2011 in the American Economic Review, showing that the blocking of Wikipedia reduced contributions to it. So they show that when Chinese language Wikipedia was blocked, there were many, many fewer contributions to Chinese language Wikipedia, because there’s a decrease in the audience of the site. And, at least apparently because of this, Baidu Baike is many, many times larger than Chinese language Wikipedia, with 16 times more pages. And therefore it’s increasingly an attractive source of training data for Chinese language NLP.

Molly Roberts  14:49

So what we do in this paper is we compare word embeddings. We compare, essentially, where word vectors sit in embeddings pre-trained on Chinese language Wikipedia versus Baidu Baike. So I’ll just give you a really simple example of what this might look like. We have word embedding A, say this is Baidu Baike, with a few different word vectors trained on that corpus, and we have word embedding B, say this is Chinese language Wikipedia. What we’re interested in is where some target words, for example words like democracy, or other types of political words, sit in relation to positive and negative adjectives. So in this case: is democracy closer in word embedding space to stability, or is it closer to chaos? And we can compare where democracy sits between these two adjectives in Chinese language Wikipedia versus Baidu Baike. So what we do is we come up with groups of target words, and we have many different categories of target words; each category has about 100 different words associated with it. So we use democratic concepts and ideas, with categories such as democratic, democracy, freedom, and election; each of these categories has about 100 words within it that are sort of synonymous with it. And then we also use targets of propaganda: for example, social control, surveillance, collective action, political figures, the CCP, or other historical events; we also find lots of words associated with these categories. And then we look at how they are related to attribute words, like adjectives or evaluative words, which are these words in blue. So we use a list of propaganda attribute words, words that we know from readings and studies of propaganda are often associated with these concepts.
And we also use general evaluative words from big adjective evaluative word lists in Chinese that are often used in Chinese language NLP. So what we do is we take each target word vector x_i, where x_i is the vector of the target word from either Baidu or Wikipedia. And then we take the attribute word vectors, where A is the set of positive attribute vectors in either Baidu or Wikipedia, and B is the set of negative attribute vectors, again in either Baidu or Wikipedia. And then, for each embedding, for Baidu or for Wikipedia, we examine the mean cosine similarity between the target word and the positive attribute words, minus the mean cosine similarity between the target word and the negative attribute words. And then we take the difference of these differences, averaged across all of the target words within a category, to get how much closer positive words are overall to the target category in Baidu in comparison to Chinese language Wikipedia. So if Baidu is embedding A and Wikipedia is embedding B: if this quantity is very negative for one embedding, it means the target category is more associated with negative words there; if it is more positive, the target category is more associated with positive words. And if the difference between these is negative, it means that Baidu associates the category more negatively than Chinese language Wikipedia does. To assess statistical significance, we do a permutation test where we permute the assignment of the word vectors to A and B, and then we see how extreme our result is in comparison to that permutation distribution. So the theoretical expectation is that, overall, freedom, democracy, election, collective action, negative figures, all of these categories will be more associated with negative attribute words in Baidu Baike in comparison to Chinese language Wikipedia.
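The difference-of-differences measure can be sketched as follows. This is a toy reconstruction of the WEAT-style logic described above, not the paper's actual code: the embeddings, word lists, and helper names are all invented, and the permutation scheme shown (shuffling which embedding each per-target association came from) is one common way to run such a test:

```python
import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(x, A, B):
    """Mean cosine similarity of target vector x to the positive attribute
    vectors A, minus its mean similarity to the negative attribute vectors B."""
    return np.mean([cos(x, a) for a in A]) - np.mean([cos(x, b) for b in B])

def category_bias(emb, targets, pos_words, neg_words):
    """Average association for one target category within one embedding."""
    A = [emb[w] for w in pos_words]
    B = [emb[w] for w in neg_words]
    return np.mean([association(emb[t], A, B) for t in targets])

def permutation_p(emb1, emb2, targets, pos, neg, n=1000, seed=0):
    """Shuffle which embedding each per-target association came from and
    see how extreme the observed difference of means is."""
    rng = np.random.default_rng(seed)
    a1 = [association(emb1[t], [emb1[w] for w in pos], [emb1[w] for w in neg]) for t in targets]
    a2 = [association(emb2[t], [emb2[w] for w in pos], [emb2[w] for w in neg]) for t in targets]
    observed = np.mean(a1) - np.mean(a2)
    pooled, k, hits = np.array(a1 + a2), len(a1), 0
    for _ in range(n):
        rng.shuffle(pooled)
        if abs(np.mean(pooled[:k]) - np.mean(pooled[k:])) >= abs(observed):
            hits += 1
    return hits / n

# Toy embeddings: in "baidu" the democracy-related words sit near the
# negative adjectives; in "wiki" they sit near the positive ones.
attrs = {
    "stability": np.array([1.0, 0.1]), "prosperity": np.array([0.9, 0.2]),
    "chaos": np.array([-1.0, 0.1]),    "turmoil": np.array([-0.9, 0.2]),
}
baidu = {**attrs, "democracy": np.array([-0.7, 0.3]), "election": np.array([-0.6, 0.4])}
wiki  = {**attrs, "democracy": np.array([0.7, 0.3]),  "election": np.array([0.6, 0.4])}

targets = ["democracy", "election"]
pos, neg = ["stability", "prosperity"], ["chaos", "turmoil"]

# Negative => the category is more negatively associated in Baidu than in Wikipedia.
diff = category_bias(baidu, targets, pos, neg) - category_bias(wiki, targets, pos, neg)
print(diff, permutation_p(baidu, wiki, targets, pos, neg))
```

With only two target words the permutation p-value is not informative; in the real analysis each category has about 100 targets, which is what gives the test its power.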
On the other hand, categories like social control, surveillance, the CCP, historical events, and positive figures should be more positively associated. And this is exactly what we find. So here is the effect size for propaganda attribute words for each of these categories, and here’s the effect size for evaluative words for each of these categories, along with the p-value for statistical significance. And we find that, overall, target words in the categories of freedom, democracy, election, collective action, and negative figures are more associated with negative attribute words in Baidu Baike than they are in Chinese language Wikipedia, and the opposite for categories such as social control, surveillance, CCP, etc.

Molly Roberts  19:31

So one possibility that you might think of is that perhaps mainland Chinese internet users simply view target categories differently than the overseas internet users contributing to Chinese language Wikipedia, and this difference in word associations between these two sets of internet users is creating the difference between the online encyclopedias. So to try to get at this, we did an online survey of about 1,000 respondents in mainland China, and we asked them: between the following options, which do you think better describes a particular target word? And we took the closest attribute word from Baidu Baike in the word embedding space, and the closest attribute word from Wikipedia, and we asked people to evaluate them. And what we found is that, overall, neither Baidu Baike nor Chinese language Wikipedia seemed to better reflect the associations of our survey respondents. So on the x-axis here is the likelihood of choosing the Baidu word: for some categories and some lists of attribute words, Chinese language Wikipedia was preferred, and for some categories and some lists of attribute words, Baidu Baike was preferred. So we didn’t see one of these necessarily dominate the other in terms of users’ evaluations, which sort of rejected the idea that Baidu Baike is just better reflecting people’s word associations. So the third thing that we did was evaluate the downstream effect of these word embeddings on a machine learning task. The task that we set out to do is classify news headlines according to sentiment, and we use a big general Chinese news headlines data set as our training data.
So say you wanted to create a general sentiment news headline classifier, say to create a recommendation system, or to do content moderation on a social media website. If you were creating this algorithm, you might use a general Chinese news headline data set as training data. And then we’re going to look at how the algorithm that was trained performs on news headlines that contain these target words: words related to democracy and election and freedom and social control and surveillance, and words for historical events or figures that might be of interest to the CCP. So we use three different models: Naive Bayes, SVM, and a neural network. And we look at how, using the same training data and the same models, but simply with different pre-trained word embeddings, one from Baidu Baike and one from Chinese language Wikipedia, the choice of pre-trained word embeddings can influence the systematic classification error of this downstream task. So, overall, do models trained with pre-trained Baidu Baike word embeddings have a slightly more negative classification of headlines that contain democracy than models trained using pre-trained Chinese language Wikipedia word embeddings?
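One way to see the setup: hold the training data and the model fixed, and swap only the embedding lookup used to featurize headlines. Everything below is a toy illustration with invented 2-D vectors and English stand-in words, and a tiny logistic regression standing in for the Naive Bayes/SVM/neural-net models used in the actual study:

```python
import numpy as np

def featurize(headline, emb):
    """Average the pre-trained vectors of the words in a headline.
    Only `emb` changes between the two pipelines."""
    vecs = [emb[w] for w in headline.split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def train_logreg(X, y, lr=0.5, steps=500):
    """Tiny logistic regression fit by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def predict(headline, emb, w, b):
    p = 1 / (1 + np.exp(-(featurize(headline, emb) @ w + b)))
    return "positive" if p > 0.5 else "negative"

# Two embeddings that differ only in where "democracy" sits.
shared = {"good": np.array([1.0, 0.0]), "bad": np.array([-1.0, 0.0]),
          "news": np.array([0.0, 0.5])}
emb_wiki  = {**shared, "democracy": np.array([0.8, 0.1])}
emb_baidu = {**shared, "democracy": np.array([-0.8, 0.1])}

train = [("good news", 1), ("bad news", 0)]  # same training data for both
y = np.array([lab for _, lab in train], dtype=float)

results = {}
for name, emb in [("wiki", emb_wiki), ("baidu", emb_baidu)]:
    X = np.vstack([featurize(h, emb) for h, _ in train])
    w, b = train_logreg(X, y)
    results[name] = predict("democracy news", emb, w, b)

print(results)
```

Because the training headlines never contain the target word, the classifier's treatment of "democracy" is inherited entirely from the pre-trained embedding, which is the mechanism the talk describes.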

Molly Roberts  22:57

So here’s an example. This is a headline, "Tsai Ing-wen: Hope Hong Kong can enjoy democracy as Taiwan does." The Wikipedia label here comes out as positive when we train this, but the Baidu Baike label, when we use the same classifier and same training data, just different word embeddings, comes out as negative. Or "Who’s shamed by democratization in the kingdom of Bhutan": the Baidu Baike label here comes out as negative and the Wikipedia label comes out as positive, even though the human label here is negative. So, you know, what are the systematic mistakes that these classifiers are making? Overall, we see that these classifiers actually have very similar accuracy. So it’s not that models trained on Baidu Baike word embeddings have a higher accuracy than models trained on Chinese language Wikipedia word embeddings; the accuracy is quite similar between the two. But we see big effects on the classification error in each of these different categories. So let the human-labeled score for a news headline containing target word i in category j be negative one if it had negative sentiment and positive one if it had positive sentiment. And we get the predicted scores from our models trained on Baidu Baike word embeddings versus Wikipedia word embeddings. Then we create a dependent variable that is the difference between the Baidu prediction’s error against the human label and the Wikipedia prediction’s error against the human label, and we can estimate how that difference changes by category for the Baidu classifier versus the Wikipedia classifier. So our coefficient of interest here is beta_j.
Are there systematic differences in the direction of classification for a certain category for the algorithm trained with Baidu word embeddings versus Wikipedia word embeddings? And what we find is that there are quite systematic differences, across all the different machine learning models, in the direction that we would expect. So the classifiers trained with word embeddings pre-trained on Baidu Baike are overall much more likely to categorize headlines that contain target words in the categories of freedom, democracy, election, collective action, and negative figures as negative than headlines in the categories of social control, surveillance, CCP, historical events, and positive figures. So, just to think a little bit about the potential implications of this: there are sort of strategic incentives here. Given that, as I hope I’ve convinced you so far, censorship of training data can have a downstream impact on NLP applications, one thing that we might think about is: are there strategic incentives to manipulate training data? We do know that there are lots of government-funded AI projects to create and gather more training data that can then be used in AI, in order to push AI along. Might there be strategic incentives to influence the politics of this training data? And how could this play out downstream? So you might think that there would be a strategic benefit to a government, for example, in manipulating the politics of the training data. And we might think that censored training data could, in some circumstances, reinforce the state, right.
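The difference-in-error regression can be sketched with hypothetical numbers. All labels and predictions below are invented; the point is that regressing the prediction gap on category dummies recovers a mean gap per category, which plays the role of the beta_j coefficients described above:

```python
import numpy as np

# Hypothetical per-headline data: human label (+1/-1), predictions from the
# Baidu-embedding model and the Wikipedia-embedding model, and the category
# of the target word each headline contains.
human      = np.array([+1, -1, +1, -1, +1, -1], dtype=float)
pred_baidu = np.array([-1, -1, -1, +1, +1, +1], dtype=float)
pred_wiki  = np.array([+1, -1, +1, -1, +1, -1], dtype=float)
category   = np.array(["democracy", "democracy", "democracy",
                       "ccp", "ccp", "ccp"])

# Dependent variable: (Baidu error) minus (Wikipedia error). The human
# label cancels, leaving the prediction gap between the two models.
d = (pred_baidu - human) - (pred_wiki - human)

# Regress d on category dummies (no intercept), so each coefficient is
# the mean prediction gap for that category.
cats = sorted(set(category))
X = np.column_stack([(category == c).astype(float) for c in cats])
beta, *_ = np.linalg.lstsq(X, d, rcond=None)
print(dict(zip(cats, beta)))
```

In this made-up example the democracy coefficient comes out negative (the Baidu-embedding model is systematically more negative on those headlines) and the CCP coefficient positive, mirroring the direction of the talk's findings.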
So in applications like predictive text, where we’re creating predictive text algorithms, the state might want associations that are reflective of its own propaganda, and not reflective of things that it would like to censor, to replicate themselves within these predictive text algorithms, right. Or in cases like recommendation systems or search engines, we might think that a state might want these applications trained on data that they themselves curate.

Molly Roberts  27:20

On the other hand, and this I think is maybe less obvious when you first start thinking about this, but became more obvious to us as we thought about it more: censored training data might actually make it more difficult for the state to see society, in ways that might undermine some applications. So for example, content moderation: there are a lot of new AI algorithms to moderate content online, whether it’s to censor it, or to remove content that violates the terms of service of a website, etc. If content moderation is trained on data that has all of the sensitive or objectionable topics removed, it might actually be worse at distinguishing between these topics, from the state’s perspective, than if that initial training data were not censored, right. And so we can think about ways in which censorship of training data might actually undermine what the state is trying to achieve. The other way in which it could be problematic for the interests of the state is in public opinion monitoring. So if, for example, a lot of training data were censored, in that it removed opinions or ideas that were in conflict with the state’s, and that data was then used to understand what the public thinks, for example by looking at social media data, which we know a lot of states do, this could bias the outcome in ways that would make it harder for the state to see society. So, just to give a plug for another paper, which is coming out in the Columbia Journal of Transnational Law, I work with some co-authors on the Chinese legal system, and we show that legal automation, which is one of the objectives of the Supreme People’s Court in China, is sort of undercut by data missingness within the big legal data set that the Supreme People’s Court has been trying to curate.
So, in summary: data reflects, we know, the institutional and political contexts in which it was created. And not only do human biases replicate themselves in AI, but political policies also impact training data, which then has downstream effects on applications. We showed this for word embeddings and downstream NLP applications trained on Baidu Baike and Chinese language Wikipedia word embeddings. But of course, we think that this is a much more general phenomenon that is worthy of future study. And this could have an effect in a wide range of areas, including public opinion monitoring, conversational agents, policing and surveillance, and social media curation. So AI can, in some sense, replicate or enhance a sort of automation of politics. And there have been some discussions about trying to de-bias AI; we think this might be difficult to do, especially in this context, where we’re not really sure what a de-biased political algorithm would look like. And so, thanks to our sponsors, and I’m really looking forward to your questions and comments.

Allan Dafoe  30:41

Thanks, Molly. 66 people are applauding right now. That was fantastic. I know I had trouble processing all of your contributions, and I did a few screenshots, but not enough. So I think there’s a good chance we’ll have to have you flick back through some of the slides. A reminder to everyone: there’s a function at the bottom where you can ask questions, and then you can vote up and down on people’s questions. So it’d be great to have people engaging. So now, over to Jeffrey Ding for some reflections.

Jeffrey Ding  31:13

Great. Yeah, this was really cool, a really cool presentation. Dr. Roberts sent along an early version of the paper beforehand, so we can unpack the paper a little bit more in the discussion. But I just wanted to say off the bat that it’s a really cool paper, and it’s a good example of flipping the arrow: a lot of related work in this area looks at the effect of NLP and language models on censorship, and flipping the arrow to look at the reverse effects is really cool. And it also speaks to this broader issue in NLP research, where the L matters in NLP. Most of the time, we talk about English language models, but we know there are differences across languages in terms of how NLP algorithms are applied. We see that with low-resource languages, like Welsh and Punjabi, where there are still a lot of barriers to developing NLP algorithms. And your presentation shows that even for the two most resourced languages, English and Chinese, there are still significant differences, tied to censorship. And finally, the thing that really stuck out to me from the paper and the presentation is the understanding and integration of the technical details about how AI actually works, tied to political implications. One line that really stuck out, and that you emphasized in the presentation, is that the differences you’re seeing in the downstream effects don’t necessarily stem from the training data in the downstream applications, or even the model itself, but from the pre-trained word embeddings that have been trained on another data set. So that’s just a really cool, detailed finding, and a level of nuance that you just don’t really see in this space. So I’m really excited to dig in. I have three buckets, and then a couple of thoughts to throw at you at the end.
So the first bucket is which target words to choose. And I find it really interesting to think through this, because, for example, you pick election and democracy as two of the examples. And democracy actually brings up an interesting question, in that the CCP has kind of co-opted the word democracy, mínzhǔ. It actually ranks second on the party’s list of 12 core values that they published in December 2013. Elizabeth Perry has written on this sort of populist dream of Chinese democracy. So I’d be curious if you’ve thought about that: when you’re in different Chinese cities, you see huge banners with democracy plastered across them. And I wonder, what if you picked a target word like representation, or something that might speak to this more populist dream, or populist co-option, of what democracy means?

Jeffrey Ding  34:07

And then the second point is on the theoretical expectations tied to this democracy component, and whether we should expect more negative connotations related to democracy in the first place: this idea of the negative historical events and negative historical figures. The question is, why should we expect a more negative portrayal if these events and figures have been erased from the corpus? Shouldn’t it be basically not positive and not negative, just a neutral take? And I think in the paper you kind of recognize this and say that there’s very little information about these historical figures, so their word embeddings do not show strong relationships with the attribute words. And I’m just curious if we should expect the same thing with the negative historical events as well, like Tiananmen Square, which is the most obvious example. And then, on the results, a quick thing that surprised me a little bit was that you showed that Baidu Baike and Wikipedia perform at the same level of accuracy overall. And kind of the setup of the initial question is that Baidu Baike has just become a much better corpus, there’s much more time spent on the corpus, it’s 16 times larger. So I’m just curious why we didn’t see the Baidu Baike corpus perform better.

Jeffrey Ding  35:37

And then, yeah, I had some comments on threats to inference, alternative causes other than censorship that could be producing the results. One of them was just a different population of editors, and it’s cool that you all have already done a survey experiment to combat that alternative cause. I was just thinking, as you were talking about the social media stuff, I wonder if the cleanest way to show censorship as the key driving factor would be to train a language model on a censored sample of Weibo posts versus the population that includes all the Weibo posts from a certain time period. And I know that’s something other researchers have used to study censorship. And then my last thought, just to open it up to bigger questions that I actually don’t know that much about, but it would be cool to know, and there are a lot of technical people on the webinar who could chime in on this point. The hard part about studying these things is that the field moves so fast. So now people are saying that it’s only a matter of time before pre-trained word embeddings and methods like word2vec are completely replaced by pre-trained language models: OpenAI’s work, Google’s work, ELMo, GPT-2, GPT-3. And the idea is that pre-trained word embeddings only incorporate previous knowledge into the first layer of the model, and then the rest of the network still needs to be trained from scratch. And recent advances have basically taken what people have done with computer vision and pre-trained the entire model with a bunch of hierarchical representations. So I guess word2vec would be learning just the edges, and then these pre-trained language models would be learning the full hierarchy of all of the features, from edges to shapes.
And it’d be interesting to explore whether, and to what extent, these new language models would still fall into the same traps, or whether they will provide ways to combat some of the problems that you’ve raised. But yeah, looking forward to the discussion.

Molly Roberts  37:53

Great, fantastic comments, and thank you so much, I really appreciate that. Just to pick up on a few of them. Yeah, we actually had certain priors about the category of democracy; we thought that overall it would be more negative. But of course, we did discuss this issue of mínzhǔ and how it’s been used within propaganda within China. The way that we did it was we used both sets of word embeddings to look at all of the closest words to democracy, and got all hundred of those. So it’s not just mínzhǔ, but also all of the other things that are sort of subcategories of democracy. And so it could be that for one of these words, it might be different than others, right. So I think that we’re seeing sort of the overall category, but I think it’s something we should look a little bit more into, because it could piece out some of these mechanisms. Yeah, so one of the things we find with negative historical events and figures is that we get less decisive results in these categories. And we think that this is because Baidu Baike just doesn’t have entries on these negative historical events and figures. I think this is one example of how censorship of training data can sort of undermine the training corpus, because even from the perspective of the state, if algorithms were using this for, for example, social media or censorship down the road, you would expect the state would want the algorithm to be able to distinguish between these things. But in fact, because of censorship itself, the algorithm is maybe going to do less well: we haven’t shown this yet, but we would expect it to do less well on the censorship task than it would have if the training data weren’t censored in the first place. So that’s sort of an interesting catch-22 from the state’s perspective, right.
And it is interesting that Baidu Baike and Wikipedia, at least in our case, performed with about the same level of accuracy. And there are papers that show that, for certain more complicated models, the magnitude of the Baidu Baike corpus makes it better. But of course, I think it depends on your application. In our case, there wasn’t really a difference in the level of accuracy.

Molly Roberts  40:21

And I really like this idea of looking at censored versus uncensored corpuses of Weibo posts to try to understand how that could have a downstream effect on training; I think that’d be a great way to piece that out. And then this point that you have about pre-trained language models superseding pre-trained embeddings, and this transfer learning task, I think that’s a really, really interesting development. And I think it only makes these questions of what the initial training data is in transfer learning become more and more important, right? Because these models use just the biggest data, whatever has the most data, and that data itself then gets amplified by the algorithm downstream. And it’s hard to think about how to delete those biases without actually just fixing the training data itself, or making it more representative of the population or the language, etc. So yes, I’m looking forward to more discussion, and thank you so much for these awesome comments.

Allan Dafoe  41:35

Great, Jeff, do you want to say any more?

Jeffrey Ding  41:38

Yeah, that last point is really interesting, because some people are saying that, basically, NLP is looking for its kind of ImageNet: this big, really representative, really good data set that you can then train the language models on, and then you do transfer learning to all these downstream tasks. And I think your paper really points to what happens if Baidu Baike becomes the ImageNet of Chinese NLP. And, you know, I don’t know enough of the technical details in terms of whether there are ways in transfer learning to do some of the de-biasing from the original training set, but yeah, I think the paper will obviously still be super relevant to wherever the NLP models are going.

Allan Dafoe  42:30

Great. Well, I have some thoughts to throw in, and I see people asking questions and voting, so that’s good; we’ll probably start asking those too eventually. And also, you two should continue saying whatever you want to say. But just some brief thoughts from me. So I also really like this idea of doing this kind of analysis on a corpus of uncensored data, where you have an indicator for whether each post was censored. And of course, Molly, I think it was your PhD work in which you did this. Yeah. So Molly’s already been a pioneer in this research design. And it’s not to knock this project; I think it would just be a nice complement to it. Because in this project, you have two nice corpuses, but it’s not obvious what’s causing the differences: it could be censorship, it could be fear, it could be different editors, or just contributors. Whereas with that design you’d really get the result anyway, so I think that’d be really cool. Okay, so a question I have, maybe one way to phrase it is: how deep are these biases in a model trained on these corpuses? And I mean, I’d say currently we don’t know of an easy solution for how you can remove notions of bias, or meaning, from a language model. You can often remove the connections that you think of, but then there are still lots of hidden connections that you may not want. Now, here’s maybe an idea for how you could look at how robust these biases are. I think it was your study three; I don’t know if you’re able to flick to that slide. So you have a pre-trained model, and then in study three, you gave it some additional training data. Okay, yeah.

Molly Roberts  44:36

I’ll go to the setup.

Allan Dafoe  44:39

Yeah, exactly. So you have your pre-trained model, and then you give it some additional training data, these Chinese headlines. And the graph I think I want to see is your outcome, how biased it is, as a function of how much training it’s done. Initially, it should be extremely biased. And then as you train (I think this applies to study three, but if not, you could just do it for another corpus where you have the intended associations in that corpus), you could see how long it takes for these inherited biases to diminish. You know, maybe they barely diminish at all; maybe they very rapidly go down with not too large of a data set. Maybe you never quite get to no bias, but you get quite low. So that might be one interesting way of looking at how deep these biases are, and how hard it is to extract them. And I guess another question for you, and a question for anyone in the audience who is a natural language expert: are there techniques today, or likely on the horizon, that could allow for unraveling or flipping these biases, without requiring almost overwhelming the pre-trained data set, some way of doing surgery to change the biases in a way that’s not just superficial, but fairly deep? So, for example, maybe if you have this uncensored and censored data set, you could infer what biases are being produced by censorship. And then, even if you only have a small data set of this uncensored and censored data, you could learn the biases from it and then subtract them out of these larger corpuses. And I guess the question is, how effective would that be? I don’t expect we know the answer, but it might be worth reflecting on.

Molly Roberts  46:50

Those are really great points and really interesting points, and, you know, we’re really standing on the shoulders of giants here. There’s this whole new literature, and I’m embarrassed that my tech didn’t work; I’ll post links to these papers later. It’s a whole literature on bias within AI with respect to race and gender. And certainly one of the things this literature has started to focus on is the harms in downstream applications. So when you talk about how deep these biases are, I think one of the things we have to quantify is what happens downstream: how is this data being used within applications, and how does the application then affect people’s decision making or people’s opportunities further down the road? I think that’s a really hard thing to do, but it’s important, and it’s one of the directions we want to go in, inspired by this literature. I think how bias varies as a function of the training data is really interesting. We’ve done a few experiments on that, but I think we should include that as a graph within the paper. And certainly, as you get more and more training data, the word embeddings will be less important; that would at least be my prior. And this idea of trying to subtract out the bias: I like the idea of trying to figure out, if you had a corpus which included an entire uncensored corpus plus the information on what was censored, how to reverse engineer what things are missing and then add them back into the corpus. That would be the way to go about it. It seems hard, and one of the things it doesn’t overcome is self-censorship.

Because, of course, if people never added that information to the corpus in the first place, you never even see it within the data. And the training data itself is also affected by algorithms, because of what people talk about. For example, on social media it might be that a lot of people are talking about a lot of different political topics, but certain conversations are amplified, say, by being moved up the newsfeed, while others are moved down. So you get a feedback loop in the training data: if you then use that as training data again, it amplifies the effect further. I think there are so many complicated AI feedback loops within this space that they’re really difficult to piece out. But that doesn’t mean we shouldn’t try. Yeah, yeah.
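The “subtract it out” idea discussed here has a simple geometric analogue for word embeddings, in the spirit of hard debiasing (Bolukbasi et al.): estimate a bias direction from words embedded in both corpora, then project it out of each vector. A minimal sketch on made-up data — the paired vectors below are hypothetical stand-ins for the censored and uncensored embeddings, and, as noted above, nothing like this recovers self-censored content:

```python
import numpy as np

def bias_direction(pairs):
    """Estimate a bias direction as the mean difference between
    embeddings of the same words in two corpora (e.g. censored vs.
    uncensored), normalized to unit length."""
    diffs = np.array([a - b for a, b in pairs])
    d = diffs.mean(axis=0)
    return d / np.linalg.norm(d)

def project_out(vec, direction):
    """Remove the component of `vec` along a unit `direction`."""
    return vec - (vec @ direction) * direction

# Hypothetical embeddings of the same five words in two corpora.
rng = np.random.default_rng(1)
censored = [rng.normal(size=20) for _ in range(5)]
uncensored = [rng.normal(size=20) for _ in range(5)]

d = bias_direction(list(zip(censored, uncensored)))
debiased = project_out(censored[0], d)
# After projection, the vector has (numerically) no component left
# along the estimated bias direction.
print(abs(float(debiased @ d)) < 1e-9)
```

As the discussion notes, this only removes the linear component you can estimate; deeper, hidden associations in a full language model would survive such surgery.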

Allan Dafoe  49:33

Yeah. A thought that occurred to me during your talk is that I can imagine the future of censorship being more like light editing. I submit a post, and then the language model says, let’s use these slight synonyms for the words you’re using, ones that have a better connotation. You can imagine the whole social discourse being run through this filter of the right associations. And I guess a question for you on this: is there an arms race with citizens? If citizens don’t entirely endorse the state-pushed associations, what countermeasures can they take? If one word is appropriated, can they deploy other words? I know there are symbolic games where you can use a symbol as a substitute for a censored term. So is there this kind of arms-race dynamic happening, where the state wants to control associations and meanings, and people want to express meanings that are not approved of, and so they change the meaning of words? And maybe in China we would even see a faster cycling or evolution of the meaning of words, because of this cat-and-mouse game?

Molly Roberts  50:55

Yeah, I think that’s absolutely right. I have definitely talked to people who have created applications for suggesting words that would get around censorship. That would be an interesting technological cat-and-mouse game, with AI being used to censor and AI also being used to evade censorship. I think one of the interesting implications, if you think about the political structure of AI, is this: you have a set of developers who aren’t necessarily political themselves; they’re creating productivity applications, entertainment applications, that are used by a wide variety of people. And they’re looking for the most data, the data that’s going to get them the highest accuracy. Because of that, I think the state has a lot of influence over what types of training data sets are developed, and a lot of influence on these applications, even if the application developers themselves are not political. And I think that’s an interesting interaction. I’m not sure how much states around the world have thought about the politics within training data, but it could be something they start thinking about, and something to try to understand: as training data becomes more important, how they might try to influence it. Yeah.

Allan Dafoe  52:28

Good. Well, we’re at time, so I’m afraid the remaining questions will go unanswered. There was a request for a reference: what was the automating-fairness paper you mentioned? And I’m sure people are excited for this paper to come out, so we look forward to seeing it, and to continuing to read your really fascinating and creative work. Your work is really thoughtful and effortful, especially in the extent to which you use different quantitative, experimental, and almost field-experimental designs to answer these questions. You can only deploy these experiments if you know the nature of the political phenomenon well enough, and have the resources to devise the experiments you have been running. So it’s very exciting work, and thanks for sharing the latest today.

Molly Roberts  53:39

Thanks. Thanks so much for having me. And yeah, thanks, Jeff, also for your fabulous comments.

Molly Roberts  53:47

Thanks, everybody, for coming.




21May

GovAI Annual Report 2018 | GovAI Blog


The governance of AI is in my view the most important global issue of the coming decades, and it remains highly neglected. It is heartening to see how rapidly this field is growing, and exciting to be part of that growth. This report provides a short summary of our work in 2018, with brief notes on our plans for 2019.

2018 has been an important year for GovAI. We are now a core research team of 5 full-time researchers and a network of research affiliates. Most importantly, we’ve had a productive year, producing over 10 research outputs, ranging from reports (such as the AI Governance Research Agenda and The Malicious Use of AI) to academic papers (e.g. When will AI exceed human performance? and Policy Desiderata for Superintelligent AI) and manuscripts (including How does the Offense-Defense Balance Scale? and Nick Bostrom’s Vulnerable World Hypothesis).

We have ambitious aspirations for growth going forward. Our recently added 1.5 FTE of Project Manager capacity, split between Jade Leung and Markus Anderljung, will hopefully enable this growth. We are always looking to help new talent get into the field of AI governance; if you’re interested, visit www.governance.ai for updates on our latest opportunities.

Thank you to the many people and institutions that have supported us, including our institutional home–the Future of Humanity Institute and the University of Oxford–our funders–including the Open Philanthropy Project, the Leverhulme Trust, and the Future of Life Institute–and the many excellent researchers who contribute to our conversation and work. We look forward to seeing what we can all achieve in 2019.

Allan Dafoe
Director, Centre for the Governance of AI
Future of Humanity Institute
University of Oxford

Below is a summary of our research and public engagement, as well as our team and growth.

Research

On the research front we have been pushing forward a number of individual and collaborative research projects. Below is a summary of some of the biggest pieces of research published over the past year.

AI Governance: A Research Agenda
GovAI/FHI Report.
Allan Dafoe

The AI Governance field is in its infancy and rapidly developing. Our research agenda is the most comprehensive attempt to date to introduce and orient researchers to the space of plausibly important problems in the field. The agenda offers a framing of the overall problem, an attempt to be comprehensive in posing questions that could be pivotal, and references to published articles relevant to these questions.

Malicious Use of Artificial Intelligence
GovAI/FHI Report.
Miles Brundage et al [incubated and largely prepared by GovAI/FHI]

Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. The report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats.

The report was featured in over 50 outlets, including the BBC, The New York Times, The Telegraph, The Financial Times, Wired and Quartz.

Deciphering China’s AI Dream
GovAI/FHI Report
Jeffrey Ding

The Chinese government has made the development of AI a top-level strategic priority, and Chinese firms are investing heavily in AI research and development. This report contextualizes China’s AI strategy with respect to past science and technology plans, and it also links features of China’s technological policy with the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, it benchmarks China’s current AI capabilities by developing a novel index to measure any country’s AI potential and highlights the potential implications of China’s AI dream for issues of AI safety, national security, economic development, and social governance.

Cited by dozens of outlets, including The Washington Post, Bloomberg, MIT Tech Review, and South China Morning Post, the report will form the basis for further research on China’s AI development.

The Vulnerable World Hypothesis
Manuscript.
Nick Bostrom

The paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology.

Discussed in Financial Times.

How Does the Offense-Defense Balance Scale?
Manuscript.
Ben Garfinkel and Allan Dafoe

The offense-defense balance is a central concept for understanding the international security implications of new technologies. The paper asks how this balance scales, meaning how it changes as investments into a conflict increase. To do so it offers a novel formalization of the offense-defense balance and explores models of conflict in various domains. The paper also attempts to explore the security implications of several specific military applications of AI.

Policy Desiderata for Superintelligent AI: A Vector Field Approach
In S. Matthew Liao ed.  Ethics of Artificial Intelligence. Oxford University Press.
Nick Bostrom, Allan Dafoe, and Carrick Flynn

The paper considers the speculative prospect of superintelligent AI and its normative implications for governance and global policy. Machine superintelligence would be a transformative development that would present a host of political challenges and opportunities. The paper identifies a set of distinctive features of this hypothetical policy context, from which we derive a correlative set of policy desiderata — considerations that should be given extra weight in long-term AI policy compared to in other policy contexts.

When Will AI Exceed Human Performance? Evidence from AI Experts
Published in Journal of Artificial Intelligence Research.
Katja Grace (AI Impacts), John Salvatier (AI Impacts), Allan Dafoe, Baobao Zhang, Owain Evans (Future of Humanity Institute)

In this expert survey, we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. The piece was the 16th most discussed article of 2017 according to Altmetric, and was reported on by the BBC, Newsweek, NewScientist, Tech Review, ZDNet, Slate Star Codex, and The Economist, among others.

Governing Boring Apocalypses: A New Typology of Existential Vulnerabilities and Exposures for Existential Risk Research
Published in Futures.

Hin-Yan Liu (University of Copenhagen), Kristian Cedervall Lauta (University of Copenhagen), and Matthijs Maas

This article argues that an emphasis on mitigating the hazards (discrete causes) of existential risks is an unnecessarily narrow framing of the challenge facing humanity, one which risks prematurely curtailing the spectrum of policy responses considered. By focusing on vulnerability and exposure rather than simply existential hazards, the paper proposes a new taxonomy that captures factors contributing to these existential risks. It argues that such “boring apocalypses” may well prove more endemic and problematic than those commonly focused on.

Syllabus on AI and International Security
GovAI Syllabus.
Remco Zwetsloot

This syllabus covers material located at the intersection between artificial intelligence and international security. It is designed to be useful to (a) people new to both AI and international relations; (b) people coming from AI who are interested in an international relations angle on the problems; (c) people coming from international relations who are interested in working on AI.

Scaling Up Humanity: The Case for Conditional Optimism about Artificial Intelligence
In Should we fear artificial intelligence? an in-depth analysis for the European Parliament by the Scientific Foresight Unit.
Miles Brundage

This paper makes a case for conditional optimism about AI and fleshes out the reasons one might anticipate AI being a transformative, and possibly transformatively beneficial, technology for humanity. If humanity successfully navigates the technical, ethical, and political challenges of developing and diffusing powerful AI technologies, AI may have an enormous and potentially very positive impact on humanity’s wellbeing.

Public engagement

We have been active in various public fora – you can see a sample of the presentations, keynotes, panels and interviews that our team has engaged in here.

Allan Dafoe has been giving several talks each month, including at a hearing of the Security and Defense subcommittee of the European Parliament and at Oxford’s Department for International Relations, and was featured in the documentary “Man in the Machine” by VPRO Backlight (video, at 33:00). He has also done outreach via the Future of Life Institute, Futuremakers, and 80,000 Hours podcasts.

Nick Bostrom participated in several government, private, and academic events, including the DeepMind Ethics and Society Fellows event, the Tech for Good Summit convened by French President Macron, Sam Altman’s AGI Weekend, Jeff Bezos’s MARS, the World Government Summit in Dubai, and the Emerging Leaders in Biosecurity event in Oxford, among others. In 2018 his outreach included circa 50 media engagements, including BBC radio and television, podcasts such as SYSK and WaitButWhy, print interviews, and multiple documentary filmings.

Our other researchers have also participated in many public fora. Jeffrey Ding has, on the back of his report on China’s AI ambitions, given interviews to the likes of the BBC and has recently been invited to lecture to DC policy-making circles at Georgetown University. Additionally, he runs the ChinAI newsletter, with weekly translations of writings on AI policy and strategy from Chinese thinkers, and has contributed to MarcoPolo’s ChinAI, which presents interactive data on China’s AI development. Matthijs Maas presented work on “normal accidents” in AI at the AAAI/ACM conference on Artificial Intelligence, Ethics, and Society, and presented at a Cass Sunstein masterclass on human error and AI (video here). Sophie Fischer was recently invited to China as part of a German-Chinese young professionals’ program on AI, and Jade Leung has presented her research at conferences in San Francisco and London, notably at the latest Deep Learning Summit on AI regulation.

Moreover, we have participated in Partnership on AI working groups on Safety-Critical AI; Fair, Transparent, and Accountable AI; and AI, Labor, and the Economy. The team has also interacted considerably with the effective altruism community, including a total of six talks at this year’s EA Global conferences.

Members of our team have also published in select media outlets. Remco Zwetsloot, Helen Toner and Jeffrey Ding published “Beyond the AI Arms Race: America, China, and the Dangers of Zero-Sum Thinking” in Foreign Affairs, a review of Kai-Fu Lee’s “AI Superpowers: China, Silicon Valley, and the New World Order.” In addition, Jade Leung and Sophie-Charlotte Fischer published a piece in the Bulletin of the Atomic Scientists on the US Defense Department’s Joint Artificial Intelligence Center.

Team and Growth

We have large ambitions and demands for growth. The Future of Humanity Institute has recently been awarded £13.3 million from the Open Philanthropy Project, we have received $276,000 from the Future of Life Institute, and we have collaborated with Baobao Zhang on a $250,000 grant from the Ethics and Governance of Artificial Intelligence Fund.

The team has grown substantially. We are now a core research team of 5 full-time researchers, with a network of research affiliates who are often in residence, coming to us from across the U.S. and Europe at institutions such as ETH Zurich and Yale University. As part of signalling our growth to date, as well as our planned growth trajectory, we are now the “Center for the Governance of AI”, housed at the Future of Humanity Institute.

We continue to receive a lot of applications and expressions of interest from researchers across the world who are eager to join our team. We are working hard with the operations team here at FHI to ensure that we can meet this demand by expanding our hiring pipeline capacity.

On the operations front, we now have 1.5 FTE Project Manager capacity between two recent hires, Jade Leung and Markus Anderljung, which has been an excellent boost to our bandwidth. FHI’s recently announced DPhil scholarship program as well as the Research Scholars Program are both initiatives that we are looking forward to growing in the coming years in order to bring in more research talent.



