
Legitimate expectations, legal certainty and economic sanctions


Blogpost 35/2024

Disclosure: the author was a member of the Applicant’s counsel team

 

Introduction

This post concerns a question which ought to matter to all who practise in or study EU law: does the EU administrative law acquis provide the Union’s courts with the tools they need to supervise the exercise of Union power across a range of competences which were simply not in contemplation at the time the acquis was developed? There are two prompts for this post.

The first prompt is Joana Mendes’ recent and persuasive demonstration (European Constitutional Law Review 2022;18(4):706-736) of how the current EU administrative law acquis grew up as a result of a “symbiosis of judicial and scholarly developments” in the pre-Maastricht era. The result was that, by the late 1980s, there was a consensus that the subjugation of EU institutions to administrative law constraints (as then understood and theorised) had become “an essential aspect of the EC’s legitimacy”. Mendes argues (again persuasively) that this consensus and the principles which underlay it were the product of (amongst other things) the “institutional and legal reality” of what was then the European Community – i.e. “a functional polity whose interventionist institutional and decision-making structures were created for the establishment and functioning of a common market”. Mendes concludes by urging scholarly (and, perhaps, judicial) “self-reflection” as to whether this framework for analysis remains “fit for purpose” in an EU with competences far beyond what those pioneering scholars and jurists had conceived of.

The second prompt is the General Court’s recent decision in Case T-426/21 Nizar Assaad v Council ECLI:EU:T:2023:114. Here, the Court was asked to apply two core components of the administrative law acquis (the principles of legitimate expectation and legal certainty) in a context which would have been inconceivable to the Court at the time the underlying legal principles were developed – targeted economic sanctions introduced to further a foreign policy objective of the Union as a whole. The Assaad decision provides an opportunity for reflection of the type urged by Mendes and, it is argued, indicates that the Court is capable of standing back and interrogating the principles which underlay the early decisions establishing the EU administrative law framework, and how they ought to apply in the much-changed context of Union activity in the Lisbon era.

 

Background to the Assaad case

The Applicant in the Nizar Assaad case was Mr Nizar Assaad, a dual citizen of Canada and Syria. Mr Assaad was a prominent businessman who resided in Syria until the uprising in 2011 when he left and relocated to Beirut and Dubai. As will become apparent, Mr Assaad was never involved in politics and had no connection to the Syrian regime. Mr Assaad’s business interests from 2000 onwards were largely outside Syria, and he had no business connections in Syria at all following the 2011 uprising. Rather, he had the ill-fortune to have a surname which bore (in English transliteration) a passing similarity to that of the Syrian president Bashar al-Assad.

The story begins in August 2011 when the Council added an individual identified as “Nizar Al-Assaad” as “entry 36” to the list of those subject to the EU’s Syrian sanctions regime, which is set out in Annex II to Regulation (EU) No 36/2012 concerning restrictive measures in view of the situation in Syria. The Applicant knew that entry 36 could not relate to him as he had not done any of the things suggested in the accompanying reasons, nor did he satisfy any of the listing criteria. However, since the Council had (it might be said, in dereliction of its duty to list individuals in compliance with the principle of legal certainty) given no identifying information, there was a real risk that third parties would conclude that he was the person listed at entry 36. Unsurprisingly, this was of the utmost concern to the Applicant, not least because he risked the severe reputational impact of third parties misapprehending that he was associated with President Assad’s regime. Furthermore, there was a risk that third parties would (wrongly) conclude that he was subject to the strictures of the sanctions regime, including the far-reaching consequences of a complete EU-wide freezing of all his assets and economic resources and of being prevented from entering or travelling through any EU Member State.

The Applicant’s representatives tried repeatedly to contact the Council with a view to clarification, but to no avail. The Applicant then brought an application for annulment in respect of entry 36, on the basis that he was self-evidently not the person referred to. The Council did not dispute this. Rather, the Council wrote to the Applicant confirming that “the targeted person is President Al-Assad’s cousin” and that the Applicant was “not the subject of the listing”, although he has a “similar name”. Entry 36 was clarified, and the General Court concluded that the annulment application was inadmissible as the Applicant was not the addressee of the measure: Assaad v Council (T‑550/11, not published, EU:T:2012:266).

There the story should have ended. Indeed, there was every indication that it would. For the subsequent decade, whenever there was any confusion as to who was identified in entry 36, the Council made clear that it was not the Applicant. Occasionally, this confusion was the result of administrative errors by the Council. While this caused the Applicant needless stress and inconvenience, the Council always responded by making clear that the Applicant was not the man referred to in entry 36.

Against that background (and at the risk of understatement), it was a matter of surprise to the Applicant when, in February 2021, the Council wrote to him maintaining that, contrary to everything it had said to him, the Court, and the world at large over the previous decade, it had decided that he had in fact been the person listed since 2011. Furthermore, the Council asserted that it was “maintaining” his listing, and that it would be amending the published statement of reasons to make this clear.

 

The application for annulment

The Applicant immediately brought an application for annulment, the primary ground being that the Council had made a manifest error of assessment. The Applicant established that he was not a person to whom the Syrian sanctions regime could apply: he was not associated with the Syrian regime, did not have any ties (professional or personal) to either President Assad’s family or the Makhlouf family and did not have business interests in Syria at all (still less in a prominent capacity). The Court agreed, and annulled the listing on the basis that it could not be supported in fact (even given the very large margin that the Court accords to the Council in such matters).

The Court did not, however, let matters rest there. The Court went on to find that the Council’s conduct had been in breach of the applicant’s legitimate expectations and of the related principle of legal certainty. It is the Court’s approach to these issues which presents an opportunity for reflection of the kind urged by Mendes.

 

Assessment of the Court’s approach

As Mendes notes, the principles of legitimate expectation came to form part of the corpus of EU administrative law as a result of the “transplanting” into EU law of principles deriving from the domestic administrative law of Member States. Following that transplant, the underlying EU legal principles of legitimate expectation were settled in a line of pre-Maastricht decisions which establish that, where a Union institution considers that it has adopted an “incorrect position”, it will be permitted to resile from that position within a reasonable period, but only where that would not frustrate the legitimate expectations of the individual concerned (or those of third parties) who had been led to rely on the lawfulness of their conduct. Where a Union institution “finds that a measure which it has just adopted is tainted by illegality” it will have a right to withdraw that measure only “within a reasonable period”. Even then, “that right may be restricted by the need to fulfil the legitimate expectations of a beneficiary of the measure, who has been led to rely on the lawfulness thereof”: Case C-365/89 Cargill v Produktschap voor Margarine, Vetten en Oliën, paragraph 18, citing Case 14/81 Alpha Steel v Commission.

All very well in circumstances where the contested act concerned steel quotas (Alpha Steel) or agricultural subsidies to a legal person (Cargill). But how does the principle apply where the Union contends that it was previously mistaken as to a matter as serious as whether the Applicant was a supporter or beneficiary of the Syrian regime who is to be treated as, in effect, persona non grata? Does one apply the same approach? Does one give the Council a greater freedom to correct what it contends are errors? Does one weigh the interests of the affected individual differently?

Returning to the Nizar Assaad case, the Council (for its part) denied that there was any retrospectivity at all. The Council’s argument was that because economic sanctions operated only prospectively, there could be no question of retrospectivity. On its telling, it was only if the contested measure could be said to have retrospective economic consequences that the principle would bite. One can see the logic of the Council’s position, having regard to the circumstances of the (pre-Maastricht) cases which established this principle.

The Court’s reasons, however, evince a sensitivity to the quite different context of the case before it, and in particular what one might call the human context of the contested measure. This is evident in the terms in which the Court rejected the Council’s restrictive approach, concluding that while it was “true that, in principle, the funds of a person or entity may be frozen only for the future”, this was not a principled answer to the Applicant’s claim. Accordingly, the Court went on (at para 198) to hold that “confining the effects of the 2021 measures solely to the freezing of the applicant’s funds and economic resources, or to restrictions on admission to the territory of the Member States, wrongly disregards the effects which the adoption of those measures has had on the applicant’s overall legal situation and, in particular, on his reputation and integrity”. This was undoubtedly correct – as the Court went on to explain at para 200: “in establishing, by means of the 2021 measures, that the applicant’s name has been included on the lists at issue since the 2011 measures, the Council asserts that, since that date, the applicant has had links with the Syrian regime and has carried out the various acts which justified his name being entered on the lists at issue and retained since then. Such an assertion is sufficient to alter retroactively the applicant’s legal situation, quite beyond the freezing of his funds alone.”

The same sensitivity is evident in the Court’s treatment of the Council’s alternative submission, which was that any retrospectivity or frustration of the Applicant’s legitimate expectations could be justified by reference to the Council’s objectives. Again, the objectives relied upon (“consolidating and supporting human rights and international humanitarian law”) were of a nature far removed from the economic context in which the Court’s general principles were settled. The Court accepted that correction of errors in sanctioning measures could contribute to this aim, and that this was in the general interest (para 219). Nevertheless, the Court concluded that the Council “failed to have due regard for the applicant’s legitimate expectations by adopting restrictive measures with retroactive effect against him” (para 241). Here, again, the Court demonstrated an acute awareness of the human situation before it, reasoning (at para 246) that the Council’s error-correction prerogative was “subject to limits, namely observance of the principle of the protection of legitimate expectations”, compliance with which is “all the more important” in the sanctions context “since the consequences for the legal situation of the persons and entities concerned by the restrictive measures are not insignificant”. The Court’s assessment, like the author’s above, might, perhaps, be accused of understatement.

 

Conclusion

Standing back, the Court’s approach in the instant case is – it is suggested – an instance of the kind of self-reflection urged by Mendes. Faced with a situation far removed from that considered in the leading authorities, the Court stood back and interrogated what principles underlay those decisions, and how they ought to apply in the much-changed context of the Union activity in issue in the particular case before it. To return to one of Mendes’ themes, such introspection (judicial and scholarly) is not only welcome, but also essential to the continued legitimacy of the EU legal order.





Meaningful ban or paper tiger?


Blogpost 34/2024

After years of anticipation, the final text of the Artificial Intelligence Act (‘the Act’) was approved by the Council on May 21st of this year. The landmark regulation, the first of its kind, positions the EU at the forefront of the global effort to establish a comprehensive legal framework on artificial intelligence. The Act aims to safeguard fundamental rights and to promote the development of safe and trustworthy AI by adopting a risk-based approach, mandating stricter scrutiny for higher-risk applications. At the highest level of risk, the Act contains a list of “prohibited uses” of artificial intelligence (Article 5) due to their potentially detrimental consequences for fundamental rights and Union values, including human dignity, freedom, and equality (see Recital 28). While the Act prohibits the use of specific instances of AI-based predictive policing, we should seriously consider whether the ban will have meaningful effects in practice, or may become a mere instrument of symbolic politics. Leaning towards the latter, this blog post cautiously suggests that this concern reflects broader questions about the Act’s commitment to developing “human-centric” AI and whether it effectively encompasses all individuals within its protective scope.

Predictive policing is not defined in the Act, but a leading definition, provided by Perry et al., is ‘the use of analytical techniques to identify promising targets’ to forecast criminal activity. As highlighted by Litska Strikwerda (Dutch only), this may involve identifying potential crime locations (predictive mapping), as well as assessing the likelihood that an individual will either become a victim of a crime or commit a crime (predictive identification). While predictive identification has significant potential as a crime prevention tool, it has faced substantial criticism, particularly concerning potential human rights implications. For example, the extensive data collection and processing involved in predictive identification raise serious concerns about data protection and privacy, including the correct legal basis for such data processing and the potential intrusion into individuals’ private lives. Additionally, the discriminatory nature of algorithms can exacerbate existing structural injustices and biases within the criminal justice system. Another issue is the presumption of innocence, given that predictive identification approaches criminality from an almost entirely opposite perspective, labelling individuals as potential criminals before they have engaged in any criminal conduct. Recital 42 of the Act cites this concern in justifying the prohibition on AI-based predictive identification.

Initially classified as a high-risk application of artificial intelligence under the Commission’s proposal, predictive identification is now designated as a prohibited use of artificial intelligence under Article 5(1)(d) of the Act. This post seeks to demonstrate the potential limitations of the ban’s effectiveness through a critical analysis of this provision. After providing a brief background on the ban, including the substantial lobbying by various human rights organisations after earlier versions of the Act failed to include predictive identification as a prohibited use, the provision and its implications will be analysed in depth. First, this post points out the potential for a “human in the loop” workaround due to the prohibition’s reference to “profiling”. Secondly, it will discuss how the Act’s general exemption clause for national security purposes contributes to a further weakening of the ban’s effectiveness.

 

The Ban in the Act

The practice of predictive identification had been under scrutiny for years before the final adoption of the AI Act. For example, following the experiments of “living labs” in the Netherlands, Amnesty International published an extensive report on the human rights consequences of predictive policing. The report highlights one experiment in particular, namely the “Sensing Project”, which involved collecting data about passing cars (such as license plate numbers and brands) to predict the occurrence of petty crimes such as pickpocketing and shoplifting. The idea was that certain indicators, such as the type of car, could help identify potential suspects. However, the system disproportionately targeted cars with Eastern European number plates, assigning them a higher risk score. This bias highlights the potentially discriminatory effects of predictive identification. Earlier that same year (2020), a Dutch lower court ruled that the fraud detection tool SyRI violated the right to private life under the ECHR, as it failed to fulfil the “necessary in a democratic society” condition under Article 8(2) ECHR. This tool, which used “foreign names” and “dual nationality” as possible risk indicators, was a key element in the notorious child benefits scandal in the Netherlands.
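To make the bias mechanism concrete, the following minimal sketch shows how a rule-based risk scorer of the kind described above can encode discrimination directly through a proxy feature such as plate origin. The feature names, weights and threshold are hypothetical illustrations, not the actual Sensing Project logic:

```python
# Hypothetical sketch of a rule-based risk scorer; all feature names and
# weights are invented for illustration (not the actual Sensing Project).

RISK_WEIGHTS = {
    "plate_eastern_european": 0.6,  # proxy feature that directly encodes bias
    "vehicle_type_van": 0.2,
    "near_shopping_area": 0.2,
}

def risk_score(observation: dict) -> float:
    """Sum the weights of all indicators present in an observation."""
    return sum(w for key, w in RISK_WEIGHTS.items() if observation.get(key))

local_car = {"vehicle_type_van": True, "near_shopping_area": True}
foreign_car = {**local_car, "plate_eastern_european": True}

print(risk_score(local_car))    # 0.4 -> below a (hypothetical) 0.7 flagging threshold
print(risk_score(foreign_car))  # 1.0 -> flagged, despite identical observed behaviour
```

Because the plate-origin indicator outweighs the behavioural ones, two otherwise identical observations diverge purely on nationality-linked data, which is precisely the discriminatory effect identified in the Amnesty International report.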

Despite widespread concerns, a ban on predictive policing was not included in the Commission’s initial proposal of the Act. Shortly after the publication of the proposal, several human rights organizations, including Fair Trials, started intensive lobbying for a ban on predictive identification to be included in the Act. Subsequently, the IMCO-LIBE report recommended prohibiting predictive identification under Article 5 of the Act, citing its potential to violate the presumption of innocence and human dignity, as well as its discriminatory potential. Lobbying efforts continued vigorously throughout the negotiations (see this signed statement of 100+ human rights organizations).

Eventually, the clause was incorporated in the Parliament’s resolution and is now part of the final version of the Act, reading as follows:

[ The following AI practices shall be prohibited: ] the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the likelihood of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics. [ … ] This prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity. (Article 5(1)(d)).

 

The “Human in the Loop” Problem

The prohibition applies to instances of predictive identification based solely on profiling, or on the assessment of a natural person’s personality traits and/or characteristics. The specifics of these terms are unclear. For the definition of “profiling”, the Act (Article 3(52)) refers to the definition given in the GDPR, which defines it as any automated processing of personal data to evaluate personal aspects relating to a natural person (Article 4(4) GDPR).

The first question that arises here relates to the difference between profiling and the assessment of personality traits and characteristics. Inger Marie Sunde has highlighted this ambiguity, noting that profiling inherently involves evaluating personal characteristics. A difference between “profiling” and “assessing” may lie in the degree of human involvement. While profiling implies an (almost) entirely automated process with no meaningful human intervention, there is no clear indication of the level of human involvement required for “assessing”.

A deeper concern lies in the question as to what should be understood by “automated processing”. The test for a decision to qualify as solely automated, including profiling, is that there has been no meaningful human intervention in the decision-making process. However, the exact meaning of “meaningful” here has not been spelled out. For example, the CJEU in the SCHUFA Holding case confirmed automated credit scoring to be a solely automated decision (in the context of Article 22 GDPR), but did not elaborate on the details. While it is clear that the human role should be active and real, not symbolic and marginal (e.g. pressing a button), a large grey area remains (for more, see also here). In the context of predictive identification, this creates uncertainty as to the extent of the human involvement required, opening the door for a potential “human in the loop” defence. Law enforcement authorities could potentially circumvent the ban on predictive identification by demonstrating “meaningful” human involvement in the decision-making process. This problem is further aggravated by the lack of a clear threshold for the definition of “meaningful” in this context.

The second paragraph of the prohibition on predictive identification in the Act states that the prohibition does not apply to AI systems supporting human assessment of criminal involvement, provided this is based on “objective and verifiable facts directly linked to a criminal activity”. This could be understood as an instance of predictive identification where the human involvement is sufficiently “meaningful”. Nevertheless, there is room for improvement in terms of clarity. Additionally, this conception of predictive identification does not reflect its default operational mode – where AI generates predictions first, followed by human review or verification – but rather the opposite scenario.
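The grey area can be made concrete with a short sketch. The function names and the 0.7 threshold below are hypothetical assumptions, used only to contrast a formally present but rubber-stamp human step with the fact-based human assessment the carve-out appears to envisage:

```python
# Hypothetical sketch contrasting degrees of "human involvement" in
# predictive identification; names and threshold are invented.

def model_risk(person: dict) -> float:
    """Stand-in for an AI system's predicted likelihood of offending (0.0-1.0)."""
    return person.get("model_score", 0.0)

def rubber_stamp_decision(person: dict) -> bool:
    # A human formally "approves" whatever the model outputs. Such marginal,
    # symbolic involvement arguably leaves the decision "based solely on
    # profiling", i.e. within the scope of the prohibition.
    return model_risk(person) > 0.7

def fact_based_decision(person: dict, verified_facts: list[str]) -> bool:
    # Closer to the carve-out: a human assessment already grounded in
    # "objective and verifiable facts directly linked to a criminal
    # activity", with the model output merely supporting it.
    return bool(verified_facts) and model_risk(person) > 0.7
```

The legal difficulty, as noted above, is that the Act does not fix where between these two poles “meaningful” human intervention begins.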

In the event that an instance of predictive identification does not fit the definition of a prohibited use, this does not mean that the practice is effectively free from restrictions. Other instances of predictive identification, not involving profiling or the assessment of an individual’s personality traits, may be classified as “high-risk” applications under the Act (see Article 6 in conjunction with Annex III 6(d)). This distinction between prohibited and high-risk practices may hinge on whether the AI system operates solely automatically, or includes meaningful human input. If the threshold for meaningful human intervention is not clearly defined, there is a risk that predictive identification systems with a degree of human involvement just beyond being “marginal and symbolic” might be classified as high-risk rather than prohibited. This is significant, as high-risk systems are merely subject to certain safety and transparency rules, rather than being outright prohibited.

In this regard, another issue that should be considered is the requirement of human oversight. According to Article 14 of the Act, high-risk applications of AI should be subject to “human oversight” to guarantee their safe use, ensuring that such systems are used responsibly and ethically. However, as is the case with the requirement of “meaningful human intervention”, the exact meaning of “human oversight” is also unclear (as explained thoroughly in an article by Johann Laux). As a consequence, even in instances where predictive identification does not qualify as a prohibited use under Article 5(1)(d) of the Act, but is considered high-risk instead, uncertainty about the degree of human involvement required remains.

Finally, it should be noted that even if the AI system plays only a complementary role to that of the human, another problem exists. It pertains to the potential biases of the actual “human in the loop”. Recent studies suggest humans are more likely to agree with AI outcomes that align with their personal predispositions. This is a problem distinct from the inherent biases present in predictive identification systems (as demonstrated by, for example, the aforementioned cases of the “Sensing Project” and the Dutch childcare benefits scandal). Indeed, even the human in the loop “safeguard” may not offer the requisite counterbalance to the use of predictive identification systems.

 

General clause on national security purposes

Further, the Act includes a general exemption for AI systems used for national security purposes. As national security is beyond the EU’s competences (Article 4(2) TEU), the Act does not apply to potential uses of AI in the context of the national security of the Member States (Article 2 of the Act). It is uncertain to what extent this exception may influence the ban on predictive identification. National security purposes are not uniformly understood, although established case law has confirmed several instances, such as espionage and incitement to, and approval of, terrorism, to be included within its meaning (see this report by the FRA). Yet, given the degree of discretion granted to the Member States in this area, it is uncertain which instances of predictive identification might be excluded from the Act’s application.

Several NGOs focusing on human rights (particularly in the digital realm) have raised concerns about this potential loophole, arguing that the exemption under the Act is broader than permitted under European law. Article 19, an advocacy group for freedom of speech and information, has argued that such a broad exemption contradicts European law, stating that ‘the adopted text makes the national security a largely digital rights-free zone’. Similar concerns have been raised by Access Now. The fear is that Member States might invoke the national security exemption to justify the use of predictive identification techniques under the guise of safeguarding national security. This could undermine the effectiveness of the ban in practice, allowing for the continued use of such technologies despite their potential to infringe upon fundamental rights. For example, the use of predictive policing in counter-terrorism efforts could disproportionately target minority communities and individuals from non-Western backgrounds. Combined with the existing concerns about biases and the potential for discriminatory outcomes in the context of predictive identification, this is a serious ground for concern.

Rather than a blanket exemption, national security considerations should be addressed on a case-by-case basis. This approach finds support in the case law of the ECJ, including its ruling in La Quadrature du Net, where it reiterated that the exemption is not by definition synonymous with the absolute non-applicability of European law.

 

Conclusion

While at first sight the ban on predictive identification appears to be a significant win for fundamental rights, its effectiveness is notably weakened by the potential for a “human in the loop” defence and the national security exemption. The human in the loop defence may allow law enforcement authorities to engage in predictive identification if they assert human involvement, and the lack of a clear definition of “meaningful human intervention” limits the provision’s impact. Additionally, the exemption for AI systems offering mere assistance to human decision-making still allows for human biases to influence outcomes, and the lack of clarity regarding the standards of “human oversight” for high-risk applications is not promising either. The national security exemption further undermines the ban’s effectiveness: given its broad and ambiguous nature, there is significant scope for Member States to invoke it.

Combined, these loopholes risk reducing the ban on predictive policing to a symbolic gesture rather than a substantial protection of fundamental rights. In addition to the well-documented downsides of predictive identification, there is an inherent tension between these limitations in the ban and the overarching goals of the AI Act, including its commitment to safeguard humanity and develop AI that benefits everyone (see for example Recitals 1 and 27 of the Act). Predictive identification may aim to enhance safety by mitigating the threat of potential crime, but it may very well fail to benefit those already marginalised, such as minority communities and individuals from non-Western backgrounds, who are at higher risk of being unfairly targeted, for example under the guise of counter-terrorism efforts. Addressing these issues requires clearer definitions, stricter guidelines on human involvement, and a nuanced approach to national security exceptions. Without such changes, the current ban on this instance of predictive policing risks becoming merely symbolic: a paper tiger failing to confront the real challenges and potential harms of the use of AI in law enforcement.





Examining Public and Private Control of Media Organs in Hungary and Italy


Blogpost 33/2024

The state of media pluralism around the world stands at one of its most transformative points in modern history. The development of new technologies and the impact of social media platforms have radically reshaped society. Governments around the world have responded in kind. According to Freedom House, governments have shifted from open, laissez-faire internet exchange to ‘greater government intervention in the digital sphere.’ In 2023, global internet freedom declined for the 13th consecutive year. Many point to the European Union as a bastion for ‘third way’ media co-regulation—balancing China’s authoritarian grip on expression and the United States’ unrestricted accommodations for free speech. Whereas one might view the European Union as a leader in media pluralism with appropriate safeguards for personal privacy, several Member State national governments stand in direct violation of such values. By April 2024, the Liberties Media Freedom Report declared that media freedom and pluralism stand ‘perilously close to the breaking point’ within the European Union. The European Union has produced legislation—specifically the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), and the European Media Freedom Act (EMFA)—to try to address degrading media freedom within the EU community. This article examines how said legislation—specifically the EMFA—does not sufficiently secure media pluralism guarantees in two Member State case studies, Hungarian public media and Italian private media. With the European Union historically perceived as a ‘beacon of openness and liberal democracy,’ Member State derogations from media pluralism undermine such international standards of liberal democratic governance and expose the Union to charges of hypocrisy.

 

Codifying EU Media Law

As enshrined in EU law, media pluralism and media freedom stand among the EU’s core principles and constitute a fundamental right for all EU citizens. Importantly, Article 11 of the EU Charter of Fundamental Rights states:

  1. Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.
  2. The freedom and pluralism of the media shall be respected.

To this end, three major media protection packages have made their debut on the EU institutional stage. Implemented in 2018, the EU General Data Protection Regulation (GDPR) Regulation (EU) 2016/679 serves as unparalleled ‘third-way’ legislation intended to protect the personal data of EU citizens while still bolstering necessary information-related services such as journalistic free expression via Article 85. The Digital Services Act (DSA) Regulation (EU) 2022/2065 represents a novel avenue for confronting the levels of hate speech, terrorist propaganda, and disinformation that have plagued major social media platforms in recent years; the DSA requires tech companies to enact policies aggressively combating illicit content or face billions of euros in fines. And most recently, in 2024, the European Media Freedom Act (EMFA) Regulation (EU) 2024/1083 formulates strict protections for journalistic practices and seeks transparency in public media funding and editorial independence.

Such legislation from the EU institutions displays a concerted effort to preserve media freedom at the supranational level. However, such practices do not reflect the ‘on-the-ground’ situation at the Member State level, nor will these laws serve as a panacea for long-standing, entrenched, and anti-competitive media freedom violations in various EU Member States. Two Member State case studies—Hungary and Italy—expose the gaps in the attempted remedies of these media packages, specifically the EMFA. To date, the EMFA represents the European Union’s foremost legislation on ensuring the integrity, independence, and durability of media freedom and media organizations. While the EMFA provisions would work to create a comprehensive future framework for media operations in a theoretical silo, this legislation arrives too late given the current state of affairs within the EU. As such, this piece will examine the disconnect between several of the more apparently robust Articles of the EMFA—Articles 4, 5, 6, 8-13, 22, and 25—and the media freedom environments in Hungarian public media and Italian private media. Whereas these measures might serve generally as substantive approaches to the reinforcement of media pluralism, they ultimately fail to address the deeply rooted and anti-competitive nature of leading Hungarian and Italian media organs.

 

Exerting Control over Hungarian Public Media

Hungary’s ruling Fidesz Party has openly and legally curtailed independent media since 2010 with an illiberal structure that will persist despite the aforementioned EU legislation. The illiberal structure’s public media constriction in Hungary functions through entirely legal and open parliamentary procedures to control and restrict media content. In 2011, Fidesz established the Media Authority and Media Council in Cardinal Acts CLXXXV and CIV. The Media Authority serves as an umbrella media regulatory commission made up of three central branches: the President, the Media Council, and the Office of the Media Council. The media laws require official registration with the Media Authority before commencing media services, stipulate morality clauses and unbiased content, impose sanctions upwards of €720,000, and consolidate all public broadcasting and advertising under one organization—the Media Services and Support Trust Fund (MTVA). As the Council of Europe noted, the President of the Media Authority ‘holds extensive and concentrated powers for nine years over all regulatory, senior staffing, financing and content matters across all media sectors.’ Despite the EMFA’s proposed intention of ‘avoid[ing] the risk of… undue political influence on the media’ [EMFA Recital 73 of the Preamble], the Regulation will not effect any material change in this highly concentrated, ruling-party-aligned state organization.

Technically, the appointment process of the President of the Media Authority entirely aligns with Article 5(2) EMFA, which requires ‘transparent and non-discriminatory procedures’ for appointments to management boards of public media service providers. The Hungarian government points to the fact that a constitutionally-codified confirmation vote of a two-thirds majority in Parliament would attribute popular, universal consensus to Media Authority appointees. However, these claims only provide a rhetorical veneer of nonpartisan composition. A gerrymandered two-thirds parliamentary Fidesz supermajority accommodates a streamlined confirmation process for pro-Fidesz political appointees. As such, the Media Authority regulatory commission is singularly composed of allies of the Hungarian ruling party who cannot, and would not, be recalled from their positions—another point of alignment with Article 5(2) EMFA. The first President of the Media Authority, Annamária Szalai, was a Fidesz MP. The second President—Mónika Karas—served as the defense attorney for two Fidesz-aligned media outlets. The third and current President—András Koltay—has carried a lead position in Mathias Corvinus Collegium, the Fidesz-affiliated think tank and educational institution.

With the European Union attempting to outline some basic standards for media pluralism, many of its responses have come far too late, particularly in the Hungarian case. In assessing the novel EU legal mechanisms for media pluralism, one does not see possible redress from the European supranational level. While the EMFA seeks transparency in appointment processes, it does not carry any mechanism for fully ensuring nonpartisan government appointees in regulatory bodies—nor could it, given that appointees are determined at the Member State level. One study found that by 2017, nearly 90% of all Hungarian media was already ‘directly or indirectly controlled by Fidesz.’ At the Prague European Summit 2024, European Commissioner for Values and Transparency Věra Jourová indicated that while the EMFA makes significant strides for establishing protections of editorial independence in public media and media ownership transparency, Hungarian media state capture is ultimately at the whim of the national government and fundamentally irreversible from the European level. Commissioner Jourová is correct in this assessment, particularly given that much of the EMFA approaches media institutions with a ‘freedom from interference’ negative liberty approach [EMFA Recitals 15, 18 and 19].

The well-entrenched, intricate, and legalistic implementation of the Hungarian Media Authority will continue unaffected by the EMFA. Article 4(2) EMFA outlines the need for Member State self-restraint in intervening in editorial decisions in media organs and regulatory authorities to preserve editorial independence. This guideline falls entirely flat à la hongroise; the now-purged editorial boards of Hungarian media providers are composed of decision-makers who voluntarily align with the government position. As previously mentioned, Article 5(2) EMFA mandates transparent, open, and non-discriminatory appointment processes for the heads of public media providers. The procedure for appointing a new President of the Media Authority is entirely transparent and outlined in Hungarian law; however, the appointee him or herself has consistently come from a pro-Fidesz background in the media. Articles 8-13 EMFA shape the role of the newly-established European Board for Media Services. While an entirely respectable mandate, the Board would be composed of the respective Member State national regulatory authorities, effectively legitimizing the Hungarian Media Authority in European-level decision-making. Finally, Article 6(1) EMFA seeks to clarify and publicize the ownership structure of private media. In Hungary, it is no secret that close Orbán ally Andrew Vajna owns TV 2—the most-watched television channel in Hungary in 2022—and that fellow ally Lőrinc Mészáros owns Hungary’s largest print media company, Mediaworks. Their outsized power over private media will not change with simple audience knowledge of the ownership of these companies. Already as of 2020, 74% of Hungarian voters believed that Hungarian media has a strong political bias and 66% believed it was ‘disconcerting that the media are increasingly concentrated in Fidesz’s hands.’ Even with the changes of the EMFA entering into force on 8 August 2025, Hungarian state capture of media capably evades EU media pluralism guarantees.

 

Establishing Conflicts of Interest in Italian Private Media

To turn to the Italian case as it pertains to the EMFA, the concern over privately-owned, party-affiliated media dominating the advertising markets prompts major conflict of interest considerations. A number of party-aligned television channels controlled by one individual—former Prime Minister Silvio Berlusconi, through his Mediaset conglomerate—have dominated the media advertising market share in Italy over the past three decades. The top six most-viewed television channels from 2008 to 2017 were divided between the state-run RAI and the private Mediaset company—with RAI channels maintaining a plurality of viewers. However, because of legal limits on advertising spend in public channels, Mediaset has consistently captured a disproportionate advertising market share. For example, in 2009, RAI and Mediaset respectively maintained 39.2% and 38.8% of the total television audience, but Mediaset held 63.7% of advertising spend to RAI’s 25.5% in the same year. European Commissioner for Values and Transparency Věra Jourová noted at the Prague European Summit 2024 that one of the EMFA’s goals is to establish transparency concerning party-affiliated media channels and to promote fair competition in the media markets. And yet the problem arises in the Italian case, where a private, partisan media outlet already controls a dominant market share and the EMFA’s regulatory efforts are specific only to public advertising spend.

The effort to assess fair competition in media markets manifests in Article 22 EMFA, and transparent public spending on media platforms is codified in Article 25 EMFA. Article 22 EMFA establishes a reporting mechanism regarding media market concentrations. Article 25 EMFA seeks proportionate, transparent, and objective measures for determining public-advertising spend on media platforms. With Article 22 EMFA, it is difficult to see a ‘through-line’ between a report on highly-concentrated media outlets and the actual remediation of said monopolizing force. Article 25 EMFA would successfully combat arbitrary Member State funding for a media company which might result in illegitimately awarded public monies. But while this provision would stymie willful ruling-party media clientelism, it is unable to address private advertising spend, which can serve as a source of indirect conflict of interest lobbying. In the Berlusconi case, where he actually owned the media outlets, one study found that firms shifted their allocated advertising spend to Mediaset during Berlusconi’s respective tenures as Prime Minister, boosting Mediaset profits by 25% across his years in office. Mediaset’s growth in advertising market share between 1993 and 2011 was marked by major increases at the start of his third and fourth governments. While Mediaset saw a 25% increase in profits during the period of the various Berlusconi governments from 1994 to 2011, RAI’s profits decreased by 9% despite viewership remaining relatively consistent. The EMFA provisions do not provide any recourse for addressing such conflicts of interest or monopolizing tendencies in privately-owned media companies and the resultant discretionary firm-by-firm advertising spend. And with Mediaset functioning from a majority position in the media advertising market—the company managed on average 55% of television advertising revenue from 2019 to 2022—the possibility of retrofitting fair competition procedures is unlikely. As such, Article 22 EMFA’s competition guidelines are toothless and Article 25 EMFA is too narrowly tailored in the Italian case, considering the reality that Berlusconi’s Mediaset already controls both a strong television viewership and an even stronger advertising stake. While the proportionality and transparency measures are respectable from behind a ‘veil of ignorance,’ the Berlusconi media empire has already positioned itself as the controlling stake in advertising revenue, and private firms can continue to operate via indirect conflict of interest lobbying beyond the confines of EMFA regulation.

 

Concluding Comments

The reality is that changes in the media landscape take place at the national level; the EU’s EMFA regulation can only do so much to secure Member State-specific media pluralism—particularly if editorial offices and ownership structures for these media organs have already been usurped. Even more concerning is the fact that these methods for state or partisan capture of media outlets serve as entirely replicable models for other nations—carrying grave connotations for the future of liberal democratic governance in constitutional democracies in the EU and around the world. In Hungary, Orbán’s efforts to control independent media and propagate his political agenda have irreversibly violated principles of media pluralism which—as the European Court of Human Rights once noted—stand as the ‘cornerstone of [a] democratic and pluralist society’ (Manole and Others v. Moldova, para 54). In Italy, Berlusconi’s media conglomerate Mediaset found avenues to solidify advertising control and financially benefit from firm advertising spend during his time as Prime Minister. While the EMFA prompts some important regulatory changes for the future state of media pluralism, it falls short of fully addressing the current state of the Hungarian public media and Italian private media ecosystems. Such a topic provides context to the worldwide retreat of media pluralism, internet freedom, and free speech in liberal democratic societies; the backsliding of media pluralism—and liberal democratic principles writ large—is not confined to strictly authoritarian regimes but instead osmotically permeates throughout previously entrenched liberal democracies.







Regulating the Virtual World as a new State


By Annelieke Mooij and Jip Tushuizen

Blogpost 32/2024

The European Commission has recently published an initiative that aims to regulate virtual worlds and Web 4.0, structured around the objectives of the Digital Decade policy programme. Virtual reality (VR) is a relatively old concept that was introduced primarily through gaming environments but has been given a new meaning through the introduction of the “Metaverse”. The Metaverse allows users to enter an immersive virtual reality that offers relaxation, education or an office environment. The wide variety of virtual realities that make up the Metaverse is expected to bring usage to new levels. It is estimated that, by 2026, 25% of the global population will spend at least one hour a day in the Metaverse for the purposes of either work, shopping, education or entertainment. Unlike current online stores or movie platforms, the Metaverse will provide a 3D immersive environment where users can interact with other users. Companies like Apple, Google, Roblox and Microsoft have made significant investments, with the total market size expected to hit 800 billion US dollars by 2030, potentially contributing 2.8% to global GDP in the tenth year after its creation.

Interaction with other users has been proven to produce positive, but also very negative, virtual experiences, sometimes even amounting to virtual rape. Victims have stated that whilst this act was virtual, the emotional damage was physical. VR technology has improved since 1993, when the first virtual rape occurred. Its current state can be so realistic that the human body responds to it as if it were reality, impacting both our conscious and subconscious emotional state. Immersive environments further have a significant impact on users’ vision of the world. For example, gamers who are continuously confronted with oversexualized female avatars in games are more likely to tolerate sexual harassment and to support the rape myth.

Regulating the Metaverse hence does not seem an unnecessary luxury. This post will argue that the current regulatory approach under the Digital Services Act is insufficient. Whilst new regulation is highly desirable, it should not extend to provide de facto statehood to Metaverse providers.

 

Regulatory choices in the EU

The European Commission is currently working on a new legislative proposal to regulate virtual realities. While the initiative is still in its infancy, it concretely puts forward four pillars. The most important from a regulatory perspective is the third pillar: government. The Commission is not clear about how it intends to regulate virtual worlds, but it refers to the applicability of the Digital Services Act (DSA). The DSA’s approach is primarily focused on the transparency of terms and conditions and complaint procedures, but it does not regulate content. It determines the applicability of fundamental rights (see e.g. Art. 1(1)) but fails to provide concrete elaboration. It further considers that content flagged as ‘illegal’ should be appropriately dealt with, but only refers to Union law and the national law of Member States for the exact definition of what constitutes ‘illegal content’ (see e.g. Art. 16 and Art. 3(h)). Harmful content is furthermore excluded from this regime.

The counter-model to the DSA’s regulatory approach, so far not considered by the Commission in its Initiative, would be an emphasis on content regulation, whereby providers have to allow all speech without discrimination. Speech could only be limited when it is prohibited by law. This type of approach severely limits the freedom to conduct a business (Art. 16 CFREU), as all virtual realities are de facto regulated as public spaces. Nevertheless, this approach is considered to contribute to a safe digital environment. It would, however, entail imposing on private legal persons legal duties and limits that closely resemble those of a State. A legal person would have to monitor and effectively enforce the fundamental rights of its users. In this monitoring, the provider arguably becomes an extension of the State’s police. Similarly, virtual worlds can install their own dispute resolution proceedings. Increasing regulatory responsibilities for Metaverse providers could reach a point where they are de facto mini-States. Whilst this approach may increase digital safety, it raises the question of whether we could and should think of virtual realities as the new State.

 

Human rights and the Metaverse

Earlier generations of the internet were expected to produce substantial societal benefits by facilitating more efficient communication infrastructures. However, the destructive force of the internet has arguably turned out to be greater than initially anticipated, with its ability to foster strong polarization, spread misinformation and reinforce pre-existing patterns of oppression. An example of the latter can be found on Facebook, with the platform notoriously punishing black women’s comments speaking out against racism and sexism. In fact, Facebook’s algorithms have targeted hate speech directed at white persons disproportionately compared to hate speech directed at any other societal group. Platform policies seeking to protect marginalized communities hence actually reinforce marginalization. Algorithms further generally consider the white male as the default, which resurfaced when Amazon had to discontinue using an AI hiring tool that ranked resumes containing variations of the word “women’s” as less desirable.

With the further development of newer generations of the internet facilitating entirely virtual spaces, the foregoing issues will be aggravated exponentially if regulated insufficiently or incorrectly. In fact, it has already been established that users of existing virtual spaces struggle with reporting mechanisms. Users describe that it is often difficult to identify the speaker, that usernames are not easily traceable and that it is relatively difficult for a new user to figure out how to report harassment. The definition of “online harassment” is, further, highly subjective. Harassment within a virtual space is experienced much more intensely by some identities than others; besides, full embodiment and presence within a virtual space facilitate a far more intense experience. It logically follows that users choose to customize their avatar in a way that reflects an identity that is subjected to the least amount of harassment, rather than have their avatar reflect their own physical identity. As a person of colour has pointed out: “Since I can choose to get treated like a black person or not get treated like a black person—I’m probably going to choose not to get treated like a black person.”

Where one identity is deemed more “favourable” than the other, it logically follows that Metaverse spaces risk being overrepresented by identities rendered more “favourable” compared to others. Not only does this inherently communicate a narrative of desirability, it also projects a remarkably one-sided view of the world. Such a one-sided projection of reality unarguably runs the risk of seriously enhancing existing patterns of oppression towards minority groups both virtually and physically.

 

Human rights obligations of States vs companies

The modern conceptualization of Statehood is defined by the Westphalian system, identifying State sovereignty and the principle of territorial integrity as the foundations of the international legal system since 1648. Consequently, international human rights law is traditionally premised on the assumption that the sovereign State, as the quintessential bearer of international obligations, is responsible for the protection of fundamental rights within its territory. This logic firstly insinuates a hierarchy between the “oppressive” sovereign on the one hand and the citizen requiring protection from this oppression on the other. Secondly, this Westphalian logic is premised on the notion that the sovereign State is the exclusive actor within a legal system that is capable of wielding oppressive power against an individual.

Crucially, corporations are not, or at least not directly, subjected to international human rights obligations as it is the State that is burdened with this responsibility. Currently, companies merely face the moral responsibility to conduct a process of assessing, preventing and mitigating existing and potential adverse human rights impacts of operations across their supply chain. However, this process of human rights due diligence is derived from a soft law mechanism which does not produce legally binding obligations. Whilst the EU legislator has recently adopted a legally binding framework, emphasis remains on the avoidance of contribution to human rights violations rather than a responsibility to actively safeguard human rights protection across business operations.

 

The oppressive corporation

The traditional idea of the State monopoly on power and coercion has been proven to hold less relevance for today’s realities, with surveillance tasks increasingly becoming fragmented across various public and private actors. In fact, the idea of assigning State-like regulatory duties to private companies is far from modern, with former colonial companies like the Dutch and English East and West India companies being granted sovereign powers ranging from the right to form colonies to the right to use force. Interpreting the concept of ‘power’ in a broader sense, namely the ability to create or destroy wealth within a system, it follows that this trend undeniably mirrors today’s realities, with corporations representing 69 out of the top 100 largest economic entities globally in 2015.

With citizens increasingly attending to their daily needs and responsibilities in the Metaverse, the question of to what extent this virtual world then factually still differs from life in a nation State is not far-fetched. Metaverse operators, predominantly represented by white or Asian non-queer men, can decide who gets to enter their virtual space and what type of behavior is deemed desirable. While the DSA mentions the applicability of fundamental rights to the regulation of online platforms, it is still questionable how this precisely plays out in practice. For example, the question arises whether operators can exclude certain identities from their virtual space without a valid cause. Upon entry, a user is obliged to accept the rules and guidelines of the platform. If the user disagrees, it is still uncertain to what extent these guidelines could effectively be challenged in a court. Users are left with the option of either agreeing and signing away their rights, or disagreeing and facing subsequent exclusion from the platform. Such corporate policies are therefore capable of imposing restrictions on the user’s fundamental rights protection that undeniably resemble the intrusive character of regulatory decisions taken by the nation State.

 

Corporate sovereignty?

Accordingly, the corporate creator of the virtual space increasingly assumes the factual position of a regulatory actor, with consequences that reach considerably further than previously seen. It takes on an authoritative role that inherently implies a hierarchy towards its users, mirroring the hierarchical position of the State towards its citizens. Mark Zuckerberg has already indicated that he considers Facebook to resemble a government, given the policies it develops and the number of users it has gathered. The company even announced the introduction of its own digital currency: the Libra.

Besides a government, a recognized State under international law possesses a permanent population, a defined territory and the capacity to enter into relations with other States. The population of a Metaverse consists of its users, with the distinct virtual space providing a defined territory these users can inhabit. Some argue that the sovereignty of the company is based on data rather than territory, rendering the boundaries of this sovereignty rather fluid. Metaverse companies could further enter into interoperability agreements with other companies, determining the conditions under which users and their data could ‘travel’ from one virtual space to another. Yet the extent to which these criteria apply to companies remains highly debatable. Indeed, corporate actors are not authorized to exercise physical coercion against citizens or to collect taxes. While the latter objection might be countered by arguing that the collection of data largely equates to the collection of taxes, given its monetizable character, or by the sale of data storage plans based on the amount of virtual goods a user wishes to store, the point remains that corporate sovereignty inherently takes a different form than State sovereignty. This becomes more apparent when considering that States and companies project different narratives onto their target audiences: the former employ a vocabulary of citizenry, while the latter treat their subordinates as ‘customers’, with the consequent prioritization of commodification over human autonomy.

Nevertheless, the foregoing demonstrates that Metaverse operators are factually exercising regulatory actions that mirror those of a State. Scholars draw an analogy with the financial principle of ‘same activity, same regulation’, prioritizing a logic of assigning regulatory duties based on an actor’s conduct rather than its status. In the context of a Metaverse, the overwhelming majority of power and factual control over the virtual space is likely to be assigned to one or a few dominant actors. The extent to which the sovereign State can still exercise factual control over a space that is entirely detached from State borders is severely limited. Consequently, the ability of the regulatory approach taken under the DSA to effectively regulate such Metaverse spaces is highly questionable.

 

Conclusion

The development of Metaverse spaces undeniably creates promising societal benefits. Yet, as seen with the regulation of Web 2.0, the stakes for the Commission’s Web 4.0 initiative are exceptionally high. It is crucial to stay ahead of developments in order to prevent the balance of power between States and private corporations from shifting drastically. If the issue of human rights protection continues to be overlooked by the initiative, the emergence of an all-powerful Metaverse operator, or possibly even a “Virtual Wild West”, becomes increasingly realistic. While existing legislative efforts provide promising frameworks, further elaboration of the human rights duties of companies is crucial to facilitate a responsible transition into the virtual space. And while it is largely indisputable that treating Metaverse platforms as entirely sovereign States is both undesirable and unrealistic, it is essential to assign responsibilities that mirror the factual position and regulatory actions of operators. The EU legislator will have no easy task in determining to what extent such duties should be imposed on providers and what form they should take.



Source link

26Jun

Predicting AI’s Impact on Work


GovAI research blog posts represent the views of their authors, rather than the views of the organisation.

Introduction

In the coming years, AI will impact many people’s jobs.

AI systems like ChatGPT and Claude can already perform a small but growing number of tasks. This includes, for example, drafting emails, producing illustrations, and writing simple computer programs.

Increasingly capable AI systems will produce increasingly significant economic effects. Automation will boost economic growth, but it will also disrupt labour markets: new jobs will be created and others will be lost. At a minimum, there will be immediate harm to workers who are forced to look for new work. Depending on how different groups are affected, inequality could rise or fall.

While automation is not a new phenomenon, some economists have suggested, controversially, that AI’s impacts might be more disruptive than those of previous labour-saving technologies. First, AI-driven automation could potentially happen faster. Second, at least in the long run, AI might more heavily substitute for human labour. The second concern means that, beyond some level of automation, the typical person’s ability to find well-paid work could actually begin to decline. However, there is still no consensus on what to expect.

If policymakers could more clearly foresee AI’s economic impacts, then they could more readily develop policies to mitigate harms and accelerate benefits. To this end, some researchers have begun to develop automation evaluations: forward-looking assessments of AI’s potential to automate work, as well as automation’s downstream impacts on labour markets.

All existing approaches to automation evaluations have major limitations. Researchers are not yet in a position to make reliable predictions.

Fortunately, though, there is a great deal of room to produce more informative evaluations. This post will discuss a number of promising directions for further work, such as adopting more empirically-grounded methods for estimating automation potential and leveraging the “staged release” of AI systems to study early real-world effects. As the uncertain economic impact of AI looms increasingly large, advancing the science of automation evaluations should be a policy priority.

The Importance of Automation Evaluations

Policymakers will need to address the economic impacts of AI to some extent. This may mean crafting policies to support high-growth industries, limit harm to displaced workers (for example, by supporting retraining efforts), or redistribute concentrated gains.

Reliable automation evaluations would give policymakers more time to plan and craft effective policies, by offering them foresight into AI’s economic impacts.1 Without this foresight, policymakers could find themselves scrambling to respond to impacts after-the-fact. A purely reactive approach could be particularly inadequate if AI’s impacts unfold unusually quickly.2

Current Automation Evaluation Methods

The potential automation impacts of AI can be evaluated in several ways, though existing approaches have important limitations.

In this post, I review two prominent methods: estimating “occupational exposure” to AI using task descriptions and measuring the task-level productivity impacts that workers get from using AI systems.3 The former helps give a broad but imprecise overview of potential labour market impacts across the economy. The latter provides more precise, but narrowly focused evidence on the effect of AI on particular occupations and tasks.

After describing these two approaches, along with their respective advantages and limitations, I will discuss limitations that are relevant to both methods. I will then outline a number of research directions that could help to mitigate these limitations.

Estimating Occupational Exposure to AI Using Task Descriptions

What is “occupational exposure”?

One way to evaluate potential automation impacts is to estimate occupational exposure to AI. While definitions vary across studies, a task is generally considered “exposed” to AI if AI can be meaningfully helpful for completing it. One job is then typically said to be “more exposed” to AI than another if a larger proportion of the tasks that make up this job are exposed.

For example, because AI has proven to be useful for tasks involving writing, and less useful for physical tasks, secretaries are likely to be more exposed to today’s AI systems than roofers.

Exposure is a flexible concept that has been operationalised in a range of ways. For example, in a recent Science paper, my co-authors and I define a task as “exposed” to AI systems like ChatGPT if these systems could double the productivity of the worker performing the task.

How is exposure currently estimated?

Exposure estimates are not typically based on empirical observations of AI being applied to tasks.

Instead, these estimates are produced by drawing on existing datasets — such as the United States Department of Labor’s O*NET database — that describe a wide range of worker tasks. These descriptions may then be given to AI experts, who are asked to apply their knowledge of AI to judge whether the task is exposed. Alternatively, experts may develop a grading rubric that classifies tasks as exposed or unexposed based on whether they have particular traits (e.g. whether the tasks are described as “physical” and “routine”). There are also a number of other possible variations on these approaches.4

Drawing on existing task descriptions is appealing, because it avoids the need to collect new data or perform costly experiments. As a result, these studies are often able to report exposure estimates for a large number of jobs across the economy. This can help identify broad patterns and macro-level findings that would not be clear if only a handful of occupations were considered.
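To make the aggregation step concrete, the following is a minimal sketch of how task-level exposure labels are typically rolled up into occupation-level scores. The occupations, tasks, and labels are invented for illustration and are not drawn from O*NET or any cited study:

```python
# Minimal sketch: aggregating task-level exposure labels into
# occupation-level exposure scores. All data below are invented.
from collections import defaultdict

# Each record: (occupation, task description, judged exposed to AI?)
task_labels = [
    ("Secretary", "Draft routine correspondence", True),
    ("Secretary", "Schedule appointments", True),
    ("Secretary", "Greet visitors in person", False),
    ("Roofer", "Install shingles on sloped roofs", False),
    ("Roofer", "Estimate material quantities", True),
    ("Roofer", "Inspect roof structures on site", False),
]

counts = defaultdict(lambda: [0, 0])  # occupation -> [exposed, total]
for occupation, _task, exposed in task_labels:
    counts[occupation][0] += int(exposed)
    counts[occupation][1] += 1

# An occupation's exposure score is the share of its tasks judged exposed.
for occupation, (exposed, total) in counts.items():
    print(f"{occupation}: {exposed / total:.0%} of tasks exposed")
```

Real studies weight tasks by importance and draw on far richer data, but the breadth-for-accuracy trade-off is visible even here: the resulting scores are only as good as the underlying task-level judgements.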

How accurate are current exposure estimates?

To some extent, these exposure studies achieve breadth by sacrificing accuracy. We cannot expect perfect accuracy from estimates that are based only on descriptions of tasks.

The methodologies applied in these studies, particularly the earliest studies, have attracted a number of critiques. It has also been noted that different exposure studies produce conflicting estimates, which implies that at least some estimates must be substantially inaccurate.

On the other hand, some patterns have emerged across recent exposure studies. Arguably, some of them are also beginning to be empirically validated. For example, recent studies have consistently found that higher-wage work is more exposed to AI in the US. This finding matches a pattern in AI adoption data in the US: on average, industries with higher AI adoption rates also have higher average wages.

Ultimately, we do not yet know exactly how accurate description-based exposure estimates are or can become.

Measuring Task-Level Worker Productivity Impacts

A common alternative approach to automation evaluation is to measure the task-level productivity impacts of AI systems on workers. Using this method, researchers randomly assign access to an AI system to some workers but not to others. They then measure how much more or less productively workers with access to the AI system can perform various tasks compared to workers without access. 

Unlike description-based exposure estimates, these worker productivity impact estimates are based on empirical observation. They also attempt to provide information about exactly how useful AI is for a given task, rather than simply classifying a task as “exposed” or “not exposed.”

For example, one study reported a 40% time savings and 18% boost in quality on professional writing tasks. Another reported a 55.8% time savings for software developers working on a coding task using GitHub Copilot.5

Ultimately, these experiments can offer more reliable and fine-grained information about how useful AI is for performing a specific task. The chief limitation of these experiments, however, is that they can be costly to design and run, particularly if they are implemented in real-world work settings. As a result, unlike description-based exposure estimates, they are typically applied to individual occupations and small sets of tasks. They are therefore more limited in scope and do not provide the broad economy-wide insights that can be captured by occupational exposure studies.
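As a stylised illustration of the underlying comparison, consider the sketch below; the completion times are invented, and a real study would also assess statistical significance and output quality rather than completion time alone:

```python
# Sketch of the core comparison in a worker productivity experiment:
# task completion times for randomly assigned treatment (AI access)
# and control (no AI access) groups. All numbers are invented.
from statistics import mean

control_minutes = [52, 47, 60, 55, 49, 58]    # workers without AI access
treatment_minutes = [30, 28, 35, 26, 33, 31]  # workers with AI access

avg_control = mean(control_minutes)
avg_treatment = mean(treatment_minutes)
time_savings = 1 - avg_treatment / avg_control

print(f"Average time without AI: {avg_control:.1f} minutes")
print(f"Average time with AI:    {avg_treatment:.1f} minutes")
print(f"Estimated time savings:  {time_savings:.0%}")
```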

Limitations of Existing Methods

These two automation evaluation methods both have distinct strengths and weaknesses. Description-based exposure studies sacrifice accuracy for breadth, while worker productivity studies offer greater accuracy but on a smaller scale. Researchers deciding between these methods therefore need to consider their priorities in light of a significant trade-off between accuracy and breadth.

While the two methods have distinct limitations, there is a third category of limitations that applies to both methods. Because of these limitations, neither method can be used to directly predict the impact of AI on real-world variables such as wages, employment, or growth.

In particular, neither approach can effectively predict or account for:

  • Barriers to AI adoption
  • Changes in demand for workers’ outputs
  • The complexity of real-world jobs
  • Future AI progress
  • New tasks and new ways of producing the same outputs

To accurately predict real-world impacts, further evidence and analysis are ultimately needed.

Neither method accounts for barriers to AI adoption or for changes in demand for workers’ outputs

Ultimately, these methods can only predict whether AI has the potential to affect an occupation in some significant way. They do not tell us that AI actually will have a significant impact. They also do not tell us whether the impact will be positive or negative for workers.

For example, if we learn that AI can boost productivity in some occupation, this does not mean that it actually will be widely adopted by that occupation within any given time frame. There may be important barriers that delay adoption, such as the need for employee training, process adjustments, or costly capital investments.

Even if AI does boost productivity within an occupation, the implications for wages and employment are not necessarily clear. For example, if copy editors become more productive, this may allow them to earn more money by completing more assignments. However, it may also cause them to earn less, since the amount they earn per assignment could decline as the overall supply of copy-editing grows. The net effect will depend, in part, on how much demand for copy-editing rises as prices fall. However, neither exposure studies nor worker productivity impact studies tell us anything about impacts on the demand for copy-editing services.
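The dependence on demand elasticity can be made concrete with a stylised calculation. The sketch below uses a textbook isoelastic demand curve and invented numbers; it is not a claim about the actual copy-editing market:

```python
# Stylised sketch: with isoelastic demand Q = A * P**(-eps), total
# earnings are R = P * Q = A * P**(1 - eps). If productivity gains push
# prices down, earnings rise only when demand is elastic (eps > 1).
# All numbers are invented.

def revenue(price: float, elasticity: float, scale: float = 100.0) -> float:
    quantity = scale * price ** (-elasticity)
    return price * quantity

for eps in (0.5, 2.0):
    before = revenue(price=1.00, elasticity=eps)
    after = revenue(price=0.80, elasticity=eps)  # a 20% fall in prices
    print(f"elasticity {eps}: earnings change {after / before - 1:+.0%}")
# elasticity 0.5: earnings fall about 11%; elasticity 2.0: they rise 25%.
```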

Neither method fully accounts for the complexity of real-world jobs

For the most part, both of the automation evaluation methods I have discussed treat jobs as collections of well-defined, isolated tasks.6 The evaluations consider how useful AI is for individual tasks, with the hope that researchers can easily make inferences about how AI will affect occupations that contain those tasks.

However, this approach overlooks the nuanced reality of jobs. Many occupations actually involve a complex web of interrelated tasks, interpersonal interactions, and contextual decisions. 

For example, even if an AI system can perform most of the individual tasks that are considered to be part of a particular worker’s job, this does not necessarily imply that the job can be automated to the point that the work is completely replaced by the technology. Furthermore, researchers cannot reliably judge how AI will impact a worker’s overall productivity if they do not understand how their workflow or role will shift in response to the adoption of AI.

Neither method accounts for future AI progress

The economic impact of AI will partly depend on how existing AI capabilities are applied throughout the economy. However, the further ahead we look, the more these impacts will depend on what new AI capabilities are developed.

Empirical worker productivity studies can only measure the impact of existing AI systems. Exposure studies typically ask analysts to judge how exposed various tasks are to existing AI capabilities. It will inevitably be harder to estimate exposure to future capabilities, when we do not yet know what these capabilities will be.

Neither method accounts for new tasks and new ways of producing the same outputs

The introduction of new technologies like the electric lightbulb or digital camera did not automate work by performing the same tasks that workers had previously performed in order to light a gas lamp or develop a photograph in a darkroom. Instead, these technologies completely changed the set of tasks that a worker needed to perform in order to produce the same or better-quality output (e.g. a brightly lit street lamp or a photograph).

These historical examples suggest that we cannot necessarily assume that a job will remain immune to significant changes just because AI is not helpful for performing the tasks it currently involves.

When considered on a broad, historical scale, technological progress does not only allow existing tasks to be performed more efficiently. It also makes entirely new tasks possible. Existing approaches to automation evaluations do little to help predict or understand the implications of these new tasks.

Towards Improved Evaluations

Below, I discuss a few ways in which evaluations could be improved to overcome some of these trade-offs and limitations. These are: 

  • Running large-sample worker productivity studies
  • Piloting evaluations of AI performance on worker tasks
  • Modelling additional economic variables
  • Measuring automation impacts in real-world settings (perhaps leveraging the “staged release” of new AI systems)

Running large-sample worker productivity studies

One way to overcome the trade-off between breadth and accuracy in automation evaluations would be to simply invest in much larger-scale worker productivity studies.

For example, a large-scale study (or set of studies) could attempt to empirically measure productivity impacts across a representative sample of economically relevant tasks and occupations. If the sample is large enough, it could offer the same kinds of insights about economy-wide patterns and trends that exposure studies aim to offer — but with greater accuracy and precision.

While the costs involved would be significant, it is possible that the insights produced would warrant these costs.

Piloting evaluations of AI performance on worker tasks

Another, potentially more scalable, approach to achieving both breadth and empirical grounding could be to pilot evaluations of AI performance on a wide variety of worker tasks. These evaluations would involve having AI systems perform tasks that workers currently perform and then assessing how helpful the technology has been in terms of reducing task time or improving the quality of task outputs. 

In practice, this approach would involve treating automation evaluations the same way researchers treat evaluations on performance-based benchmarks for other AI capabilities that are not directly tied to work. The goal of this approach would be to directly assess what AI systems can do, rather than assessing (as in the case of worker productivity studies) what workers can do with AI systems.

As an illustrative example, an evaluation focused on the specific task of writing professional emails could judge how well a model performs at drafting a representative sample of professional emails (e.g. a polite rejection email, a scheduling email, a workshop invitation). Evaluation scores could then be either considered directly or converted into binary judgements about whether or not a task is “exposed.” They could even be used to judge whether or not a system is technically capable of reliably automating a task.
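A minimal sketch of the scoring step might look as follows. The tasks, scores, and threshold are invented placeholders; in practice the scores would come from human raters or a validated automated grader applied to the model’s outputs, and the threshold itself would need empirical validation:

```python
# Sketch: converting per-task performance scores into binary
# "exposed" judgements. Scores and threshold are invented placeholders;
# a real evaluation would derive scores from rated model outputs.

email_task_scores = {
    "polite rejection email": 0.92,
    "scheduling email": 0.88,
    "workshop invitation": 0.95,
    "sensitive HR escalation": 0.41,
}

EXPOSURE_THRESHOLD = 0.7  # assumed cut-off requiring validation

mean_score = sum(email_task_scores.values()) / len(email_task_scores)
print(f"Mean performance score: {mean_score:.2f}")
for task, score in email_task_scores.items():
    verdict = "exposed" if score >= EXPOSURE_THRESHOLD else "not exposed"
    print(f"{task}: {verdict}")
```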

Of course, there are significant technical challenges associated with implementing this approach. These include:

  • Translating task descriptions into clear prompts to an AI system7
  • Developing and validating efficient and reliable methods for rating AI system performance on a variety of tasks8,9

Despite these challenges, running initial pilots to develop and apply these evaluations could be a worthwhile experiment. If the early pilots are promising, then the approach could be scaled to examine a broader and more representative set of occupational tasks. 

Having the technical infrastructure to run evaluations of AI performance on worker tasks could become increasingly important as the automation capabilities of new systems advance.

Modelling additional economic variables

Beyond the breadth/accuracy trade-off, a shared limitation of the methods I have discussed so far is that neither can account for additional economic variables (such as the elasticity of demand for a worker’s outputs) that will help to determine real-world automation impacts.

One path forward here seems to be for researchers to attempt to estimate some of these additional variables and integrate those estimates into economic models.

It is not clear how far researchers can follow this path, since many of the relevant variables are themselves difficult to estimate. However, there is some early work that moves in this direction. For instance, a recent pioneering paper from a team at MIT sets out to go “beyond exposure” to evaluate which tasks it is currently cost-effective to automate with computer vision.
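The basic logic of going “beyond exposure” can be sketched in a few lines: a technically automatable task is only economically attractive to automate if the annualised cost of the system is below the labour cost it displaces. The figures below are invented, and the cost model is deliberately much simpler than the one used in the MIT paper:

```python
# Simplified sketch of a cost-effectiveness test for automating a task.
# All figures are invented for the example.

def worth_automating(annual_labour_cost: float, system_cost: float,
                     useful_life_years: float,
                     annual_operating_cost: float) -> bool:
    annualised_cost = system_cost / useful_life_years + annual_operating_cost
    return annualised_cost < annual_labour_cost

# A task absorbing 20% of a $60,000/year role:
labour_cost = 0.2 * 60_000
print(worth_automating(labour_cost, system_cost=30_000,
                       useful_life_years=5, annual_operating_cost=2_000))
# True: roughly $8,000/year to automate versus $12,000/year of labour.
```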

Measuring automation impacts in real-world settings

Another important limitation of the methods I have discussed is that they study tasks in isolation and are not capable of addressing the complexity of real-world jobs. For example, estimates of task-level productivity impacts do not allow us to infer how a worker’s overall productivity will change.

Experiments that vary AI access across teams of workers within real-world firms, over long periods of time, could potentially allow us to overcome this limitation. Researchers could observe how AI increases the overall productivity and performance of both workers and teams. In addition, these studies could potentially allow us to begin to observe how AI affects demand for labour.

Running these experiments would be costly and would require complicated negotiations with individual firms. Fortunately, however, there may be another approach to studying real-world impacts that would be more feasible.

Specifically, researchers could leverage the staged release of AI systems. Many AI companies already deploy frontier AI systems through “staged release” processes, which often involve giving different actors access to their systems at different points in time. With the cooperation of AI companies and other firms, researchers could take advantage of the variation in adoption during staged releases to estimate the effect of AI on productivity and labour demand in the real world.10,11 Because some companies will get access earlier than others, staged releases enable comparisons between companies with access and those without access.
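In its simplest form, such a comparison is a difference-in-differences estimate. The sketch below uses invented productivity figures for an early-access firm and a firm still awaiting access:

```python
# Sketch of a difference-in-differences estimate built on a staged
# release: the productivity change at firms given early access, net of
# the change at comparable firms still awaiting access. All numbers
# are invented.

early_access = {"before": 100.0, "after": 112.0}  # output per worker
late_access = {"before": 98.0, "after": 101.0}    # access not yet granted

effect = ((early_access["after"] - early_access["before"])
          - (late_access["after"] - late_access["before"]))
print(f"Estimated productivity effect of access: {effect:+.1f} units")
# +9.0 units: early-access firms improved by 12, of which 3 is
# attributed to the background trend seen at firms without access.
```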

Conclusion

Automation evaluations could prove to be a critical tool for policymakers as they work to minimise the societal harms and maximise the economic benefits of powerful AI systems. Predicting how those systems will affect the labour market is a challenge, and current methods for evaluation have limitations. It is important for policymakers and researchers to be mindful of these limitations and invest in post-deployment monitoring of impacts as well. However, researchers can also improve their predictions by running large-sample worker productivity studies, piloting evaluations of AI performance on worker tasks, modelling additional economic variables, and measuring automation impacts in real-world settings.

The author of this piece would like to thank Ben Garfinkel, Stephen Clare, John Halstead, Iyngkarran Kumar, Daniel Rock, Peter Wills, Anton Korinek, Leonie Koessler, Markus Anderljung, Ben Bucknall, and Alan Chan for helpful conversations, comments, and feedback on this work.

Sam Manning can be contacted at sa*********@go********.ai.

Source link

25Jun

The Complex Landscape of Asylum Border Procedures in the new Asylum Procedures Regulation – European Law Blog


Blogpost 31/2024

At the heart of the negotiations for the New Pact on Migration and Asylum lies one of its most contentious elements: the regulation of border procedures. During the Council negotiations, the Asylum Procedures Regulation (APR) underwent significant modifications, particularly in the provisions that regulate border procedures, to incorporate perspectives from all Member States. Despite expectations for improvements during trilogues with the Parliament, the final outcome in December 2023 witnessed a step back from many of the anticipated safeguards. Border procedures are framed in the agreed text as an important ‘migration management tool’ and as a responsibility mechanism, mandating the examination of asylum applications at the borders while asylum seekers are subject to the ‘non-entry’ fiction. This blogpost aims to examine the complex landscape of border procedures based on the final text of the APR.

 

The Arduous Negotiations on Border Procedures

The EU Pact placed a paramount emphasis on the EU’s external borders, introducing a ‘seamless link’ between all stages of the pre-entry phase: from the screening procedure, to an expanded use of asylum border procedures and, where applicable, return border procedures for rejected asylum seekers. Border procedures involve the swift processing of asylum claims at border locations, while third-country nationals are subject to the ‘non-entry’ fiction. The main reason for their implementation is to guarantee the first-entry states’ responsibility by keeping asylum seekers at the external borders and preventing secondary movements within the EU. Despite being initially regulated in only two provisions within the amended proposal for an APR (Articles 41 and 41a APR), the final text includes twelve provisions on border procedures (Articles 43-54 APR), highlighting their contentious nature during the negotiations and the difficulty of Member States in reaching an agreement.

The most difficult and divisive question during the negotiations was whether border procedures should be obligatory or voluntary. On the one hand, central EU countries sought to make the use of border procedures obligatory to prevent ‘secondary’ movements of asylum seekers and manage migration at the EU external borders. On the other hand, southern EU states opposed this, given that widespread implementation would place a further strain on their resources and overburden their capacities for processing asylum claims. In addition, they argued that whether or not to apply border procedures, as well as the categories of persons to whom these should apply, should remain a prerogative of Member States, which are best placed to decide whether a procedure is feasible given their specific circumstances.

Despite years of negotiations, with the APR text being discussed since 2016, the outcome is an extended regulation of border procedures, rendering them mandatory in some cases. This prolonged negotiation process has resulted in a complex framework with many provisions designed to accommodate the diverse interests of all involved Member States.

 

The scope of application of border procedures

Despite challenging negotiations on border procedures, the agreed text extends their scope of application (Articles 44-45 APR). Firstly, it renders their use mandatory when certain acceleration grounds are met. The mandatory application of border procedures is stipulated for applicants with a low probability of being granted international protection (a recognition rate of 20% or lower according to Union-wide average Eurostat data, Article 45 APR), those who pose potential threats to national security or public order, and cases involving applicants who mislead the authorities. Regarding the last category, the APR text foresees that, ‘after having been provided with a full opportunity to show good cause’, those considered to have intentionally misled the authorities are subject to mandatory border procedures. While this wording aims to guard against arbitrary practices, there remains a risk of wide interpretation by authorities.

Regarding the first ground, the Council took the view that an effective and meaningful border procedure should ensure that the number of persons actually channelled into it remains high. Despite proposals from the Parliament to reduce the threshold to 10%, the recognition rate of 20% remained in the final text, with a corrective mechanism introduced during the negotiations with the Parliament (Article 45 and Article 42j APR). The corrective mechanism allows authorities to deviate from this threshold if there has been a significant change in the applicant’s country of origin since the publication of the relevant Eurostat data. It also allows states to take into account significant differences between first-instance decisions and final decisions (appeals). For example, if there is a notable discrepancy indicating that many initial rejections are overturned on appeal, this could be a factor in deciding not to apply the border procedure to an applicant from that country. However, this practice introduces a nationality-based criterion for the application of border procedures which may lead to discrimination, and it also raises important issues given the significant discrepancies in the recognition rates of asylum seekers across European countries.

In addition to these obligatory cases, border procedures may be used at the discretion of authorities to examine the merits or the inadmissibility of an application under certain conditions. Specifically, this discretion applies if any of the circumstances listed in Article 42(1), points (a) to (g) and (j), and Article 42(3), point (b), are met, as well as when there is an inadmissibility ground in accordance with Article 38. This discretionary use could impede harmonization across the EU due to varying interpretations and implementations by different Member States.

Moreover, the regulation broadens the personal scope of border procedures, allowing their application following the screening, and when an application is made a) at an external border crossing point or transit zone (this was also foreseen in the APD), but also b) following apprehension in connection with an unauthorized border crossing of the external border, which means that individuals who are already within the territory of a Member State could be subjected to border procedures, and finally c) following disembarkation after a search and rescue operation (Article 43 APR).

Another important aspect discussed during the negotiations was the application of border procedures to unaccompanied minors, with agreement reached on excluding them from border procedures in all cases except on national security grounds (Article 53(1) APR). Families with minors will be included in border procedures, but with additional safeguards: the examination of their applications will be de-prioritised, and they must always reside in facilities that comply with the Reception Conditions Directive (RCD). Specifically, Article 44(3) APR foresees that, where the number of applicants exceeds the Member State’s adequate capacity level, priority shall be given to applications of certain third-country nationals other than minor applicants and their family members. Conversely, once admitted to a border procedure, priority shall be given to the examination of the applications of minor applicants and their family members. Finally, vulnerable individuals will be exempted from border procedures only where it is assessed that the ‘necessary support’ cannot be provided to applicants with special reception or procedural needs (Article 53(2) APR).

 

The concept of adequate capacity

In exchange for the increased responsibility of frontline states through the wide implementation of border procedures, the APR introduces the concept of ‘adequate capacity’, with two distinct levels identified. The Union-level capacity is set at 30,000 (Article 46 APR), though the derivation of this figure remains unexplained. Each Member State’s capacity is then calculated on the basis of numerical factors: the Union-level figure is multiplied by the sum of irregular crossings of the external border, arrivals following search and rescue operations, and refusals of entry at the external border in the Member State concerned during the previous three years, and the result is divided by the sum of the same three figures for the Union as a whole during the same period, according to the latest available Frontex and Eurostat data (Article 47 APR). Only applications subject to the border procedure are counted towards reaching the adequate capacity. The calculation is set out schematically below.
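Stripped of the legislative drafting, the Article 47 APR calculation reduces to a simple proportion. The symbols below are shorthand introduced here for clarity; they do not appear in the Regulation:

$$AC_{MS} = 30{,}000 \times \frac{C_{MS} + S_{MS} + R_{MS}}{C_{EU} + S_{EU} + R_{EU}}$$

where $C$ denotes irregular crossings of the external border, $S$ arrivals following search and rescue operations, and $R$ refusals of entry at the external border, each totalled over the previous three years, for the Member State concerned ($MS$) and for the Union as a whole ($EU$).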

Once ‘adequate capacity’ is reached (Article 48 APR), the Commission will be notified and will have to examine whether the state is identified as being under migratory pressure according to the Asylum and Migration Management Regulation. In such a case, states will be able to derogate from the provisions that mandate the use of border procedures and, for example, choose to keep asylum seekers at the borders and refer them to regular asylum procedures, or transfer them within the territory and once again apply regular asylum procedures. However, such authorisation will not exempt the Member State from the obligation to examine in the border procedure applications made by applicants who are considered a danger to national security or public order.

The introduction of the concept of ‘adequate capacity’ was designed to render the prescribed use of border procedures responsive to the needs of, and migratory pressures on, first-entry states and in this way to secure their buy-in. However, the final provisions demonstrate that the calculation of ‘adequate capacity’ is rather complex, while relying solely on numerical data and overlooking the specific characteristics of arrivals and the actual capacity of first-entry countries. It seems that, in essence, this concept was added to ensure ‘predictability’ by making sure that southern states fulfil their responsibilities by examining a minimum number of applications through border procedures.

In addition, this will in practice incentivise Member States to use border procedures even more extensively in order to reach their ‘adequate capacity’, in detention or other designated spaces created for these procedures, turning the process into a ‘lottery’ largely dependent on the timing of arrivals. If a person arrives before the ‘adequate capacity’ is reached, they will most probably be subjected to border procedures. Conversely, if they are fortunate enough to arrive once the capacity is reached, their case will be examined under a regular asylum procedure with more safeguards. Finally, this approach also potentially hinders harmonisation by prioritising national-level exceptional measures over solidarity and relocation in times of pressure.

 

Rights at Risk

Although border procedures were initially implemented exceptionally in some Member States to address the 2015-2016 refugee ‘crisis,’ this practice has become the ‘norm’ in certain Member States, such as Greece and Italy, where they are routinely applied, even in situations with no notable increase in arrivals. It is expected that their use will rise as border procedures become mandatory for certain categories of asylum seekers.

Border procedures have been described as sub-standard procedures, due to the fast processing of asylum claims, the locations where these procedures are implemented, and the legal fiction of ‘non-entry’, under which asylum seekers are considered not to have entered the territory while their claim is examined in a border procedure. This provision is maintained in the final text (Article 43(2) APR). The legislation therefore creates avenues for disentangling an asylum seeker’s physical presence on the territory from their legal presence. As scholars have pointed out, this legal fiction justifies the creation of ‘liminal’ spaces or ‘anomalous’ zones where common legal rules do not fully apply. Notably, Article 54 APR allows their implementation within the territory, justifying the application of the ‘non-entry’ fiction even in locations far away from the actual territorial border. By shifting the border inwards, entire areas are treated as ‘borders’, and asylum seekers in these locations are subjected to a different, often more restrictive, set of rights compared to those who apply for asylum through regular in-country procedures. This practice can imperil several key rights of asylum seekers, as described below.

 

Towards more detention

During border procedures, asylum seekers should be kept at or close to the borders, leading to increased and systematic detention or other area-based restrictions. Within the APR, detention is not clearly prescribed, but neither is it precluded (Article 54 APR). The legal basis for imposing detention during border procedures can, however, be found in the agreed Reception Conditions Directive, where it is envisaged that detention may be imposed ‘in order to decide, in the context of a procedure, on the applicant’s right to enter the territory’ (Article 8c RCD). The extent to which policies of non-entry undermine the right to liberty and freedom of movement has been raised many times in the case law of the CJEU, and in some cases of the ECtHR, where the case law on detention to prevent unauthorised entry (Article 5(1)(f) ECHR) seems rather controversial. What is important to note, though, is that the ‘non-entry’ fiction, in conjunction with the failure to clarify the reception conditions applicable in border procedures (Article 54 APR), may lead to increased and routinised detention practices in EU external states.

 

The issue of legal aid

The question of free legal assistance in border procedures has been another area of contention during the negotiations. While the European Parliament stressed its importance, the Member States were against expanding it to the first instance procedure due to financial and administrative constraints. A compromise solution was agreed offering free legal counseling for the administrative procedure (interview), excluding representation and allowing flexibility for Member States (Article 16 APR).

As outlined in the new APR (Article 16), legal counseling includes guidance and explanations of the administrative procedure, including information on rights and obligations during the process. Additionally, the legal counsellor will offer assistance with lodging the application as well as guidance on the different examination procedures and the reasons for their application e.g. admissibility rules or when someone is referred to accelerated or border procedures. However, this form of assistance does not extend to escorting individuals during the asylum interview, preparing them for the interview, or submitting legal memos at the first instance procedure.

In contrast, legal assistance and representation, which is applicable in the appeal procedure (Article 17 APR), goes further, including the preparation of procedural documents and active participation in the hearing. Despite the supposed extension of legal aid, highlighted in a dedicated section (Section III), its provision at first instance remains in the form of counselling, marking a notable step back from the Parliament’s initial proposal. Furthermore, in practice, access both to counselling and to legal assistance may be limited due to the locations where border procedures take place, such as detention facilities or remote locations near the borders. This situation underscores potential challenges in ensuring effective legal support within border procedures.

 

The right to asylum and protection from refoulement

Other rights that may be undermined in the context of border procedures are the right to asylum and the protection from refoulement. These rights may be compromised primarily due to the limited procedural safeguards applicable in border procedures, such as the very short time-limits (as stipulated in Article 51 APR, the border procedure shall be as short as possible and last a maximum of 12 weeks), combined with limited access to legal assistance owing to the locations where border procedures take place (detention or de facto detention), all of which may significantly impact the overall quality of the asylum procedure.

In addition, applying border procedures to vulnerable applicants raises concerns that their special procedural needs may not be appropriately addressed. These individuals shall be provided with the necessary support to enable them to benefit from their rights. However, the notion of ‘necessary support’ remains undefined in the agreed text. It seems to relate mainly to special reception needs and to the locations where border procedures are implemented, the assumption being that border procedures are appropriate for applicants with special procedural needs unless ‘the necessary support cannot be provided in the locations referred to in Article 54’. Failure to provide special procedural guarantees to asylum seekers who require them directly impacts the quality and effectiveness of the asylum procedure.

Finally, the right to appeal is modified in the APR. According to Article 68 APR, the appeal will not have suspensive effect when the case is examined under border procedures. Some guarantees should nevertheless be preserved in this case, such as the possibility for the applicant to request a right to remain within a time-limit of at least 5 days and the provision of interpretation, information and free legal assistance (Article 68 (3) a (ii) in conjunction with Article 68 (5) APR). Even though it is positive to at least ensure that these guarantees are applicable in border procedures, the time-limit of 5 days to prepare and lodge an appeal and an application to request the right to remain may not be enough to ensure an effective remedy in practice.

 

Concluding Observations

The extensive regulation of border procedures in the final APR underscores their role as a crucial ‘migration management tool’. The insistence, during negotiations, on upholding border procedures at any cost resulted in intricate and complex provisions, emphasising their importance in ensuring the responsibility of first-entry states. However, by containing asylum seekers at external borders, the EU risks exacerbating existing deficiencies, leading to overcrowded reception and detention centres and, consequently, violations of human rights. This directly impacts both asylum seekers, who will have to navigate asylum procedures with limited safeguards, and states grappling with overburdened capacities. As these rules take shape, a focus on rights-based interpretations and increased judicial oversight and monitoring are essential to safeguard the principles of fairness and respect for human rights at the borders.



Source link

10Jun

Research Scholar (Technical Research) | GovAI Blog


Note: There is a single, shared application form and application process for all Research Scholar position listings.

About the Team

GovAI was founded to help humanity navigate the transition to a world with advanced AI. Our first research agenda, published in 2018, helped define and shape the nascent field of AI governance. Our team and affiliate community possess expertise in a wide variety of domains, including AI regulation, responsible development practices, compute governance, AI company corporate governance, US-China relations, and AI progress forecasting.

GovAI researchers have closely advised decision makers in government, industry, and civil society. Our researchers have also published in top peer-reviewed journals and conferences, including International Organization, NeurIPS, and Science. Our alumni have gone on to roles in government, in both the US and UK; top AI companies, including DeepMind, OpenAI, and Anthropic; top think tanks, including the Centre for Security and Emerging Technology and RAND; and top universities, including the University of Oxford and the University of Cambridge.

Although we are based in Oxford, United Kingdom — and currently have an especially large UK policy focus — we also have team members in the United States and European Union.

About the Role

Research Scholar is a one-year visiting position. It is designed to support the career development of AI governance researchers and practitioners — as well as to offer them an opportunity to do high-impact work.

As a Research Scholar, you will have freedom to pursue a wide range of styles of work. This could include conducting policy research, social science research, or technical research; engaging with and advising policymakers; or launching and managing applied projects.

For example, past and present Scholars have used the role to:

Over the course of the year, you will also deepen your understanding of the field, connect with a network of experts, and build your skills and professional profile, all while working within an institutional home that offers both flexibility and support.

You will receive research supervision from a member of the GovAI team or network. The frequency of supervisor meetings and feedback will vary depending on supervisor availability, although once-a-week or once-every-two-weeks supervision meetings are typical. There will also be a number of additional opportunities for Research Scholars to receive feedback, including internal work-in-progress seminars. You will receive further support from an additional mentor chosen from within the organisation.

Note that for researchers with significant AI governance research experience, we are also hiring for Research Fellows. Research Fellow positions are longer-term roles, offering two-year renewable contracts, which place less emphasis on career exploration and more emphasis on contributing to existing or planned workstreams. There is a shared application for the Research Scholar and Research Fellow roles, so you need only submit the application once.

Highlighted Interest Area: Technical Research

In this round, we would especially like to highlight our interest in candidates who can conduct technical research to inform AI governance decisions. This type of research is sometimes known as “technical governance.”

Examples of technical governance questions include:

These kinds of questions often have foundational policy implications, but most AI governance researchers lack the technical expertise needed to answer them. For that reason, we are especially excited to receive applications from candidates with strong technical backgrounds.

Qualifications and Selection Criteria

We are open to candidates with a wide range of backgrounds. We have previously hired or hosted researchers with academic backgrounds in computer science, political science, public policy, economics, history, philosophy, and law. We are also interested in candidates with professional backgrounds in government, industry, and civil society.

For all candidates, we will look for:

  • A strong interest in using their career to positively influence the lasting impact of artificial intelligence, in line with our organisation’s mission
  • Demonstrated ability to produce excellent work (typically research outputs) or achieve impressive results
  • Self-direction and proactivity
  • The ability to evaluate and prioritise projects on the basis of impact
  • A commitment to intellectual honesty and rigour
  • Receptiveness to feedback and commitment to self-improvement
  • Strong communication skills
  • Collaborativeness and motivation to help others succeed
  • Some familiarity with the field of AI governance
  • Some expertise in a domain that is relevant to AI governance
  • A compelling explanation of how the Research Scholar position may help them to have a large impact

For candidates who are hoping to do particular kinds of work (e.g. technical research) or work on particular topics (e.g. US policy), we will also look for expertise and experience that is relevant to the particular kind of work they intend to do.

There are no educational requirements for the role. We have previously made offers to candidates at a wide variety of career stages. However, we expect that the most promising candidates will typically have either graduate degrees or relevant professional experience.

Duration, Location, and Salary

Duration

Contracts will be for a fixed 12-month term. Although renewal is not an option for these roles, Research Scholars may apply for longer-term positions at GovAI — for instance, Research Fellow positions — once their contracts end.

Location

Although GovAI is based in Oxford, UK, we are a hybrid organisation. Historically, a slight majority of our Research Scholars have actually chosen to be based in countries other than the UK. However, in some cases, we do have significant location preferences:

  • If a candidate plans to focus heavily on work related to a particular government’s policies, then we prefer that the candidate is primarily based in or near the most relevant city. For example, if someone plans to focus heavily on US federal policy, we will tend to prefer that they are based in or near Washington, DC.

  • If a candidate would likely be involved in managing projects or launching new initiatives to a significant degree, then we will generally prefer that they are primarily based out of our Oxford office.

  • Some potential Oxford-based supervisors (e.g. Ben Garfinkel) also have a significant preference for their supervisees being primarily based in Oxford.

If you have location restrictions – and concerns about your ability to work remotely might prevent you from applying – please inquire at re*********@go********.ai. Note that we are able to sponsor both UK visas and US visas.

Salary

Depending on their experience, we expect that successful candidates’ annual compensation will typically fall between £60,000 and £75,000 if based in Oxford, UK. If a Research Scholar resides predominantly in a city with a higher cost of living, their salary will be adjusted to account for the difference. As a reference point, a Research Scholar based in Washington, DC would typically receive between $85,000 and $115,000. In rare cases where salary considerations would prevent a candidate from accepting an offer, there may also be some flexibility in compensation.

Benefits associated with the role include health, dental, and vision insurance, a £5,000 (~$6,000) annual wellbeing budget, an annual commuting budget, flexible work hours, extended parental leave, ergonomic equipment, a competitive pension contribution, and 33 days of paid vacation (including public holidays).

Please inquire with re*********@go********.ai if questions or concerns regarding compensation or benefits might affect your decision to apply.

How to Apply and What to Expect

The application process consists of a written submission in the first round, a paid remote work test in the second round, and a final interview round. The interview round usually consists of one interview but might involve an additional interview in some cases. We also conduct reference checks for all candidates we interview.

Please feel free to reach out to re*********@go********.ai if you would need a decision communicated by a particular date, if you need assistance with the application due to a disability, or if you have questions about the application process.

We are committed to fostering a culture of inclusion, and we encourage individuals with underrepresented perspectives and backgrounds to apply. We especially encourage applications from women, gender minorities, people of colour, and people from regions other than North America and Western Europe who are excited about contributing to our mission. We are an equal opportunity employer.



Source link

10Jun

Research Scholar (US Policy) | GovAI Blog


Note: There is a single, shared application form and application process for all Research Scholar position listings.

About the Team

GovAI was founded to help humanity navigate the transition to a world with advanced AI. Our first research agenda, published in 2018, helped define and shape the nascent field of AI governance. Our team and affiliate community possess expertise in a wide variety of domains, including AI regulation, responsible development practices, compute governance, AI company corporate governance, US-China relations, and AI progress forecasting.

GovAI researchers have closely advised decision makers in government, industry, and civil society. Our researchers have also published in top peer-reviewed journals and conferences, including International Organization, NeurIPS, and Science. Our alumni have gone on to roles in government, in both the US and UK; top AI companies, including DeepMind, OpenAI, and Anthropic; top think tanks, including the Centre for Security and Emerging Technology and RAND; and top universities, including the University of Oxford and the University of Cambridge.

Although we are based in Oxford, United Kingdom — and currently have an especially large UK policy focus — we also have team members in the United States and European Union.

About the Role

Research Scholar is a one-year visiting position. It is designed to support the career development of AI governance researchers and practitioners — as well as to offer them an opportunity to do high-impact work.

As a Research Scholar, you will have freedom to pursue a wide range of styles of work. This could include conducting policy research, social science research, or technical research; engaging with and advising policymakers; or launching and managing applied projects.

For example, past and present Scholars have used the role to:

Over the course of the year, you will also deepen your understanding of the field, connect with a network of experts, and build your skills and professional profile, all while working within an institutional home that offers both flexibility and support.

You will receive research supervision from a member of the GovAI team or network. The frequency of supervisor meetings and feedback will vary depending on supervisor availability, although once-a-week or once-every-two-weeks supervision meetings are typical. There will also be a number of additional opportunities for Research Scholars to receive feedback, including internal work-in-progress seminars. You will receive further support from an additional mentor chosen from within the organisation.

Note that for researchers with significant AI governance research experience, we are also hiring for Research Fellows. Research Fellow positions are longer-term roles, offering two-year renewable contracts, which place less emphasis on career exploration and more emphasis on contributing to existing or planned workstreams. There is a shared application for the Research Scholar and Research Fellow roles, so you need only submit the application once.

Highlighted Interest Area: US Policy

In this round, we would especially like to highlight our interest in candidates who intend to focus on US policy and work primarily from Washington, DC.

Although the UK is currently the largest focus of GovAI’s policy work, we have expanded our US policy engagement over the past year. We are now interested in expanding it further, potentially by building up a DC-based unit of the organisation.

A DC-based Research Scholar could serve as a bridge between US policy conversations and other research happening at GovAI. They could also lead projects on US policy questions, such as:

  • What could sensible federal-level regulation of frontier AI look like?
  • Are US-led export controls likely to have their intended effects?
  • What state-level regulations are plausible – and how will they interact with regulatory activity at the federal level?

A DC-based Research Scholar could also help inform GovAI’s decisions about whether and how to expand our US policy engagement. It is possible that they would ultimately play a significant role in helping us to establish a new DC unit of the organisation after their one-year term.

Qualifications and Selection Criteria

We are open to candidates with a wide range of backgrounds. We have previously hired or hosted researchers with academic backgrounds in computer science, political science, public policy, economics, history, philosophy, and law. We are also interested in candidates with professional backgrounds in government, industry, and civil society.

For all candidates, we will look for:

  • A strong interest in using their career to positively influence the lasting impact of artificial intelligence, in line with our organisation’s mission
  • Demonstrated ability to produce excellent work (typically research outputs) or achieve impressive results
  • Self-direction and proactivity
  • The ability to evaluate and prioritise projects on the basis of impact
  • A commitment to intellectual honesty and rigour
  • Receptiveness to feedback and commitment to self-improvement
  • Strong communication skills
  • Collaborativeness and motivation to help others succeed
  • Some familiarity with the field of AI governance
  • Some expertise in a domain that is relevant to AI governance
  • A compelling explanation of how the Research Scholar position may help them to have a large impact

For candidates who are hoping to do particular kinds of work (e.g. technical research) or work on particular topics (e.g. US policy), we will also look for expertise and experience that is relevant to the particular kind of work they intend to do.

There are no educational requirements for the role. We have previously made offers to candidates at a wide variety of career stages. However, we expect that the most promising candidates will typically have either graduate degrees or relevant professional experience.

Duration, Location, and Salary

Duration

Contracts will be for a fixed 12-month term. Although renewal is not an option for these roles, Research Scholars may apply for longer-term positions at GovAI — for instance, Research Fellow positions — once their contracts end.

Location

Although GovAI is based in Oxford, UK, we are a hybrid organisation. Historically, a slight majority of our Research Scholars have actually chosen to be based in countries other than the UK. However, in some cases, we do have significant location preferences:

  • If a candidate plans to focus heavily on work related to a particular government’s policies, then we prefer that the candidate is primarily based in or near the most relevant city. For example, if someone plans to focus heavily on US federal policy, we will tend to prefer that they are based in or near Washington, DC.

  • If a candidate would likely be involved in managing projects or launching new initiatives to a significant degree, then we will generally prefer that they are primarily based out of our Oxford office.

  • Some potential Oxford-based supervisors (e.g. Ben Garfinkel) also have a significant preference for their supervisees being primarily based in Oxford.

If you have location restrictions, or if concerns about your ability to work remotely might prevent you from applying, please inquire at re*********@go********.ai. Note that we are able to sponsor both UK visas and US visas.

Salary

Depending on their experience, we expect that successful candidates’ annual compensation will typically fall between £60,000 and £75,000 if based in Oxford, UK. If a Research Scholar resides predominantly in a city with a higher cost of living, their salary will be adjusted to account for the difference. As a reference point, a Research Scholar based in Washington, DC would typically receive between $85,000 and $115,000. In rare cases where salary considerations would prevent a candidate from accepting an offer, there may also be some flexibility in compensation.

Benefits associated with the role include health, dental, and vision insurance, a £5,000 (~$6,000) annual wellbeing budget, an annual commuting budget, flexible work hours, extended parental leave, ergonomic equipment, a competitive pension contribution, and 33 days of paid vacation (including public holidays).

Please inquire with re*********@go********.ai if questions or concerns regarding compensation or benefits might affect your decision to apply.

How to Apply and What to Expect

The application process consists of a written submission in the first round, a paid remote work test in the second round, and a final interview round. The interview round usually consists of one interview but might involve an additional interview in some cases. We also conduct reference checks for all candidates we interview.

Please feel free to reach out to re*********@go********.ai if you would need a decision communicated by a particular date, if you need assistance with the application due to a disability, or if you have questions about the application process.

We are committed to fostering a culture of inclusion, and we encourage individuals with underrepresented perspectives and backgrounds to apply. We especially encourage applications from women, gender minorities, people of colour, and people from regions other than North America and Western Europe who are excited about contributing to our mission. We are an equal opportunity employer.




10Jun

Research Scholar (Special Projects) | GovAI Blog


Note: There is a single, shared application form and application process for all Research Scholar position listings.

About the Team

GovAI was founded to help humanity navigate the transition to a world with advanced AI. Our first research agenda, published in 2018, helped define and shape the nascent field of AI governance. Our team and affiliate community possess expertise in a wide variety of domains, including AI regulation, responsible development practices, compute governance, AI company corporate governance, US-China relations, and AI progress forecasting.

GovAI researchers have closely advised decision makers in government, industry, and civil society. Our researchers have also published in top peer-reviewed journals and conferences, including International Organization, NeurIPS, and Science. Our alumni have gone on to roles in government, in both the US and UK; top AI companies, including DeepMind, OpenAI, and Anthropic; top think tanks, including the Centre for Security and Emerging Technology and RAND; and top universities, including the University of Oxford and the University of Cambridge.

Although we are based in Oxford, United Kingdom — and currently have an especially large UK policy focus — we also have team members in the United States and the European Union.

About the Role

Research Scholar is a one-year visiting position. It is designed to support the career development of AI governance researchers and practitioners — as well as to offer them an opportunity to do high-impact work.

As a Research Scholar, you will have freedom to pursue a wide range of styles of work. This could include conducting policy research, social science research, or technical research; engaging with and advising policymakers; or launching and managing applied projects.

Past and present Scholars have used the role in all of these ways.

Over the course of the year, you will also deepen your understanding of the field, connect with a network of experts, and build your skills and professional profile, all while working within an institutional home that offers both flexibility and support.

You will receive research supervision from a member of the GovAI team or network. The frequency of supervisor meetings and feedback will vary depending on supervisor availability, although once-a-week or once-every-two-weeks supervision meetings are typical. There will also be a number of additional opportunities for Research Scholars to receive feedback, including internal work-in-progress seminars. You will receive further support from an additional mentor chosen from within the organisation.

Note that for researchers with significant AI governance research experience, we are also hiring for Research Fellows. Research Fellow positions are longer-term roles, offering two-year renewable contracts, which place less emphasis on career exploration and more emphasis on contributing to existing or planned workstreams. There is a shared application for the Research Scholar and Research Fellow roles, so you need only submit the application once.

Highlighted Interest Area: Special Projects

In this round, we would especially like to highlight our interest in candidates who intend to manage projects or launch new initiatives.

Some of our most impactful Research Scholars have dedicated the majority of their time to areas other than research and policy engagement. Example projects include organising high-impact events, serving as a project manager for policy engagement work, and launching a new organisation to facilitate international dialogue.

For this reason, we are open to Research Scholar candidates who would primarily focus on applied work. As one example: we are open to candidates who are exploring launching new AI governance organisations and would benefit from the expertise and environment that GovAI can offer.

Qualifications and Selection Criteria

We are open to candidates with a wide range of backgrounds. We have previously hired or hosted researchers with academic backgrounds in computer science, political science, public policy, economics, history, philosophy, and law. We are also interested in candidates with professional backgrounds in government, industry, and civil society.

For all candidates, we will look for:

  • A strong interest in using their career to positively influence the lasting impact of artificial intelligence, in line with our organisation’s mission
  • Demonstrated ability to produce excellent work (typically research outputs) or achieve impressive results
  • Self-direction and proactivity
  • The ability to evaluate and prioritise projects on the basis of impact
  • A commitment to intellectual honesty and rigour
  • Receptiveness to feedback and commitment to self-improvement
  • Strong communication skills
  • Collaborativeness and motivation to help others succeed
  • Some familiarity with the field of AI governance
  • Some expertise in a domain that is relevant to AI governance
  • A compelling explanation of how the Research Scholar position may help them to have a large impact

For candidates who are hoping to do particular kinds of work (e.g. technical research) or work on particular topics (e.g. US policy), we will also look for expertise and experience that is relevant to the particular kind of work they intend to do.

There are no educational requirements for the role. We have previously made offers to candidates at a wide variety of career stages. However, we expect that the most promising candidates will typically have either graduate degrees or relevant professional experience.

Duration, Location, and Salary

Duration

Contracts will be for a fixed 12-month term. Although renewal is not an option for these roles, Research Scholars may apply for longer-term positions at GovAI — for instance, Research Fellow positions — once their contracts end.

Location

Although GovAI is based in Oxford, UK, we are a hybrid organisation. Historically, a slight majority of our Research Scholars have actually chosen to be based in countries other than the UK. However, in some cases, we do have significant location preferences:

  • If a candidate plans to focus heavily on work related to a particular government’s policies, then we prefer that the candidate is primarily based in or near the most relevant city. For example, if someone plans to focus heavily on US federal policy, we will tend to prefer that they are based in or near Washington, DC.

  • If a candidate would likely be involved in managing projects or launching new initiatives to a significant degree, then we will generally prefer that they are primarily based out of our Oxford office.

  • Some potential Oxford-based supervisors (e.g. Ben Garfinkel) also have a significant preference for their supervisees being primarily based in Oxford.

If you have location restrictions, or if concerns about your ability to work remotely might prevent you from applying, please inquire at re*********@go********.ai. Note that we are able to sponsor both UK visas and US visas.

Salary

Depending on their experience, we expect that successful candidates’ annual compensation will typically fall between £60,000 and £75,000 if based in Oxford, UK. If a Research Scholar resides predominantly in a city with a higher cost of living, their salary will be adjusted to account for the difference. As a reference point, a Research Scholar based in Washington, DC would typically receive between $85,000 and $115,000. In rare cases where salary considerations would prevent a candidate from accepting an offer, there may also be some flexibility in compensation.

Benefits associated with the role include health, dental, and vision insurance, a £5,000 (~$6,000) annual wellbeing budget, an annual commuting budget, flexible work hours, extended parental leave, ergonomic equipment, a competitive pension contribution, and 33 days of paid vacation (including public holidays).

Please inquire with re*********@go********.ai if questions or concerns regarding compensation or benefits might affect your decision to apply.

How to Apply and What to Expect

The application process consists of a written submission in the first round, a paid remote work test in the second round, and a final interview round. The interview round usually consists of one interview but might involve an additional interview in some cases. We also conduct reference checks for all candidates we interview.

Please feel free to reach out to re*********@go********.ai if you would need a decision communicated by a particular date, if you need assistance with the application due to a disability, or if you have questions about the application process.

We are committed to fostering a culture of inclusion, and we encourage individuals with underrepresented perspectives and backgrounds to apply. We especially encourage applications from women, gender minorities, people of colour, and people from regions other than North America and Western Europe who are excited about contributing to our mission. We are an equal opportunity employer.




10Jun

Research Fellow | GovAI Blog


About the Team

GovAI was founded to help humanity navigate the transition to a world with advanced AI. Our first research agenda, published in 2018, helped define and shape the nascent field of AI governance. Our team and affiliate community possess expertise in a wide variety of domains, including AI regulation, responsible development practices, compute governance, AI company corporate governance, US-China relations, and AI progress forecasting.

GovAI researchers have closely advised decision makers in government, industry, and civil society. Our researchers have also published in top peer-reviewed journals and conferences, including International Organization, NeurIPS, and Science. Our alumni have gone on to roles in government, in both the US and UK; top AI companies, including DeepMind, OpenAI, and Anthropic; top think tanks, including the Centre for Security and Emerging Technology and RAND; and top universities, including the University of Oxford and the University of Cambridge.

Although we are based in Oxford, United Kingdom — and currently have an especially large UK policy focus — we also have team members in the United States and the European Union.

About the Role

Research Fellows will conduct research into open and important questions that bear on AI governance. This research could take the form of reports, policy memos, academic papers, blog posts, or whatever format is most conducive to impact. Research Fellows may also spend a substantial portion of their time engaging in direct policy advising.

We are interested in candidates with a range of academic and professional backgrounds, who have a demonstrated ability to produce excellent research and care deeply about the lasting impacts of AI on the world, in line with our mission.

Research Fellows are expected to work under the guidance of a Senior Research Fellow, but have substantial flexibility in project selection. They are also expected to offer supervision and mentorship to junior researchers, such as our Summer and Winter Fellows. Collaboration with other researchers both inside and outside of GovAI is encouraged.

We are committed to supporting the work of Research Fellows by offering expert guidance, funding for projects, productivity tools, limited obligations on one’s time, access to a broad network of experts and potential collaborators, and opportunities to communicate one’s research to policymakers and other audiences.

For promising researchers who lack sufficient experience conducting AI governance research, we may consider instead offering one-year visiting Research Scholar positions that are intended to support professional development. There is a shared application for the Research Scholar and Research Fellow roles.

Areas of Interest

We are open to work on a broad range of topics. To get a sense of our focus areas, you may find it useful to read our About page or look at examples listed on our Research page. Broad topics of interest include — but are not limited to — responsible AI development and release practices, AI regulation, international governance, compute governance, and risk assessment and impact forecasting.

Please note that we are specifically open to hiring researchers who intend to conduct technical research, so long as this technical research is relevant to AI governance. 

Qualifications and Selection Criteria

We are open to candidates with a wide range of academic or professional backgrounds. We have previously hired or hosted researchers with backgrounds in computer science, public policy, political science, economics, history, philosophy, and law. 

You might be a particularly good fit if you have:

  • Demonstrated ability to produce excellent research, preferably (but not necessarily) within the domain of AI governance
  • Deep interest in the lasting implications of artificial intelligence for the world, in line with our organisation’s mission
  • Established expertise in a domain with significant AI governance relevance
  • Self-directedness and desire for impact
  • Commitment to intellectual honesty and rigour
  • Good judgement regarding the promise and importance of different research directions
  • Excellent communication and collaboration skills
  • Proactivity and commitment to professional growth
  • Strong interest in mentorship
  • Broad familiarity with the field of AI governance

There are no specific education or experience requirements for the role, although we expect that the most promising candidates will typically possess multiple years of relevant research or policy experience.

Duration, Location and Salary

Contracts are full-time and fixed-term for two years, with the possibility of renewal.

We typically prefer for Research Fellows to work primarily from our office in Oxford, UK. However, we also consider applications from strong candidates who are only able to work remotely. In cases where a Research Fellow’s work would be specifically relevant to actors in another part of the world (such as Washington, DC), we will also often prefer that the Research Fellow works from that part of the world.

We are able to sponsor visas in the UK and the US.

Depending on their experience, we expect that successful candidates’ annual compensation will — if they are based in Oxford, UK — typically fall between £60,000 and £80,000. If a Research Fellow resides predominantly in a city with a higher cost of living, their salary will be adjusted to account for the difference. As a reference point, a Research Fellow based in Washington, DC would typically receive between $85,000 and $120,000. In rare cases where salary considerations would prevent a candidate from accepting an offer, there may also be some flexibility in compensation.

Benefits associated with the role include health, dental, and vision insurance, a £5,000 (~$6,000) annual wellbeing budget, an annual commuting budget, flexible work hours, extended parental leave, ergonomic equipment, a competitive pension contribution, and 33 days of paid vacation (including public holidays).

Please inquire with re*********@go********.ai if questions or concerns regarding compensation or benefits might affect your decision to apply.

How to Apply and What to Expect

The application process consists of a written submission in the first round, a paid remote work test in the second round, and a final interview round. Candidates who pass through the second round should expect to participate in a pair of interviews and may also be asked to produce additional written material. We also conduct reference checks for all candidates we interview.

Please feel free to reach out to re*********@go********.ai if you would need a decision communicated by a particular date, if you need assistance with the application due to a disability, or if you have questions about the application process.

We are committed to fostering a culture of inclusion, and we encourage individuals with underrepresented perspectives and backgrounds to apply. We especially encourage applications from women, gender minorities, people of colour, and people from regions other than North America and Western Europe who are excited about contributing to our mission. We are an equal opportunity employer.



