20 May

What Do We Mean When We Talk About “AI Democratisation”?


This post, authored by Elizabeth Seger, describes and compares four different meanings of “AI democratisation” as highlighted in the author’s recent paper “Democratising AI: Multiple Meanings, Goals, and Methods”. It is part of a larger project by the author on the democratisation of AI.

GovAI research blog posts represent the views of their authors, rather than the views of the organisation.

What is “AI Democratisation”?

In recent months, discussion of “AI democratisation” has surged. AI companies around the world – such as Stability AI, Meta, Microsoft, and Hugging Face – are talking about their commitment to democratising AI, but it’s not always clear what they mean. The term “AI democratisation” seems to be employed in a variety of ways, causing commentators to speak past one another when discussing the goals, methodologies, risks, and benefits of AI democratisation efforts. 

This post describes four different notions of AI democratisation currently in use: democratisation of AI use, democratisation of AI development, democratisation of AI benefits, and democratisation of AI governance. Although these different concepts of democratisation often complement each other, they sometimes conflict. For example, if the public prefers that access to certain kinds of AI systems be restricted, then the “democratisation of AI governance” may call for access restrictions – but enacting these restrictions may hinder the “democratisation of AI development”.

The purpose of this post is to illustrate the multifaceted and sometimes conflicting nature of AI democratisation and provide a foundation for more productive conversations.

Democratisation of AI Use

When people speak about democratising some technology, they typically refer to democratising its use – making it easier for a wide range of people to use the technology. For example, the “democratisation of 3D printers” refers to how, over the last decade, 3D printers have become much more easily acquired, built, and operated by the general public.

The same meaning has been applied to the democratisation of AI. Stability AI, for instance, has been a vocal champion of AI democratisation. The company proudly describes its main product, the image generation model Stable Diffusion, as “a text-to-image model that will empower billions of people to create stunning art within seconds.” Microsoft similarly claims to be undertaking an ambitious effort “to democratize Artificial Intelligence (AI), to take it from the ivory towers and make it accessible to all.” A salient part of its plan is “to infuse every application that we interact with, on any device, at any point in time, with intelligence.” 

Overall, efforts to democratise AI use involve reducing the costs of acquiring and running AI tools and providing intuitive interfaces to facilitate human-AI interaction without the need for extensive training or technical know-how.

Democratisation of AI Development

However, when the AI community talks about democratising AI, they rarely limit their focus to the democratisation of AI use. Excitement seems to primarily be about democratising AI development – that is, helping a wider range of people contribute to AI design and development processes. 

Often, the idea is that tapping into a global community of AI developers will accelerate innovation and facilitate the development of AI applications that cater to diverse interests and needs. For example, Stability AI CEO Emad Mostaque advocates that “everyone needs to build [AI] technology for themselves…. It’s something that we want to enable because nobody knows what’s best [for example] for people in Vietnam besides the Vietnamese.”1 Toward this end, Stability AI has decided to open source Stable Diffusion. This means that they allow anyone to download the model, so long as they agree to terms of use, and then modify the model on their own computer. It is a move, Mostaque explains, that “puts power back into the hands of developer communities and opens the door for ground-breaking applications” by enabling widespread contributions to the technology’s development. The company motto reads “AI by the people, for the people”.
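To make this concrete, the sketch below shows roughly what downloading and running the model on one’s own computer looks like in practice, using the open-source Hugging Face diffusers library. The model identifier, hardware settings, and prompt are illustrative assumptions rather than details drawn from Stability AI’s documentation.

```python
# Minimal, illustrative sketch of running an open-source text-to-image model
# locally with the Hugging Face diffusers library. The model ID, dtype, and
# device below are assumptions for illustration; a CUDA-capable GPU is assumed.
import torch
from diffusers import StableDiffusionPipeline

# Download the publicly released weights (the user must first accept the
# model's licence terms on the hosting platform).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate an image from a text prompt, entirely on the user's own machine.
image = pipe("a watercolour painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```

Once the weights are downloaded, nothing in this workflow depends on the original developer’s servers – which is part of what enables both the development benefits described here and the governance concerns discussed later in this post.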

Other efforts to democratise AI development aim to widen the community of AI developers by making it easier for people with minimal programming experience and little familiarity with machine learning to participate. For example, a second aspect of Microsoft’s AI democratisation effort focuses on sharing AI’s power with the masses by “putting tools ‘…in the hands of every developer, every organisation, every public sector organisation around the world’ to build the AI systems and capabilities they need.” Towards this end, Microsoft – and similarly Google, H2O, and Amazon – have developed “no-code” tools that allow people to build models that are personalised to their own needs without prior coding or machine learning experience.  

Overall, various factors relevant to the democratisation of AI development include the accessibility of AI models and the computational resources used to run them, the size of AI models (since smaller models require less compute to run), opportunities for aspiring developers to upskill, and the provision of tools that enable those with less experience and expertise to create and implement their own machine learning applications.

Democratisation of AI Benefits 

A third sense of “AI democratisation” refers to democratising AI benefits, which is about facilitating the broad and equitable distribution of benefits that accrue to communities that build, control, and adopt advanced AI capabilities.2 Discussion tends to focus on the redistribution of profits generated by AI products.

The notion is nicely articulated by Microsoft’s CTO Kevin Scott: “I think we should have objectives around real democratisation of the technology. If the bulk of the value that gets created from AI accrues to a handful of companies in the West Coast of the United States, that is a failure.”3 Though DeepMind does not employ “AI democratisation” terminology, CEO Demis Hassabis expresses a similar sentiment. As reported by TIME, Hassabis believes the wealth generated by advanced AI technologies should be redistributed. “I think we need to make sure that the benefits accrue to as many people as possible – to all of humanity, ideally.” 

Profits might be redistributed, for instance, through the state, philanthropy, or the marketplace.


Democratisation of AI Governance

Finally, some discussions about AI democratisation refer to democratising AI governance. AI governance decisions often involve balancing AI-related risks and benefits to determine if, how, and by whom AI should be used, developed, and shared. The democratisation of AI governance is about distributing influence over these decisions to a wider community of stakeholders.

Motivation for democratising AI governance largely stems from concern that individual tech companies hold unchecked control over the future of a transformative technology and too much freedom to decide for themselves what constitutes safe and responsible AI development and distribution. For example, a single actor deciding to release an unsafe and powerful AI model could cause significant harm to individuals, to particular communities, or to society as a whole. 

One of Stability AI’s stated reasons for open sourcing Stable Diffusion is to avoid just such an outcome. As CEO Emad Mostaque told the New York Times, “We trust people, and we trust the community, as opposed to having a centralised, unelected entity controlling the most powerful technology in the world.” 

However, upon closer inspection, there is an irony here. In declaring that the company’s AI models will be made open source, Stability AI created a situation in which a single tech company made a major AI governance decision: the decision that a dual-use AI system should be made freely accessible to all. (Stable Diffusion is considered a “dual-use technology” because it has both beneficial and damaging applications. It can be used to create beautiful art or easily modified, for instance, to create fake and damaging images of real people.) It is not clear in the end that Stability AI’s decision to open source their models was actually a step forward for the democratisation of AI governance.

This case illustrates an important point: unlike other forms of AI democratisation, the democratisation of AI governance is not straightforwardly about accessibility.

Democratising AI Governance Is About Introducing Democratic Processes 

For the first three forms of democratisation discussed in this piece, “democratisation” is almost always used synonymously with “improving accessibility”. The democratisation of AI use is about making AI systems accessible for everyone to use. The democratisation of AI development is about making opportunities to participate in AI development widely accessible. The democratisation of AI benefits is mostly about distributing access to the wealth accrued through AI development, use, and control. 

However, importantly, the democratisation of AI governance involves the introduction of democratic processes. Democracy is not about giving every individual the power to do whatever they would like. Rather, it involves introducing processes to facilitate the representation of diverse and often conflicting beliefs, opinions, and values into decisions about how people and their actions are governed. Indeed, very often democratic decisions place restrictions on access and individual choice. Such is the case, for instance, with speed limits, firearm-ownership restrictions, and medication access controls.

Accordingly, the democratisation of AI governance is not necessarily achieved by improving AI model accessibility so that everyone has the opportunity to use or build on AI models as they like. Instead, it requires careful consideration of how diverse stakeholder interests and values can be effectively elicited and incorporated into well-reasoned AI governance decisions.

How exactly such democratic processes should take shape is too large a topic to cover here, but it is one that warrants investigation as part of a comprehensive AI democratisation effort. Promising work in this space investigates, for instance, the use of citizens’ assemblies, deliberative tools, and digital platforms to help institutions democratically navigate challenging and controversial issues in tech development and regulation.

Conclusion

This post has outlined four different notions of “AI democratisation”: the democratisation of AI use, the democratisation of AI development, the democratisation of AI benefits, and the democratisation of AI governance. Although these different forms of AI democratisation often overlap in practice, they sometimes come apart. For example, we have just seen how efforts to democratise the development of AI – for which improving AI model accessibility is key – may conflict with the democratisation of AI governance.

We should be wary, therefore, of using the term “democratisation” too loosely or, as is often the case, as a stand-in for “all things good”. We need to think carefully and specifically about what a given speaker means when they express a commitment to “democratising AI”. What goals do they have in mind? Does their proposed approach conflict with other AI democratisation goals – or with any other important ethical goals? 

If we want to move beyond ambiguous commitments to productive discussions of concrete policies and trade-offs, then it’s time to tighten the language and clarify intentions.

Acknowledgements

I would like to thank Ben Garfinkel, Toby Shevlane, Ben Harack, Emma Bluemke, Aviv Ovadya, Divya Siddarth, Markus Anderljung, Noemi Dreksler, Anton Korinek, Guive Assadi, and Allan Dafoe for their feedback and helpful discussion.



Source link

20 May

Research Engineer at Meta – Burlingame, CA


Meta Platforms, Inc. (Meta), formerly known as Facebook Inc., builds technologies that help people connect, find communities, and grow businesses. When Facebook launched in 2004, it changed the way people connect. Apps and services like Messenger, Instagram, and WhatsApp further empowered billions around the world. Now, Meta is moving beyond 2D screens toward immersive experiences like augmented and virtual reality to help build the next evolution in social technology. To apply, click “Apply to Job” online on this web page.

Research Engineer Responsibilities

  • Develop highly scalable computer technology, such as algorithms, based on state-of-the-art areas of computer science, such as machine learning and neural network methodologies.
  • Combine broad and deep knowledge of relevant software and technology research domains along with the ability to synthesize a wide range of technical requirements to make significant contributions to Meta’s feature roadmap for its cutting edge platforms.
  • Adapt Meta technology, such as machine learning and neural network algorithms and architectures, to best exploit modern parallel environments (e.g., distributed clusters, multicore SMP, and GPU).
  • Collaborate with Research Scientists to facilitate research that enables learning the semantics of data (images, video, text, audio, and other modalities).
  • Research, design and develop new algorithms and techniques to improve the efficiency and performance of Meta’s platforms.
  • Identify potential improvements in the company’s products and research and present effects of current engineering efforts on Meta’s market standing.
  • Apply knowledge of relevant research domains along with expert coding skills to platform and framework development projects.

Minimum Qualifications

  • Requires a Master’s degree in Computer Science, Computer Software, Computer Engineering, Applied Sciences, Mathematics, Analytics, Physics, or related field and four years of work experience in the job offered or in a computer-related occupation. Requires four years of experience in the following:
  • 1. Developing and debugging in C, C++, Java, Scala, or Python
  • 2. Performing research, developing an experiment, or designing a prototype in machine learning
  • 3. Machine learning and optimization
  • 4. Building systems based on machine learning or deep learning methods
  • 5. Using frameworks like PyTorch, Caffe2, TensorFlow, Theano, Keras, or Chainer
  • 6. Storage systems, distributed systems, HPC, compilers, CUDA programming, file systems, or server architectures
  • 7. Parallel modern environments (distributed clusters, multicore SMP, or GPU).

Locations

About Meta

Meta builds technologies that help people connect, find communities, and grow businesses. When Facebook launched in 2004, it changed the way people connect. Apps like Messenger, Instagram and WhatsApp further empowered billions around the world. Now, Meta is moving beyond 2D screens toward immersive experiences like augmented and virtual reality to help build the next evolution in social technology. People who choose to build their careers by building with us at Meta help shape a future that will take us beyond what digital connection makes possible today—beyond the constraints of screens, the limits of distance, and even the rules of physics. Meta is committed to providing reasonable support (called accommodations) in our recruiting processes for candidates with disabilities, long term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support. If you need support, please reach out to ac****************@fb.com.

US$222,235/year to US$240,240/year + bonus + equity + benefits

Individual pay is determined by skills, qualifications, experience, and location. Compensation details listed in this posting reflect the base salary only, and do not include bonus, equity or sales incentives, if applicable. In addition to base salary, Meta offers benefits. Learn more about benefits at Meta.



Source link

20 May

Business Development Manager – Dublin

Job title: Business Development Manager – Dublin

Company: Elk Recruitment

Job description: to help deliver improved service and innovation to the customer. Detailed knowledge of marketing and business development…

Position: Business Development Manager

Location: Dublin / Hybrid

Salary: Negotiable D.O.E.

The Job: The Business

Expected salary: €50000 – 60000 per year

Location: Dublin

Job date: Sun, 19 May 2024 07:34:03 GMT

Apply for the job now!

20 May

The Case for Including the Global South in AI Governance Discussions


This post discusses arguments for and against inviting countries from the Global South to international AI governance discussions hosted by developed countries. It suggests that the arguments for inclusion are underappreciated.

GovAI research blog posts represent the views of their authors, rather than the views of the organisation.

Introduction

In recent years, many countries have called for the international governance of AI. There is a growing sense that international coordination will be needed to manage risks from AI, which may range from copyright infringement to the proliferation of unconventional weapons.

However, many key early international discussions have taken place in forums that exclude the Global South. These exclusive forums include the G7, OECD, and Global Partnership on AI.1 The upcoming UK-hosted AI Safety Summit — whose invitees reportedly include not only China, a global AI power, but also some Global South countries further from the technological frontier — may partly break from this pattern. Still, there is no doubt that consequential AI governance discussions are heavily concentrated within developed countries.

There are a number of reasons why policymakers in developed countries may prefer, at least initially, to talk mostly amongst themselves. One argument is that smaller, more homogeneous groups can reach consensus more quickly. Another argument is that — since only a small group of countries produce most of the cutting-edge AI technology that is used globally — only a small group of countries need to coordinate to reduce most of the global risk.

However, policymakers in developed countries should not underestimate the value of including a broad range of Global South countries. As AI capabilities diffuse, the success of global governance regimes will ultimately hinge on the participation of countries from across the world. Including a broad set of countries in conversation now can help to avoid governance failures down the line. More globally inclusive conversations can also help to preempt the emergence of competing coalitions, increase the supply of expertise, and avoid the ethical problems inherent in excluding most of the world from decisions with global importance.

Why policymakers often prefer exclusive forums

Policymakers in developed countries seem, so far, to have a preference for discussing international AI governance in exclusive forums. There are three main arguments that may explain this preference:

  • Limiting the number of parties in global AI governance discussions can make it easier to reach consensus. Involving a larger and more diverse set of stakeholders will tend to make discussions less efficient and introduce additional complexity. This additional complexity could prolong — or even derail — the already difficult process of consensus-building.
  • Effective global AI governance relies especially strongly — at least for now — on coordinated action between the leading AI developer countries.  The argument here is that risks from AI emanate mostly from the small set of countries that host leading AI companies. (With the exception of China, all of these countries are in the Global North.) Therefore, at least for the time being, having this small set of countries converge on responsible policies may be sufficient to mitigate much of the global risk from AI.
  • Policy consensuses initially reached by leading AI developer countries can later spread to other countries. There is precedent for countries adopting or taking heavy inspiration from governance frameworks developed within smaller groups. Notable examples include the EU’s General Data Protection Regulation (GDPR), which produced a so-called “Brussels Effect” globally, and the OECD’s plan to implement a global minimum tax rate, which was initially discussed within the OECD and G20 before eventually expanding to include more than 130 countries.

The value of including the Global South

Although the above arguments have merit, there are also several strong — and seemingly under-appreciated — arguments for including Global South countries in key discussions. These arguments highlight the importance of early inclusion for securing future buy-in, preventing the emergence of competing coalitions, drawing on additional expertise, and avoiding the ethical problems inherent in exclusion.

  • In the future — even if not immediately — the success of global AI governance regimes will probably depend on the participation of many Global South countries. AI capabilities tend to diffuse over time, as technological progress makes AI development cheaper and easier, technical knowledge spreads, and key digital resources become freely available online. This means that AI capabilities that only a small number of countries possess today will probably eventually be possessed by a much larger portion of the globe. For certain policy issues, international governance regimes can also be undermined by even a single relevant country failing to implement effective policies. For example, if a single country fails to prevent the publication of biological design tools that are useful for building biological weapons, then this single failure could have global implications.2 Ultimately, in most cases, exclusive AI governance regimes will probably fail if they do not eventually expand to include a broader range of states.
  • Including Global South countries in governance discussions now can help to secure their committed participation in the future. Early inclusion functions as a form of diplomatic capital: countries that are actively engaged and feel valued in initial stages are more likely to stay involved and committed over the long term. Early inclusion also ensures that initial agreements do not unintentionally lock in features or framings that will make it much harder to achieve broader buy-in later on. A number of past global governance failures, such as the Multilateral Agreement on Investment (MAI) and the Anti-Counterfeiting Trade Agreement (ACTA), illustrate the risks of early exclusion. Both initiatives, led by major economies such as the UK and the US, seemingly faltered due to their exclusionary approach, which fostered mistrust and skepticism among sidelined countries. A similar sense of mistrust could ultimately hamper global AI governance efforts, particularly if initial frameworks emphasize issues — such as risks from technology proliferation — that are seen to be in tension with economic development.
  • Including Global South countries in governance discussions now can preempt the emergence of competing coalitions. If Western countries exclude Global South countries from international governance dialogues, this would leave a vacuum that could be filled by other geopolitical actors who are making concerted efforts to extend their influence. For instance, China has been diligently fostering relationships with Global South countries through ambitious initiatives like the Belt and Road. Moreover, China has recently taken a significant step by announcing the formation of an AI study group within the BRICS alliance. By neglecting to involve these nations, countries such as the US and UK might inadvertently cede influence to competitors who are more attentive to these emerging voices. This risks missing an opportunity to build a more comprehensive and unified global coalition.3
  • Including Global South countries can provide an additional source of valuable expertise. These countries — although they do not host leading AI companies — do collectively possess a great deal of expertise in policy and technology. Some of these countries have also developed unique AI expertise through the distinctive roles they play in the AI supply chain, for instance by providing services such as data gathering and labeling. Furthermore, many Global South countries have confronted a spectrum of AI-related challenges less frequently encountered in developed countries. Including experts from these countries can therefore provide valuable additional insights into the multifaceted risks posed by AI.
  • Since governance regimes crafted by leading AI developer countries will impact Global South countries, it is ethically important to give them a say in the design of these regimes. The AI products that leading developer countries produce have global effects. If a leading developer country releases a system that can be used to spread misinformation, for instance, then it may be used to spread misinformation anywhere in the world. In general, the misuse potential, biases, employment effects, and safety issues of systems released by these countries can affect all countries. On the other hand, the opportunities they offer for productivity growth, education, and healthcare can — at least with sufficient support — be harnessed by all countries as well. Although pragmatic considerations cannot be ignored, policymakers also cannot ignore the ethical issues inherent in excluding most of the world’s countries from conversations that will affect them deeply.

Conclusion

Many policymakers in developed countries have a preference for discussing international AI governance in exclusive forums. Although there are arguments that support this preference, there are also powerful — and seemingly underappreciated — arguments for ensuring that a substantial portion of early conversations include countries from the Global South. Early inclusion is vital for securing future buy-in, preventing the emergence of competing coalitions, drawing on additional expertise, and avoiding the ethical problems inherent in exclusion. 

There is, of course, a time and place for tight-knit discussions between developed countries or between great powers. Nonetheless, it would be an important mistake to exclude the Global South from the AI governance discussions that matter most. The benefits of efficiency must be balanced with the benefits of inclusivity.

The author of this piece would like to thank Ben Garfinkel and Cullen O’Keefe for their feedback.

She can be contacted at sn***@ca*.uk.



Source link

20 May

Assistant Academic Director – Business, Law, and Marketing

Job title: Assistant Academic Director – Business, Law, and Marketing

Company: Dublin Business School

Job description: Academic Director in discipline development, enhancement and innovation including opportunities for business development…

Job Title: Assistant Academic Director – Business, Law, and Marketing

Department: IR0021 Academic Programmes…

Expected salary: €58000 per year

Location: Dublin

Job date: Wed, 01 May 2024 05:03:35 GMT

Apply for the job now!

19 May

Business Insurance Specialist

Job title: Business Insurance Specialist

Company: Radius Payment Solutions

Job description: of technology innovation and we invite you along on this journey. The role … We are currently looking for a Business Insurance…, forward-thinking global business who build transformative solutions for our customers to deliver best-in-class sustainable mobility…

Expected salary: €27500 per year

Location: Dundalk, Co Louth

Job date: Fri, 12 Apr 2024 01:38:30 GMT

Apply for the job now!

19 May

Frontier AI Regulation


This post summarises a recent multi-author white paper on frontier AI regulation. We are organising a webinar about the paper on July 20th. Sign up here.

GovAI research blog posts represent the views of their authors, rather than the views of the organisation.

Summary

AI models are already having large social impacts, both positive and negative. These impacts will only grow as models become more capable and more deeply integrated into society.

Governments have their work cut out for them in steering these impacts for the better. They have a number of challenges they need to address, including AI being used for critical decision-making, without assurance that its judgments will be fair and accurate; AI being integrated into safety-critical domains, with accompanying accident risks; and AI being used to produce and spread disinformation.

In a recent white paper, we focus on one such challenge: the increasingly broad and significant capabilities of frontier AI models. We define “frontier AI models” as highly capable foundation models1 that could have dangerous capabilities sufficient to severely threaten public safety and global security. Examples of capabilities that would meet this standard include designing chemical weapons, exploiting vulnerabilities in safety-critical software systems, synthesising persuasive disinformation at scale, and evading human control.

We think the next generation of foundation models – in particular, those trained using substantially greater computational resources than any model trained to date – could have these kinds of dangerous capabilities. Although the probability that next-generation models will have these capabilities is uncertain, we think it is high enough to warrant targeted regulation. The appropriate regulatory regime may even include licensing requirements.

Effective frontier AI regulation would require that developers put substantial effort into understanding the risks their systems might pose, in particular by evaluating whether they have dangerous capabilities or are insufficiently controllable. These risk assessments would receive thorough external scrutiny and inform decisions about how and whether new models are deployed. After deployment, the extent to which the models are causing harm would need to be continually evaluated. Other requirements, such as high cybersecurity standards, would likely also be appropriate. Overall, regulatory requirements need to evolve over time. 

Based on current trends, creating frontier AI models is likely to cost upwards of hundreds of millions of dollars in compute and also require other scarce resources like relevant talent. The described regulatory approach would therefore likely only target the handful of well-resourced companies developing these models, while posing few or no burdens on other developers.
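As a rough illustration of why compute costs alone can reach this scale, the back-of-envelope sketch below multiplies out some assumed figures for total training compute, accelerator throughput, hardware utilisation, and rental price. Every number is an illustrative assumption, not an estimate from the white paper.

```python
# Back-of-envelope estimate of frontier-model training compute costs.
# All numbers are illustrative assumptions, not figures from the white paper.
training_flop = 1e26          # assumed total training compute, in FLOP
peak_flop_per_sec = 1e15      # assumed peak throughput of one accelerator, FLOP/s
utilisation = 0.4             # assumed fraction of peak throughput achieved
usd_per_gpu_hour = 2.50       # assumed rental price per accelerator-hour

accelerator_seconds = training_flop / (peak_flop_per_sec * utilisation)
accelerator_hours = accelerator_seconds / 3600
compute_cost_usd = accelerator_hours * usd_per_gpu_hour

print(f"~{accelerator_hours:.1e} accelerator-hours")
print(f"~${compute_cost_usd / 1e6:.0f} million in compute alone")
# With these assumptions: ~6.9e+07 accelerator-hours, ~$174 million.
```

Under these assumptions, the compute bill alone lands in the hundreds of millions of dollars, before counting staff, data, and experimentation costs.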

What makes regulating frontier AI challenging?

There are three core challenges for regulating frontier AI models:

  • The Unexpected Capabilities Problem: The capabilities of new AI models are not reliably predictable and are often difficult to fully understand without intensive testing. Researchers have repeatedly observed capabilities emerging or significantly improving suddenly in foundation models. They have also regularly induced or discovered new capabilities through techniques including fine-tuning, tool use, and prompt engineering. This means that dangerous capabilities could arise unpredictably and – absent requirements to do intensive testing and evaluation pre- and post-deployment – could remain undetected and unaddressed until it is too late to avoid severe harm. 
  • The Deployment Safety Problem: AI systems can cause harm even if neither the user nor the developer intends them to, for several reasons. Firstly, it is difficult to precisely specify what we want deep learning-based AI models to do, and to ensure that they behave in line with those specifications. Reliably controlling AI models’ behavior, in other words, remains a largely unsolved technical problem. Secondly, attempts to “bake in” misuse prevention features at the model level, such that the model reliably refuses to obey harmful instructions, have proved circumventable due to methods such as “jailbreaking.” Finally, distinguishing instances of harmful and beneficial use may depend heavily on context that is not visible to the developing company. Overall, this means that even if state-of-the-art deployment safeguards are adopted, robustly safe deployment is difficult to achieve and requires close attention and oversight.
  • The Proliferation Problem: Frontier AI models are more difficult to train than to use. Thus, a much wider array of actors have the resources to misuse frontier AI models than have the resources to create them. Non-proliferation of frontier AI models is therefore essential for safety, but difficult to achieve. As AI models become more useful in strategically important contexts and the costs of producing the most advanced models increase, bad actors may launch increasingly sophisticated attempts to steal them. Further, when models are open-sourced, accessing or introducing dangerous capabilities becomes much easier. While we believe that open-sourcing of non-frontier AI models is currently an important public good, open-sourcing frontier AI models should be approached more cautiously and with greater restraint.

What regulatory building blocks are needed for frontier AI regulation?

Self-regulation is unlikely to provide sufficient protection against the risks of frontier AI models: we think government intervention will be needed. The white paper explores the building blocks such regulation would need. These include:

  • Mechanisms to create and update standards for responsible frontier AI development and deployment. These should be developed via multi-stakeholder processes and could include standards relevant to foundation models overall – not just standards that exclusively pertain to frontier AI. These processes should facilitate rapid iteration to keep pace with the technology.
  • Mechanisms to give regulators visibility into frontier AI developments. These mechanisms could include disclosure regimes, monitoring processes, and whistleblower protections. The goal would be to equip regulators with the information they need to identify appropriate regulatory targets and design effective tools for governing frontier AI. The information provided would pertain to qualifying frontier AI development processes, models, and applications.
  • Mechanisms to ensure compliance with safety standards. Self-regulation efforts, such as voluntary certification, may go some way toward ensuring compliance with safety standards by frontier AI model developers. However, this seems likely to ultimately be insufficient without government intervention. Intervention may involve empowering a government authority to translate standards into legally binding rules, identify and sanction non-compliance with rules, or perhaps establish and implement a licensing regime for the deployment and potentially the development of frontier AI models. Designing a well-balanced frontier AI regulation regime is a difficult challenge. Regulators would need to be sensitive to the risks of overregulation and stymieing innovation on the one hand, and the risks of moving too slowly (relative to the pace of AI progress) on the other.

What could safety standards for frontier AI development look like?

The white paper also suggests some preliminary, minimum safety standards for frontier AI development and release: 

  • Conducting thorough risk assessments informed by evaluations of dangerous capabilities and controllability. This would reduce the risk that deployed models possess unknown dangerous capabilities or behave unpredictably and unreliably.
  • Engaging external experts to apply independent scrutiny to models. External scrutiny of the safety and risk profiles of models would both improve assessment rigour and foster accountability to the public.
  • Following shared guidelines for how frontier AI models should be deployed based on their assessed risk. The results from risk assessments should determine whether and how a model is deployed and what safeguards are put in place. Options could range from deploying the model without restriction to not deploying it at all until risks are sufficiently reduced. In many cases, an intermediate option – deployment with appropriate safeguards, such as restrictions on the ability of the model to respond to risky instructions – will be appropriate.
  • Monitoring and responding to new information on model capabilities. The assessed risk of deployed frontier AI models may change over time due to new information and new post-deployment enhancement techniques. If significant information on model capabilities is discovered post-deployment, risk assessments should be repeated and deployment safeguards should be updated.

Other standards would also likely be appropriate. For example, frontier AI developers could be required to uphold high cybersecurity standards to ward off attempts at theft. In addition, these standards should likely change substantially over time as we learn more about the risks from the most capable AI systems and the means of mitigating those risks.

Uncertainties and next steps

While we feel confident that there is a need for frontier AI regulation, we are unsure about many aspects of how an appropriate regulatory regime should be designed. Relevant open questions include:

  • How worried should we be about regulatory capture? What can be done to reduce the chance of regulatory capture? For example, how could regulator expertise be bolstered? How much could personnel policies help – such as cool-off periods between working for industry and for a regulatory body?
  • What is the appropriate role of tort liability for harms caused by frontier models? How can it best complement regulation? Are there contexts in which it could serve as a substitute for regulation?
  • How can the regulatory regime be designed to deal with the evolving nature of the industry and the evolving risks? What can be done to ensure that ineffective or inappropriate standards are not locked in?
  • How, in practice, should a regime like this be implemented? For instance, in the US, is there a need for a new regulatory body? If the responsibility should be located within an existing department, which one would be the most natural home? In the EU, can the AI Act be adapted to deal appropriately with frontier AI and not just lower-risk foundation models? How does the proposed regulatory scheme fit into the UK’s Foundation Model Taskforce?
  • Which other safety standards, beyond the non-exhaustive list we suggest, should frontier AI developers follow?
  • Would a licensing regime be both warranted and feasible right now? If a licensing regime is not needed now but may be in the future, what exactly should policymakers do today to prepare and pave the way for future needs?
  • In general, how should we account for uncertainty about the level of risks the next generation of foundation models will pose? It is possible that the next generation of frontier models will be less risky than we fear. However, given the uncertainty and the need to prepare for future risks, we think taking preliminary measures now is the right approach. It is not yet clear what the most useful and cost-effective preliminary measures will be.

We are planning to spend a significant amount of time and effort exploring these questions. We strongly encourage others to do the same. Figuring out the answers to these questions will be extremely difficult, but deserves all of our best efforts.

Authorship statement

The white paper was written with co-authors from a number of institutions, including authors employed by industry actors that are actively developing state-of-the-art foundation models (Google DeepMind, OpenAI, and Microsoft). Although authors based in labs can often contribute special expertise, their involvement also naturally raises concerns that the content of the white paper will be biased toward the interests of companies. This suspicion is healthy. We hope that readers will be motivated to closely engage with the paper’s arguments, take little for granted, and publicly raise disagreements and alternative ideas.

The authors of this blog post can be contacted at ma***************@go********.ai, jo***********@go********.ai, and ro***********@go********.ai.



Source link

19 May

Programme Manager, School of Business

Job title: Programme Manager, School of Business

Company: University College Dublin

Job description: Applications are invited for a permanent post of a Programme Manager within UCD School of Business. The post holder… on student experience. The post holder will primarily support Bec Economics and Finance and Bachelor of Business and Law…

Expected salary: €49131 – 66532 per year

Location: Dublin

Job date: Fri, 12 Apr 2024 03:33:19 GMT

Apply for the job now!
