20May

Senior Data Engineer at DoiT International – Remote EMEA


Location

Our Software Engineer will be an integral part of our Engineering teams in EMEA.

This role is based remotely in EMEA in one of our legal entities: UK, Ireland, Israel, Estonia, or Spain (for full-time employment). The role is also available to contractors in other Eastern European locations.

 

Who We Are
DoiT helps fast-growing, digital native companies globally to harness public cloud technology and services to drive business growth. A full-service provider of multi-cloud technology and expertise, DoiT combines the power of intelligent software with deep expertise in Kubernetes, artificial intelligence, and more to deliver the true promise of the cloud at peak efficiency – with ease, not cost. 

An award-winning strategic partner of AWS, Google Cloud, and Microsoft Azure with $2B cloud spend under management, DoiT works alongside more than 3,000 customers in 70 countries. At DoiT, you’ll join a growing team of committed, experienced, and collaborative “Do’ers” who are passionate about solving the most complex cloud challenges. 


The Opportunity

As a Senior Software Engineer, you will work on improving our Cloud Management Platform and implementing new features. You’ll work closely with product managers and developers from other teams, and participate in product-building decisions.

Our standard stack includes Cloud Composer, Firestore, BigQuery, Firebase, Pub/Sub, Go, Python, React, Google App Engine, and an extensive list of auxiliary technologies.

Here are some things we’ve worked on recently that might give you a better sense of what you’ll be doing day-to-day:

  • Built machine learning and forecasting pipelines to predict cloud infrastructure cost
  • Architected large-scale distributed systems to provide actionable recommendations
  • Developed an advanced cloud analytics platform to extract cost/usage insights
  • Upgraded our codebase to Go 1.21 & Python 3.11
  • Set up a new CD pipeline delivering incremental updates many times every day
  • Created a proactive monitoring system for known cloud issues and quota usage
  • Invented a new way for companies to purchase AWS reservations
  • Ingested AWS data in various formats for use by all stakeholders in our platform

 

Responsibilities

  • Implementing features. From proposal, through spec and implementation, to maintenance. You’re expected to propose things that you think would be a good addition to the product
  • Reviewing code. We believe in code reviews. And you will soon start reviewing pull requests as well!
  • Improving the health of the codebase. We’re mindful of accumulating technical debt. We dedicate one day per week to housekeeping
  • Providing feedback. The team plans and discusses the upcoming work. We provide feedback to each other, trying to find challenges and unknowns as early as possible

 

Qualifications

  • 7+ years of software development experience in a Data Engineering role
  • Advanced degree in computer science, engineering, or statistics preferred but not mandatory
  • Capable of coding, or learning to code, in any language necessary to fulfill product needs (Go and Python preferred)
  • Capable of building data pipelines and training machine learning models
  • Experience with Apache Airflow (Cloud Composer) would be a plus
  • Systems thinker with a passion for DevOps
  • AWS and Google Cloud experience
  • Solid SQL and NoSQL experience, preferably with Google BigQuery and Firestore
  • Ability to create and maintain production software systems
  • Ability to assist with building components of a production application
  • Ability to expand, refine, and stabilize an API
  • Ability to learn new development technologies easily
  • Fluency in written and spoken English

 

Bonus Points

  • BA/BS degree or equivalent practical experience
  • Experience with Google Cloud or AWS services in a production environment

 
Are you a Do’er?
Be your truest self. Work on your terms. Make a difference. 
 
We are home to a global team of incredible talent who work remotely, with the flexibility to keep a schedule that balances work and home life. We embrace and support leveling up your skills professionally and personally.
 
What does being a Do’er mean? We’re all about being entrepreneurial, pursuing knowledge, and having fun! Learn more about our core values on our website.
Sounds too good to be true? We thought so too, but we’re here and happy we hit that ‘apply’ button. Check out our Glassdoor page.

  • Unlimited PTO
  • Flexible Working Options
  • Health Insurance
  • Parental Leave
  • Employee Stock Option Plan
  • Home Office Allowance
  • Professional Development Stipend 
  • Peer Recognition Program

 

Many Do’ers, One Team
DoiT unites as Many Do’ers, One Team, where diversity is more than a goal—it’s our strength. We actively cultivate an inclusive, equitable workplace, recognizing that each unique perspective enhances our innovation. By celebrating differences, we create an environment where every individual feels valued, contributing to our collective success.

#LI-Remote




20May

Business Analyst

Job title: Business Analyst

Company: Glanbia

Job description: integration, business process automation, increased efficiency and innovation. Prepare business cases and conduct requirement…Business Analyst 18 Month Fixed Term Contract Abbey Quarter, Co Kilkenny (hybrid) Tirlán Tirlán, is a world…

Expected salary:

Location: Kilkenny

Job date: Thu, 16 May 2024 22:43:19 GMT

Apply for the job now!

20May

Sharing Powerful AI Models | GovAI Blog


This post, authored by Toby Shevlane, summarises the key claims and implications of his recent paper “Structured Access to AI Capabilities: An Emerging Paradigm for Safe AI Deployment.”

GovAI research blog posts represent the views of their authors, rather than the views of the organisation.

Sharing powerful AI models

There is a trend within AI research towards building large models that have a broad range of capabilities. Labs building such models face a dilemma when deciding how to share them.

One option is to open-source the model. This means publishing it publicly and allowing anyone to download a copy. Open-sourcing models helps the research community to study them and helps users to access their beneficial applications. However, the open source option carries risks: large, multi-use models often have harmful uses too. Labs (especially industry labs) might also want to maintain a competitive edge by keeping their models private.

Therefore, many of the most capable models built in the past year have not been shared at all. Withholding models carries its own risks. If outside researchers cannot study a model, they cannot gain the deep understanding necessary to ensure its safety. In addition, the model’s potential beneficial applications are left on the table.

Structured access tries to get the best of both approaches. In a new paper, which will be published in the Oxford Handbook on AI Governance, I introduce “structured access” and explain its benefits.

What is structured access?

Structured access is about allowing people to use and study an AI system, but only within a structure that prevents undesired information leaks and misuse. 

OpenAI’s GPT-3 model, which is capable of a wide range of natural language processing tasks, is a good example. Instead of allowing researchers to download their own copies of the model, OpenAI has allowed researchers to study copies that remain in its possession. Researchers can interact with GPT-3 through an “application programming interface” (API), submitting inputs and then seeing how the AI system responds. Moreover, subject to approval from OpenAI, companies can use the API to build GPT-3 into their software products.
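To make the interaction pattern concrete, here is a minimal sketch of what API-mediated access looks like in practice. It uses the completion endpoint of the openai Python package as it existed at the time; the model name, prompt, and parameters are illustrative choices, not taken from the post.

```python
# Minimal sketch of structured access through an API: the model stays on the
# provider's servers, and the user only submits inputs and receives outputs.
# Model name, prompt, and parameters here are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # access is gated by a credential the provider can revoke

response = openai.Completion.create(
    engine="davinci",  # a hosted model; its weights are never downloaded
    prompt="Summarise the Collingridge dilemma in one sentence.",
    max_tokens=60,
    temperature=0.7,
)

print(response.choices[0].text)
```

The key property is that the parameters never leave the provider’s servers: the user holds only a revocable credential, which is what lets the developer monitor usage and withdraw access.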

This setup gives the AI developer much greater control over how people interact with the model. It is common for AI researchers to open-source a model and then have no way of knowing how people are using it, and no way of preventing risky or unethical applications.

With structured access, the developer can impose rules on how the model should be used. For example, OpenAI’s rules for GPT-3 state that the model cannot be used for certain applications, such as targeted political advertising. The AI developer can then enforce those rules, by monitoring how people are using the model and cutting off access to those who violate the rules.

The development of new technologies often runs into the “Collingridge dilemma”. The theory is that, by the time the impacts of a technology have become apparent, they are already irreversible. Structured access helps to fight against this. If the developer learns that their model is having serious negative impacts, they can withdraw access to the model or narrow its scope.

At the same time, structured access allows the research community to better understand the model – including its potential risks. There has been plenty of valuable research into GPT-3, relying simply on the API. For example, a recent paper analysed the “truthfulness” of the model’s outputs, testing GPT-3 on a new benchmark. Other research has explored GPT-3’s biases.

The hope is that we can accelerate the understanding of a model’s capabilities, limitations, and pathologies, before the proliferation of the model around the world has become irreversible.

How can we go further?

Although there are existing examples of structured access to AI models, the new paradigm has not yet reached maturity. There are two dimensions along which structured access can be improved: (1) the depth of model access for external researchers, and (2) the broader governance framework.

GPT-3 has demonstrated how much researchers can accomplish with a simple input-output interface. OpenAI has also added deeper access to GPT-3 as time goes on. For example, instead of just getting GPT-3’s token predictions, users can now get embeddings too. Users can also modify the model by fine-tuning it on their own data. GPT-3 is becoming a very researchable artefact, even though it has not been open-sourced.

The AI community should go even further. An important question is: how much of a model’s internal functioning can we expose without allowing an attacker to steal the model? Reducing this tradeoff is an important area for research and policy. Is it possible, for example, to facilitate low-level interpretability research on a model, even without giving away its parameter values? Researchers could run remote analyses of the model, analogous to privacy-preserving analysis of health datasets. They submit their code and are sent back the results. Some people are already working on building the necessary infrastructure – see, for example, the work of OpenMined, a privacy-focussed research community.
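As a toy sketch of this “submit code, receive results” pattern, consider the gateway below. Every name in it is hypothetical; it is not OpenMined’s API, just an illustration of an interface that lets researchers probe a model’s input-output behaviour without ever seeing its parameters.

```python
# Hypothetical sketch of remote, structured model analysis: researcher code
# runs behind a gateway, and only approved result types are returned.
# None of these names come from a real library.
from typing import Callable, List


class PrivateModel:
    """Stands in for a model whose parameters must not leak."""

    def __init__(self) -> None:
        self._weights = [0.5, -1.2, 3.0]  # secret: never returned to callers

    def predict(self, x: float) -> float:
        # A toy polynomial model; a real deployment would wrap a large network.
        return sum(w * x ** i for i, w in enumerate(self._weights))


class AnalysisGateway:
    """Runs researcher-supplied analyses, releasing only scalar summaries."""

    def __init__(self, model: PrivateModel) -> None:
        self._model = model

    def run(self, analysis: Callable[[Callable[[float], float]], float]) -> float:
        # The analysis receives only the predict function, not the model object,
        # so it can study input-output behaviour but cannot read the weights.
        result = analysis(self._model.predict)
        if not isinstance(result, float):
            raise ValueError("only scalar summaries may leave the enclave")
        return result


# A researcher submits an analysis: the mean output over a small probe set.
def mean_response(predict: Callable[[float], float]) -> float:
    probes: List[float] = [0.0, 0.5, 1.0]
    return sum(predict(x) for x in probes) / len(probes)


gateway = AnalysisGateway(PrivateModel())
print(gateway.run(mean_response))  # the weights themselves stay private
```

A real system would of course need sandboxing, rate limits, and checks that the released statistics do not themselves leak the parameters, but the division of roles is the same.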

Similarly, labs could offer not just the final model, but multiple model checkpoints corresponding to earlier stages in the training process. This would allow outside researchers to study how the model’s capabilities and behaviours evolved throughout training – as with, for example, DeepMind’s recent paper analysing the progression of AlphaZero. Finally, AI developers could give researchers special logins, which give them deeper model access than commercial users.

The other area for improvement is the broader governance framework. For example, with GPT-3, OpenAI makes its own decisions about what applications should be permitted. One option could be to delegate these decisions to a trusted and neutral third party. Eventually, government regulation could step in, making certain applications of AI off-limits. For models that are deployed at scale, governments could require that the model can be studied by outsiders.

Structured access is highly complementary to other governance proposals, such as external audits, red teams, and bias bounties. In a promising new development, Twitter has launched a collaboration with OpenMined to allow its models (and datasets) to be audited by external groups in a structured way. This illustrates how structured access to AI models can provide a foundation for new forms of governance and accountability.

Industry and academia

I see structured access as part of a broader project to find the right relationship between AI academia and industry, when it comes to the development and study of large, multi-use models.

One possible arrangement is for certain academic research groups and industry research groups to compete to build the most powerful models. Increasingly, this arrangement looks outdated. Academics do not have the same computational resources as industry researchers, and so are falling behind. Moreover, as the field matures, building stronger and stronger AI capabilities looks less like science and more like engineering.

Instead, industry labs should help academics to play to their strengths. There is still much science to be done, without academics needing to build large models themselves. Academics are well-placed, for example, to contribute to the growing model interpretability literature. As well as being scientific in nature, such work could be extremely safety-relevant and socially beneficial. As scientists, university-based researchers are well-placed to tackle the important challenge of understanding AI systems.

This benefits industry labs, who should try to cultivate thriving research ecosystems around their models. With the rise of very large, unwieldy models, no industry lab can, working alone, truly understand and address safety or bias issues that arise in them — or convince potential users that they can be trusted. These labs must work together with academia. Structured access is a scalable way of achieving this goal.

Conclusion

This is an exciting time for AI governance. The AI community is moving beyond high-level principles and starting to actually implement new governance measures. I believe that structured access could be an important part of this broader effort to shift AI development onto a safer path. We are still in the early stages, and there is plenty of work ahead to determine exactly how structured access should be implemented.






20May

Business Development Manager – SKINCEUTICALS – ROI (Dublin based)

Job title: Business Development Manager – SKINCEUTICALS – ROI (Dublin based)

Company: L’Oréal

Job description: BUSINESS DEVELOPMENT MANAGER – SKINCEUTICALS – ROI (Dublin based) When you look at L’Oréal…, and always through digital innovation. Not only that, but taking our sustainability goals seriously; moving us to a more inclusive…

Expected salary:

Location: Dublin

Job date: Thu, 16 May 2024 23:07:13 GMT

Apply for the job now!

20May

Preliminary Survey Results: US and European Publics Overwhelmingly and Increasingly Agree That AI Needs to Be Managed Carefully


Summary

  • There is an overwhelming consensus for careful management of AI in Europe and the United States. Across the two regions, 91% of respondents agree that “AI is a technology that requires careful management”.
  • The portion of people who agree is growing. Agreement in the United States that AI needs to be managed carefully has risen by 8 percentage points since 2018.
  • The level of agreement is also strengthening. Over the past half-decade, more individuals in both Europe and the United States say they “totally agree” that AI needs to be managed carefully.

A new cross-cultural survey of public opinion

Previous research by the Centre for the Governance of AI on public opinion of artificial intelligence (AI) explored the US public’s attitudes in 2018 [1]. A new cross-cultural public opinion survey was conducted with collaborators at Cornell University, the Council on Foreign Relations, Syracuse University, and the University of Pennsylvania. We surveyed a representative sample of 13,350 individuals in ten European countries (N = 9,350) and the United States (N = 4,000) in January and February 2023. To give more timely updates on our research before the final report’s release, we will publish a series of preliminary results for select questions. In this post, we show that there is a resounding consensus across the European and US publics that AI is a technology that requires careful management.

There is an overwhelming consensus that AI requires careful management

We asked respondents to what extent they agreed with the statement that “Artificial Intelligence (AI) is a technology that requires careful management.” They could respond that they totally agree, tend to agree, tend to disagree, totally disagree, or don’t know, keeping the answer options aligned with previous versions of this question in other surveys [1,2].

Figure 1: Percentages of people that agree and disagree with the statement “Artificial Intelligence (AI) is a technology that requires careful management,” in Europe and the US. The results are weighted to correct for imbalances between the survey sample and the population of interest, as well as differences in population sizes within Europe in comparison to our sample sizes.

Across the entire sample, weighted by sample size of the United States and Europe, 91% agreed with the statement that “Artificial Intelligence (AI) is a technology that requires careful management.” The survey defined AI as “computer systems that perform tasks or make decisions that usually require human intelligence.” In Europe, 92% agreed that AI is a technology that requires careful management. In the United States, 91% agreed. 

The strength of this consensus is growing

Although the above finding mirrors previous findings in the United States and Europe, the level of agreement has seemingly increased in the last five years. 

Figure 2: Percentages of people that agree and disagree with the statement “Artificial Intelligence (AI) is a technology that requires careful management,” in the United States in 2018 and 2023. The results are weighted to correct for imbalances between the survey sample and the population of interest.

There is an increase in total agreement in the United States. In 2018, 83% of the US public agreed with the statement, with no significant difference in responses whether the question asked about AI, robots, or AI and robots [1]. This is an 8 percentage point increase to the 91% that agree in the United States five years later. The strength of agreement with the statement has also increased in the United States since 2018: in 2018, 52% of individuals “totally” agreed with the statement; this figure increased to 71% in 2023, a 19 percentage point rise. We also see a six percentage point decrease in the number of individuals who responded “I don’t know” to this question in 2023 in comparison to 2018, suggesting that individuals are becoming less uncertain about their views on this topic.

Figure 3: Weighted percentages of people that agree and disagree with the statement “Artificial Intelligence (AI) is a technology that requires careful management,” in Europe in 2023, and the equivalent statement about AI and robots in the 2017 Eurobarometer survey. The results are weighted to correct for imbalances between the survey sample and the population of interest, as well as differences in population sizes within Europe in comparison to our sample sizes.

In Europe, the overall increase in agreement is smaller: in 2017, a Eurobarometer study of almost 28,000 members of the European public found that 88% agreed that AI and robots need to be carefully managed (European Commission, 2017), compared with 92% in our 2023 data who agreed that this was the case for AI, an increase of almost four percentage points over six years. In Europe, the strength of agreement has increased more starkly than total agreement, with the Eurobarometer finding that 53% totally agreed in 2017, compared to 68% in 2023 in our survey. “I don’t know” responses have remained generally consistent between 2017 and 2023.

Overall the results demonstrate an increasingly strong consensus within and across the United States and Europe that AI should be carefully managed.

Conclusion

The results show that the European and US publics are increasingly aware of how important it is to carefully manage AI technology. We demonstrate the growing agreement with this question over time, as we have tracked responses longitudinally across multiple versions of the survey [3]. At the very least, the public does not seem to take a “move fast and break things” [4] approach to how AI should be managed. This finding suggests that AI developers, policymakers, academic researchers, and civil society groups need to figure out what kind of management or regulation of AI technology would meet the demands of this public consensus.

In our upcoming report, we will examine how familiarity with and knowledge of AI technology and other demographic predictors relate to our findings. We will also evaluate how a US panel who were asked this question in 2018 now responds half a decade later. Finally, we will also take a closer look at questions such as what governance issues around AI the public are most aware of and which worry them the most, and what they predict the effects of AI to be over the next decades.

Citation

Dreksler, N., McCaffary, D., Kahn, L., Mays, K., Anderljung, M., Dafoe, A., Horowitz, M.C., & Zhang, B. (2023, 17th April). Preliminary survey results: US and European publics overwhelmingly and increasingly agree that AI needs to be managed carefully. Centre for the Governance of AI. www.governance.ai/post/increasing-consensus-ai-requires-careful-management.

For more information or questions, contact no************@go********.ai

Acknowledgements

We thank many colleagues at the Centre for the Governance of AI for their input into the design of the survey questions. Thank you in particular to Carolyn Ashurst, Joslyn Barnhart-Trager, Jeffrey Ding, and Robert Trager for the in-depth workshopping and help in designing the questions. Thank you to the people at YouGov and Deltapoll who were incredibly patient and helpful during the process of completing such a large-scale survey: thank you Caitlin Collins, Joe Twyman, and Marissa Shih, and those working behind the scenes. For comments and edits on this blog post we would like to thank Ben Garfinkel. For help with translations we thank the professional translation firms employed by Deltapoll and a group of helpful people who checked these for us: Jesus Lopez, Ysaline Bourgine, Jacky Dreksler, Olga Makoglu, Marc Everin, Patryck Jarmakowicz, Sergio R., Michelle Viotti, and Andrea Polls.

Appendix

Tables of top-line results

Table 1: Percentage of people that agree and disagree with the statement “Artificial Intelligence (AI) is a technology that requires careful management,” in Europe and the US. The numeric mean is calculated based on the numeric labels of the data: totally disagree (-2), tend to disagree (-1), tend to agree (1), totally agree (2). *The Europe 2017 results are taken from the European Commission’s 2017 Eurobarometer survey report; we therefore do not have those results available to two decimal places and do not supply the numeric mean. More information on how the Eurobarometer weights its data can be found on the European Commission’s website. The appendix details information on our weighting procedure.
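As an illustration, the numeric mean described above is a weighted average of the coded responses. The short Python sketch below uses the response codes listed under “Survey questions”; the example responses and weights are placeholders rather than survey data, and “I don’t know” answers are assumed to be excluded from the mean.

```python
# Sketch: weighted numeric mean of coded Likert responses.
# Codes follow the survey's answer options; "I don't know" is assumed
# to be excluded from the mean. Responses and weights are placeholders.
CODES = {
    "Totally agree": 2,
    "Tend to agree": 1,
    "Tend to disagree": -1,
    "Totally disagree": -2,
    "I don't know": None,  # treated as missing for the numeric mean
}

def weighted_mean(responses, weights):
    """Weighted mean over substantive (non-missing) responses."""
    num = den = 0.0
    for r, w in zip(responses, weights):
        code = CODES[r]
        if code is None:
            continue
        num += w * code
        den += w
    return num / den

responses = ["Totally agree", "Tend to agree", "I don't know", "Totally agree"]
weights = [1.1, 0.9, 1.0, 0.8]  # placeholder survey weights
print(weighted_mean(responses, weights))  # ~1.68 on the -2..2 scale
```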

Methodology

Sample

US. A representative adult sample (N = 4,000) of the US public was surveyed online by YouGov between 17th January 2023 and 8th February 2023. This included 775 respondents who were re-contacted from a previous survey in 2018 (Zhang & Dafoe, 2019). The final sample was matched down from 4,095 respondents to a sampling frame on gender, age, race, and education, constructed by stratified sampling from the full 2019 American Community Survey (ACS) 1-year sample, with selection within strata by weighted sampling with replacement (using the person weights on the public use file). 4,000 respondents did not fail both attention checks. YouGov conducted additional verification and attention checks of its users.

Europe. A representative adult sample of 10,500 respondents across ten countries in Europe was surveyed between 17th January 2023 and 28th February 2023 by Deltapoll. After excluding individuals who failed both attention checks, the total sample size was 9,350 (France = 872, Germany = 870, Greece = 455, Italy = 879, the Netherlands = 452, Poland = 896, Romania = 456, Spain = 888, Sweden = 438, the United Kingdom = 3,144). Deltapoll conducted additional metadata verification of the survey respondents.

Analysis

Respondents who failed both attention checks were removed from the analysis. In the final report, we will present analysis for all questions and more detailed breakdowns by demographics and statistical tests of the relationships between variables and differences in group responses. For the United States, we will also be able to compare whether a panel sample of 775 respondents has changed their views since 2018. Upcoming analyses can be taken from the pre-analysis plan uploaded to OSF: https://osf.io/rck9p/.

US. The results for the United States were weighted using the weights supplied by YouGov. YouGov reports that “the matched cases were weighted to the sampling frame using propensity scores. The matched cases and the frame were combined and a logistic regression was estimated for inclusion in the frame. The propensity score function included age, gender, race/ethnicity, years of education, and region. The propensity scores were grouped into deciles of the estimated propensity score in the frame and post-stratified according to these deciles. The weights were then post-stratified on 2016 and 2020 Presidential vote Choice [in the case of the 2023 data], and a four-way stratification of gender, age (4-categories), race (4-categories), and education (4-categories), to produce the final weight.”

Europe. We used the R package autumn (its harvest function) to generate the weights, based on country-level age and gender targets supplied by Deltapoll. harvest generates weights by iterative proportional fitting (raking), as described in DeBell and Krosnick (2009). Where needed, we supplemented these targets with data from an Ipsos (2021) survey to determine targets for the non-binary third gender option and the “prefer not to say” response. These weights were combined with a population size weight generated from the Eurostat European Union Labour Force Survey, following the European Social Survey’s procedure for calculating population size weights.
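For readers unfamiliar with raking, the sketch below implements iterative proportional fitting in miniature: weights are repeatedly rescaled so that each variable’s weighted margins match the target proportions. It is a generic illustration of the algorithm, not the autumn package’s code; the respondents and target margins are invented.

```python
# Generic sketch of raking (iterative proportional fitting).
# Respondent attributes and target margins below are invented examples.
def rake(sample, margins, n_iter=50):
    """sample: list of dicts of respondent attributes.
    margins: {variable: {category: target_proportion}}.
    Returns one weight per respondent."""
    weights = [1.0] * len(sample)
    for _ in range(n_iter):
        for var, targets in margins.items():
            # Current weighted total of each category of this variable.
            totals = {c: 0.0 for c in targets}
            for person, w in zip(sample, weights):
                totals[person[var]] += w
            grand = sum(totals.values())
            # Rescale weights so this margin matches its target.
            for i, person in enumerate(sample):
                c = person[var]
                weights[i] *= targets[c] * grand / totals[c]
    return weights

sample = [
    {"gender": "f", "age": "18-34"},
    {"gender": "m", "age": "18-34"},
    {"gender": "f", "age": "35+"},
    {"gender": "m", "age": "35+"},
]
margins = {
    "gender": {"f": 0.5, "m": 0.5},
    "age": {"18-34": 0.3, "35+": 0.7},
}
print(rake(sample, margins))  # [0.6, 0.6, 1.4, 1.4]
```

Production weighting adds refinements such as weight trimming and convergence checks, but the core loop is the same.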

Randomisation

Generally, the order of survey questions and items was randomised in the survey. The full survey flow will be released along with the full report on OSF.

Additional information

Ethics approval

This study was ethically approved by the Institutional Review boards of the University of Oxford (# 508-21), Cornell University (# 2102010107), and Syracuse University (# 22-045). This study has received an exemption from the University of Pennsylvania Institutional Review Board (Protocol # 828933). Informed consent was required from each respondent before completing the survey.

Materials and code

The full materials and code will be made available when the full report is published and will be uploaded on OSF: https://osf.io/rck9p/. Upcoming analyses can be taken from the pre-analysis plan uploaded to OSF.  The full survey draft and translations, conducted by professional translation firms, will also be made available.

Conflicts of interest and transparency

The authors declare no conflicts of interest. For full transparency, we would like to report the following professional associations: 

  • Markus Anderljung, Centre for the Governance of AI, Center for a New American Security
  • Allan Dafoe, DeepMind, Centre for the Governance of AI, Cooperative AI Foundation
  • Noemi Dreksler, Centre for the Governance of AI
  • Michael C. Horowitz, University of Pennsylvania
  • Lauren Kahn, Council on Foreign Relations
  • Kate Mays, Syracuse University
  • David McCaffary, Centre for the Governance of AI
  • Baobao Zhang, Syracuse University, Cornell University, Centre for the Governance of AI

Allan Dafoe conducted this research in an academic capacity at the Centre for the Governance of AI; he joined DeepMind part way through the project. Michael C. Horowitz went on a leave of absence to the United States Department of Defense during the course of the research but remained a Professor at the University of Pennsylvania. Neither Allan Dafoe nor Michael C. Horowitz had veto power in determining the contents or sample of the survey, nor how the research was reported. The project was fully scoped, and funding was received, before these changes in professional association took place. Neither had nor has access to the data before the public release. Noemi Dreksler, employed by the Centre for the Governance of AI, led the running of the survey, handled any conflict-of-interest determinations, and had final say on the content of the survey.

Survey questions

The blog post reports preliminary results from the following survey questions:

Definition of AI. The definition of AI used in the survey was as follows:

Artificial Intelligence (AI) refers to computer systems that perform tasks or make decisions that usually require human intelligence. AI can perform these tasks or make these decisions without explicit human instructions.

Careful AI management. This question was originally adapted from a 2017 Eurobarometer survey (European Commission, 2017) and also appeared on the 2018 survey by Zhang & Dafoe (2019).

Please tell me to what extent you agree or disagree with the following statement.

“Artificial Intelligence (AI) is a technology that requires careful management.”

  • Totally agree (2)
  • Tend to agree (1)
  • Tend to disagree (-1)
  • Totally disagree (-2)
  • I don’t know (-88)

References

DeBell, M., & Krosnick, J. A. (2009). Computing weights for American National Election Study survey data. ANES Technical Report Series, no. nes012427. Ann Arbor, MI, and Palo Alto, CA: American National Election Studies. Available at http://www.electionstudies.org

European Commission. (2017). Attitudes towards the impact of digitisation and automation on daily life: Report. https://data.europa.eu/doi/10.2759/835661

European Union (n.d.). About Eurobarometer. Eurobarometer. https://europa.eu/eurobarometer/about/eurobarometer

European Union (n.d.). Employment and unemployment (LFS) database. Eurostat. https://ec.europa.eu/eurostat/web/lfs/database

Ipsos (2021). LGBT+ Pride 2021 Global Survey. Ipsos report. https://www.ipsos.com/sites/default/files/LGBT%20Pride%202021%20Global%20Survey%20Report%20-%20US%20Version.pdf 

Rudkin, A. (n.d.). autumn. R-package. Available at https://github.com/aaronrudkin/autumn 

Zhang, B., & Dafoe, A. (2019). Artificial Intelligence: American Attitudes and Trends. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3312874




20May

Senior Business Intelligence (BI) Developer/Engineer

Job title: Senior Business Intelligence (BI) Developer/Engineer

Company: WiiGroup

Job description: standards. Join us as we shape the future of construction and create a world where innovation and efficiency thrive… forefront of the rapidly evolving AEC landscape. Innovation and pushing boundaries are paramount in the construction industry…

Expected salary:

Location: Limerick

Job date: Fri, 17 May 2024 22:40:49 GMT

Apply for the job now!

20May

Beijing Policy Interest in General Artificial Intelligence is Growing


This post summarises and analyses two recent Beijing policy documents.

GovAI research blog posts represent the views of their authors, rather than the views of the organisation.

New Chinese policy interest in general AI

Historically, developing general artificial intelligence systems has not been an explicit priority for policymakers in the People’s Republic of China (PRC). The State Council’s 2017 New Generation Artificial Intelligence Development Plan and subsequent documents rarely mention general systems — even as Chinese interest in companies aiming to develop them, such as OpenAI, has grown steadily.

That appears to be changing. A group of the country’s most senior policymakers signalled a shift in the government’s views on AI in late April, 2023. For the first time, a readout from a meeting of the 24-member Politburo — bringing together key officials from the Party, State, and People’s Liberation Army —  promoted the development of “general artificial intelligence systems” (通用人工智能):1

[The meeting pointed out that] importance should be attached to the development of general artificial intelligence, an [associated] innovation ecosystem should be constructed, and importance should be attached to risk prevention.

[会议指出] 要重视通用人工智能发展,营造创新生态,重视防范风险。

Subsequent technology development plans put out by Beijing’s powerful local government focus on support for general AI and large model development. These place particular emphasis on overcoming barriers — likely heightened by recent US-led export controls — to accessing the large volume of compute that large model training requires. One of the documents describes meeting compute needs as “urgent” (紧迫). In addition to insufficient access to compute, an inadequate supply of high-quality data is identified as a key constraint on future progress.

The documents outline plans for an array of measures including subsidies, aggregation of existing compute and data for large model developers, and more research on advanced AI algorithms in an attempt to mitigate existing compute and data bottlenecks. The documents also contain sections on increasing research in AI ethics and safety, foreshadowing a recent statement from Xi Jinping calling for “a raised level of AI safety governance.”

Beijing’s AI policy priorities

The Beijing municipal government is leading the implementation of the PRC’s policy shift. It has significant power to shape the country’s AI industry: the city hosts many of the country’s most advanced AI companies and institutes, such as the Beijing Academy of Artificial Intelligence, and its municipal government cooperates with national institutions to support them.

The municipal government released a set of “Measures to Promote General Artificial Intelligence Innovation and Development in Beijing” and a “General Artificial Intelligence Industry Innovation Partnership Plan” in the weeks following the Politburo announcement. These documents serve as concrete policy implementation guidelines for government bodies and set several priorities:

1. Increasing the availability of advanced computing power

The Beijing government is looking to ameliorate the shortage of “high-quality computing resources” (高质量算力资源) facing large model teams in the city. The city’s science and economic policy bodies will seek to create a “compute partnership” (算力伙伴) between Aliyun — Alibaba Group’s cloud compute subsidiary — and the Beijing Supercomputing Cloud Center to subsidise and aggregate compute. The documents suggest that large model teams based in the city would then have priority access. In a potential signal that Beijing companies are already struggling to locate sufficient compute for their goals, the city government plans to draw on additional compute resources from adjacent provinces such as Tianjin and Hebei to meet these goals.

2. Increasing the supply of high-quality training data

The municipal government wants to increase the supply of high-quality data to its leading large model developers. Policy measures announced here include a “data partnership” (数据伙伴) with nine initial members including the Beijing Big Data Centre (北京市大数据中心), as well as a trading platform to lower barriers to acquiring high-quality data for large model teams. The municipal government seems to also intend to support the building of high-quality training data collections, to explore making more of its own vast data reserves available for large model training, and to create a platform for crowdsourcing data labelling.

3. Supporting algorithmic research

Beijing’s government states that it will aim to help its research institutions develop key algorithmic innovations. This includes general improvements in efficiency, but also more research on basic theories for reasoning and agentic behaviour, as well as research on alternative paradigms for developing general AI systems.

4. Increasing safety and oversight for large model development

The municipal government wants to see independent, non-profit third parties create model evaluation benchmarks and methods. Models that have “social mobilisation capabilities” (社会动员能力) — i.e. models which can influence public opinion at scale — will need to undergo security assessments by regulators. Interestingly, the municipal government also seems keen on more work on “intent alignment” (人类意图对齐), a critical pillar of AI safety research at some of the companies developing leading large models.

Conclusion

These recent policy developments, at both the local and national levels, represent a clear policy shift in the PRC towards the technological paradigms being pursued by Western AI companies such as DeepMind and OpenAI. PRC policymaker concern about a shortage of advanced compute is also a clear signal that recent export controls on this technology, imposed by the United States and allied nations, are stymieing a new plank of Chinese industrial policy. 

Whether PRC policymakers can realistically overcome this barrier is unclear. It also remains to be seen whether policymakers in Beijing will create strong oversight mechanisms and safeguards to mitigate risks — from AI weaponization and AI-enabled misinformation to hypothesised extreme risks from future systems — that are garnering mounting concern.




20May

Senior Business Development Manager

Job title: Senior Business Development Manager

Company: Access Nursing

Job description: Senior Business Development Manager Clearly Defined Pathway to Equity Partner Established Organisation & Team… professional to join us on our growth mission as we expand our pioneering services. In this role, you will join our business

Expected salary:

Location: Dublin

Job date: Sat, 18 May 2024 05:03:51 GMT

Apply for the job now!

20May

Announcing the GovAI Policy Program (GAPP)


Members of the GAPP cohort will participate in an 8-week interdisciplinary program, coordinated by the Centre for the Governance of AI. The program is structured around guided self-study, workshops, and seminars on artificial intelligence and the immediately pressing policy issues it poses, with an eye toward their longer term implications. Topics covered include AI standards and regulation, governing AI hardware, and international cooperation on AI. The program also includes discussions with world-leading experts in AI development, policy, and governance. The material has been designed to distil key context on the current AI landscape and give cohort members hands-on experience engaging with policy questions, while also accommodating busy work or academic schedules.

The Centre for the Governance of AI has alumni and staff with experience working in government, top AI labs including DeepMind and OpenAI, and think tanks such as the Center for Security and Emerging Technology. GAPP cohorts will have access to this wide range of expertise. 

This year’s iteration is the first-year pilot of GAPP. As a result, participation is invite-only, based on recommendations from partners working in AI governance and policy talent development. Participants in the GAPP will generally have graduated from a master’s or doctoral program, be currently enrolled in a graduate studies program, or have at least two years of professional experience related to national security, law, economics, public policy, international relations, computer science, or a related field. We may make exceptions for unusually promising candidates.

If you are potentially interested in joining the next cohort of the GovAI Policy Program (likely running in April/May 2024), please fill out this form and we will reach out to you once applications open. 

For those interested in pursuing a more research-oriented career, we also run a three-month research fellowship based in Oxford.




20May

Technology Consulting (BTA) – Senior Business Analyst – Manager

Job title: Technology Consulting (BTA) – Senior Business Analyst – Manager

Company: EY

Job description: Technology Consulting (BTA) – Senior Business Analyst – Manager General information Location Dublin Business area… like a who’s who in tech, and a highly disruptive business model, we’re advancing the art of team collaboration. Driven by honest…

Expected salary:

Location: Southside Dublin

Job date: Sat, 18 May 2024 22:07:03 GMT

Apply for the job now!
