
When Machines Think Ahead: The Rise of Strategic AI | by Hans Christian Ekne | Nov, 2024


Image generated by the author using Canva Magic Studio

Games have provided an amazing proving ground for developing strategic AI. The closed nature of games makes it easier to train models and develop solution techniques than in open-ended systems. Games are clearly defined: the players are known, and so are the payoffs. One of the biggest and earliest milestones was Deep Blue, the machine that beat the world champion in chess.

Early Milestones: Deep Blue

Deep Blue was a chess-playing supercomputer developed by IBM in the 1990s. As stated in the prologue, it made history in May 1997 by defeating the reigning world chess champion, Garry Kasparov, in a six-game match. Deep Blue utilized specialized hardware and algorithms capable of evaluating 200 million chess positions per second. It combined brute-force search techniques with heuristic evaluation functions, enabling it to search deeper into potential move sequences than any previous system. What made Deep Blue special was its ability to process vast numbers of positions quickly, effectively handling the combinatorial complexity of chess and marking a significant milestone in artificial intelligence.
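
To make the combination of brute-force search and heuristic evaluation concrete, here is a minimal, runnable sketch of minimax with alpha-beta pruning on a toy game tree. It is purely illustrative: Deep Blue ran this family of search in specialized hardware at hundreds of millions of positions per second, with a far richer evaluation function standing in for the numeric leaves used here.

```python
def alphabeta(node, alpha, beta, maximizing):
    # Leaves of this toy tree are already numeric scores; in a real chess
    # engine, this is where a heuristic evaluation function would be called.
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimizing player will avoid this branch
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:
            break  # prune: the maximizing player will avoid this branch
    return value

# A tiny hand-made game tree: nested lists are decision points, numbers are
# heuristic evaluations of the resulting positions.
tree = [[3, 5], [6, [9, 1]], [2, 8]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 6
```

The pruning is what makes deep search feasible: whole branches are skipped once it is clear the opponent would never allow them.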

However, as Garry Kasparov notes in his interview with Lex Fridman¹, Deep Blue was more of a brute-force machine than anything else, so it’s perhaps hard to qualify it as any kind of intelligence. The core of the search is basically just trial and error. And speaking of errors, it made significantly fewer errors than humans, which, according to Kasparov, is one of the features that made it hard to beat.

Advancements in Complex Games: AlphaGo

Nineteen years after Deep Blue’s victory in chess, a team from Google’s DeepMind produced another model that would contribute to a special moment in the history of AI. In 2016, AlphaGo became the first AI model to defeat a world champion Go player, Lee Sedol.

Go is a very old board game with origins in Asia, known for its deep complexity and vast number of possible positions, far exceeding those in chess. AlphaGo combined deep neural networks with Monte Carlo tree search, allowing it to evaluate positions and plan moves effectively. The more time AlphaGo was given at inference, the better it performed.
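
The selection step at the heart of Monte Carlo tree search is commonly implemented with an upper-confidence-bound rule that balances exploiting moves with good average results against exploring moves that are still uncertain. Below is a minimal, illustrative version; AlphaGo's actual variant additionally weighted this score by a policy network's prior probability for each move and evaluated leaf positions with a value network.

```python
import math

def ucb_score(parent_visits, child_visits, child_value_sum, c=1.4):
    if child_visits == 0:
        return float("inf")  # unvisited moves are tried first
    exploitation = child_value_sum / child_visits  # average simulation result
    exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploitation + exploration

# Pick which move to explore next, given per-move (visits, total value) stats.
stats = {"A": (40, 22.0), "B": (10, 7.0), "C": (0, 0.0)}
parent = sum(v for v, _ in stats.values())
best = max(stats, key=lambda m: ucb_score(parent, *stats[m]))
print(best)  # -> "C": the unvisited move is explored first
```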

The AI trained on a dataset of human expert games and improved further through self-play. What made AlphaGo special was its ability to handle the complexity of Go, utilizing advanced machine learning techniques to achieve superhuman performance in a domain previously thought to be resistant to AI mastery.

One could argue AlphaGo exhibits more intelligence than Deep Blue, given its exceptional ability to deeply evaluate board states and select moves. Move 37 from its 2016 match against Lee Sedol is a classic example. For those acquainted with Go, it was a shoulder hit on the fifth line, and it initially baffled commentators, including Lee Sedol himself. But as would later become clear, the move was a brilliant play that showcased how AlphaGo would explore strategies that human players might overlook.

Combining Chess and Go: AlphaZero

One year later, Google DeepMind made headlines again. This time, they took many of the learnings from AlphaGo and created AlphaZero, a more general-purpose AI system that mastered chess as well as Go and shogi. The researchers built the AI solely through self-play and reinforcement learning, without prior human knowledge or data. Unlike traditional chess engines that rely on handcrafted evaluation functions and extensive opening libraries, AlphaZero used deep neural networks and a novel algorithm combining Monte Carlo tree search with self-learning.

The system started with only the basic rules and learned optimal strategies by playing millions of games against itself. What made AlphaZero special was its ability to discover creative and efficient strategies, showcasing a new paradigm in AI that leverages self-learning over human-engineered knowledge.
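
To make the self-play idea concrete, here is a deliberately simplified, runnable toy: an agent that starts with only the rules of a one-pile Nim game and, purely by playing against itself and updating a value table from game outcomes, rediscovers the optimal strategy. The lookup table and random exploration are crude stand-ins for AlphaZero's deep networks and Monte Carlo tree search.

```python
import random
from collections import defaultdict

# Game: a pile of 15 stones; players alternately remove 1-3 stones, and
# whoever takes the last stone wins.
Q = defaultdict(float)  # Q[(stones_left, action)] -> learned value of the move
ACTIONS = (1, 2, 3)

def choose(stones, eps):
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)                  # explore
    return max(legal, key=lambda a: Q[(stones, a)])  # exploit

def self_play_episode(eps=0.2, alpha=0.5):
    stones, history = 15, []
    while stones > 0:
        action = choose(stones, eps)
        history.append((stones, action))
        stones -= action
    # The player who took the last stone won. Walking backwards through the
    # game, moves alternate between the winner (+1) and the loser (-1).
    for i, (s, a) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, a)] += alpha * (reward - Q[(s, a)])

for _ in range(20000):
    self_play_episode()

# The agent typically rediscovers the optimal opening: take 3 stones,
# leaving the opponent a multiple of four.
print(max(ACTIONS, key=lambda a: Q[(15, a)]))
```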

Integrating Speed and Strategy: StarCraft II

Continuing its domination in the AI space, the Google DeepMind team shifted its focus to a highly popular computer game, StarCraft II. In 2019, they developed an AI called AlphaStar², which achieved Grandmaster-level play and ranked higher than 99.8% of human players on the competitive leaderboard.

StarCraft II is a real-time strategy game that posed several novel challenges for the team at DeepMind. The goal of the game is to conquer the opposing player or players by gathering resources, constructing buildings, and amassing armies that can defeat the opponent. The main challenges arise from the enormous action space that needs to be considered, real-time decision-making, partial observability due to the fog of war, and the need for long-term strategic planning, as some games can last for hours.

By building on techniques developed for previous AIs, like reinforcement learning through self-play and deep neural networks, the team was able to build a unique learning system. First, they trained a neural net using supervised learning on human play. Then, they used that network to seed an algorithm that could play against itself in a multi-agent game framework. The DeepMind team created a virtual league where agents could explore strategies against each other and where dominant strategies would be rewarded. Ultimately, they combined the strategies from the league into a super-strategy that could be effective against many different opponents and strategies. In their own words³:

The final AlphaStar agent consists of the components of the Nash distribution of the league — in other words, the most effective mixture of strategies that have been discovered — that run on a single desktop GPU.
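
The phrase "Nash distribution of the league" can be illustrated with a toy meta-game. Given an empirical win-rate matrix between league agents, one can approximate a mixture of agents that is hard to exploit; the matrix below is invented, and simple fictitious play stands in for whatever more sophisticated solver DeepMind actually used.

```python
import numpy as np

# payoff[i, j] = expected score of league agent i against agent j
# (a made-up, antisymmetric zero-sum matrix for three agents).
payoff = np.array([
    [ 0.0,  0.3, -0.1],
    [-0.3,  0.0,  0.4],
    [ 0.1, -0.4,  0.0],
])

counts = np.ones(len(payoff))  # how often each agent was the best response
for _ in range(10000):
    mix = counts / counts.sum()              # current empirical mixture
    best_response = np.argmax(payoff @ mix)  # the agent that exploits it most
    counts[best_response] += 1

# For this matrix the Nash mixture is roughly [0.5, 0.125, 0.375].
print(counts / counts.sum())
```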

Deep Dive into Pluribus and Poker

I love playing poker, and when I was living and studying in Trondheim, we used to have a weekly cash game which could get quite intense! One of the last milestones to be eclipsed by strategic AI was in the game of poker, specifically in one of its most popular forms, 6-player no-limit Texas hold’em. The game uses a standard 52-card deck, and play proceeds through the following stages:

  1. The Preflop: All players are dealt 2 cards (hole cards), whose values only they know.
  2. The Flop: 3 cards are dealt and laid face up so that all players can see them.
  3. The Turn: Another card is dealt and laid face up.
  4. The River: A final, 5th card is dealt and laid face up.

The players can use the five cards on the table and the two cards in their hand to assemble the best 5-card poker hand. In each round, the players take turns placing bets, and the hand can end at any stage if one player places a bet that no one else is willing to call.

Though reasonably simple to learn (one only needs to know the hierarchy of the various poker hands), this game proved very difficult to solve with AI, despite ongoing efforts over several decades.

There are multiple factors contributing to the difficulty of solving poker. Firstly, we have the issue of hidden information, because you don’t know which cards the other players have. Secondly, we have a multiplayer setup, with each extra player increasing the number of possible interactions and strategies exponentially. Thirdly, we have the no-limit betting rules, which allow for a complex betting structure where one player can suddenly decide to bet their entire stack. Fourth, we have an enormous game tree complexity due to the combinations of hole cards, community cards, and betting sequences. In addition, we also have complexity due to the stochastic nature of the cards, the potential for bluffing, and the need for opponent modelling!

It was only in 2019 that two researchers, Noam Brown and Tuomas Sandholm, finally cracked the code. In a paper published in Science, they describe a novel poker AI, Pluribus, that managed to beat the best players in the world in 6-player no-limit Texas hold’em.⁴ They conducted two different experiments, each consisting of 10,000 poker hands, and both clearly showed the dominance of Pluribus.

In the first experiment, Pluribus played against 5 human opponents, achieving an average win rate of 48 mbb/game with a standard error of 25 mbb/game. (mbb/game stands for milli-big-blinds per game: thousandths of a big blind won per hand, so 48 mbb/game means winning about 48 big blinds per 1,000 hands played.) This is considered a very high win rate, especially among elite poker players, and implies that Pluribus is stronger than its human opponents.

In the second experiment, the researchers had 5 copies of Pluribus play against 1 human. They set up the experiment so that 2 different humans would each play 5,000 hands against the 5 machines. Pluribus ended up beating the humans by an average of 32 mbb/game with a standard error of 15 mbb/game, again showing its strategic superiority.

The dominance of Pluribus is quite amazing, especially given all the complexities the researchers had to overcome. Brown and Sandholm came up with several smart strategies that made Pluribus superhuman and computationally far more efficient than previous top poker AIs. Some of their techniques include (a toy version of the core regret-matching update follows this list):

  1. The use of two different algorithms for evaluating moves. They would first use a so-called “blueprint strategy”, created by having the program play against itself using a method called Monte Carlo counterfactual regret minimization. This blueprint strategy would be used in the first round of betting, but in subsequent betting rounds, Pluribus conducts a real-time search to find a better, more granular strategy.
  2. To make its real-time search algorithm more computationally efficient, they used a depth-limited search: rather than searching all the way to the end of the game, Pluribus would evaluate each line only a couple of moves ahead and then assume that each opponent plays one of only four continuation strategies: the original blueprint strategy, a blueprint strategy biased towards folding, one biased towards calling, and one biased towards raising.
  3. They also used various abstraction techniques to reduce the number of possible game states. For example, because a 9-high straight is fundamentally similar to an 8-high straight, the two can be treated in the same way.
  4. Pluribus would discretize the continuous betting space into a limited set of buckets, making it easier to consider and evaluate various betting sizes.
  5. In addition, Pluribus balances its strategy in such a way that, for any given hand it is playing, it also considers the other possible hands it could have in that situation and evaluates how it would play those, so that its final play is balanced and thus harder to counter.
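
As referenced above, here is a runnable toy of regret matching, the core update rule inside counterfactual regret minimization. Two regret-matching agents self-play rock-paper-scissors, and their time-averaged strategies converge towards the Nash equilibrium of one third each; Pluribus applies this style of update, plus heavy abstraction and sampling, to poker's vastly larger game tree.

```python
import random

ACTIONS = (0, 1, 2)  # rock, paper, scissors

def payoff(a, b):
    # +1 if a beats b, -1 if a loses, 0 on a tie.
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy_from(regrets):
    # Regret matching: play actions in proportion to their positive regret.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

regrets = [[0.0] * 3, [0.0] * 3]
strategy_sums = [[0.0] * 3, [0.0] * 3]
for _ in range(100_000):
    strats = [strategy_from(r) for r in regrets]
    moves = [random.choices(ACTIONS, weights=s)[0] for s in strats]
    for p in (0, 1):
        opp, played = moves[1 - p], moves[p]
        for a in ACTIONS:
            # How much better action `a` would have done than the action
            # actually played, against the opponent's actual move.
            regrets[p][a] += payoff(a, opp) - payoff(played, opp)
        for a in ACTIONS:
            strategy_sums[p][a] += strats[p][a]

total = sum(strategy_sums[0])
print([round(s / total, 3) for s in strategy_sums[0]])  # ~[0.333, 0.333, 0.333]
```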

There are quite a few interesting observations to draw from Pluribus, but perhaps the most interesting is that it doesn’t vary its play against different opponents; instead, it has developed a robust strategy that is effective against a wide variety of players. Since many poker players think they have to adjust their play to various situations and people, Pluribus shows us that this is not needed, and probably not even optimal, given how it beat all the humans it played against.

In our short foray into game theory, we noted that if you play the Nash equilibrium (NE) strategy in two-player zero-sum games, you are guaranteed not to lose in expectation. However, for a multiplayer game like 6-player poker there is no such guarantee. Noam Brown speculates⁵ that it is perhaps the adversarial nature of a game like poker that still makes an NE strategy a suitable approach. Conversely, in a game like Risk, where players can cooperate more, pursuing an NE strategy is not guaranteed to work: if you are playing Risk with 6 people, there is nothing you can do if your 5 opponents decide to gang up on you and eliminate you.

Evaluating the Trend in Strategic AI

Summarizing the history of strategic AI in games, we see a clear trend emerging. The games are slowly but surely becoming closer to the real-world strategic situations that humans find themselves in on an everyday basis.

Firstly, we are moving from two-player to multiplayer settings. This can be seen in the progression from initial successes in two-player games to multiplayer games like 6-player poker. Secondly, we are seeing increasing mastery of games with hidden information. Thirdly, we are seeing increasing mastery of games with more stochastic elements.

Hidden information, multiplayer settings and stochastic events are the norm rather than the exception in strategic interactions among humans, so mastering these complexities is key in achieving a more general superhuman strategic AI that can navigate in the real world.




Research Leader (Computational Biology), London at Isomorphic Labs – London


Research Leader (Computational Biology), London

Isomorphic Labs is a new Alphabet company that is reimagining drug discovery through a computational- and AI-first approach.

We are on a mission to accelerate the speed, increase the efficacy and lower the cost of drug discovery. You’ll be working at the cutting edge of the new era of ‘digital biology’ to deliver a transformative social impact for the benefit of millions of people.

Come and be part of a multi-disciplinary team driving groundbreaking innovation and play a meaningful role in contributing towards us achieving our ambitious goals, while being a part of an inspiring, collaborative and entrepreneurial culture.

Your impact

This is an exciting opportunity for you to contribute to an ambitious Computational Biology research programme, working in partnership with leading Machine Learning (ML) researchers, Chemists and Biologists. Building on the successful models in place to predict protein structure (AlphaFold-3), there is a unique opportunity for a Research Leader to have a direct impact on drug discovery using innovative Computational Biology and ML approaches. This is a newly created role; driven by a passion for problem solving, you will need to draw on your previous experience and show initiative in order to fully carve out your contribution.

What you will do

  • Use your deep technical experience and scientific knowledge to plan, lead, and deliver original research projects that impact critical problems in drug development.
  • Use your broad expertise to undertake analysis of biomedical datasets, including genetics, genomics, transcriptomics, proteomics, functional perturbation screens, imaging, knowledge graphs, PPI, clinical or other data types. 
  • Be empowered to develop detailed research directions within the scope of the Computational Biology strategy, and design these projects to make an impact on company goals.
  • Function as a subject matter and technical expert in multi-disciplinary teams, sharing your knowledge and expertise generously with your colleagues.
  • Influence Iso’s strategy, ensuring innovative insights are consistently brought to bear within drug development programmes.
  • Contribute to the wider development of Iso’s ML computational platform, working with other engineers to architect, build, and operate the platform’s components in a cross-disciplinary working environment.
  • Work with other members of the Computational Biology team and key stakeholders to deliver a unified team strategy.
  • Provide documentation, guidance, and communication on computational biology to the wider organisation.
  • Proactively identify complex problems and solve them in a creative and innovative way with minimal guidance.
  • You may have formal line management and/or matrix management responsibilities, as well as responsibility for mentoring junior colleagues to enable the technical work of others.
  • You may represent the company in external situations such as working with partners, within consortia, and with CROs as a project or collaboration lead. 

Skills and qualifications

Essential:

  • A PhD plus extensive experience in computational biology, potentially including in leadership roles
  • Track record of delivery of outstanding research
  • Expertise with detailed data quality control procedures and data visualisation
  • Experience with experimental design, and data generation, quality control and statistical analysis
  • Sophisticated understanding of computational biology methodologies as applied in biomedical research
  • Deep understanding of the principles of molecular cell biology, genetics, or related biological disciplines
  • Familiarity with data processing pipelines and analytical tools
  • Ability to effectively communicate complex scientific concepts to a variety of audiences
  • Programming skills in a language such as Python (ideally), or R
  • Experience working in a Linux environment
  • Demonstrated ongoing career progression/trajectory and a passion for learning and problem solving

Nice to have:

  • Prior experience in the context of therapeutic or diagnostic development programmes
  • Familiarity with a variety of assaying techniques, including NGS, cell-based assays, functional genomics, single-cell techniques, and image-based assays with expertise in their respective data analysis approaches
  • Experience working with clinical data
  • Expertise in developing computational biology methods for problems relevant to drug discovery
  • Experience applying computational biology workflows on Google Cloud Platform
  • Strong experience in developing software in Python
  • Hands-on experience in applying ML (especially deep learning) models in the field of computational biology


Culture and values

What does it take to be successful at IsoLabs? It’s not about finding people who think and act in the same way, but we do have some shared values:

Thoughtful
Thoughtful at Iso is about curiosity, creativity and care. It is about good people doing good, rigorous and future-making science every single day.

Brave
Brave at Iso is about fearlessness, but it’s also about initiative and integrity. The scale of the challenge demands nothing less.

Determined
Determined at Iso is the way we pursue our goal. It’s a confidence in our hypothesis, as well as the urgency and agility needed to deliver on it. Because disease won’t wait, so neither should we.

Together
Together at Iso is about connection, collaboration across fields and catalytic relationships. It’s knowing that transformation is a group project, and remembering that what we’re doing will have a real impact on real people everywhere.


Creating an inclusive company

We realise that to be successful we need our teams to reflect and represent the populations we are striving to serve. We’re working to build a supportive and inclusive environment where collaboration is encouraged and learning is shared. We value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. 

We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy or related condition (including breastfeeding) or any other basis protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.


Hybrid working

It’s hugely important for us to be able to share knowledge and establish relationships with each other, and we find it easier to do this if we spend time together in person. This is why we’ve decided to follow a hybrid model, and for full time positions we would require you to be able to come into the office 3 days a week (currently Tue, Wed, and one other day depending on which team you’re in). For part time positions this may vary.  As an equal opportunities employer we are committed to building an equal and inclusive team. If you have additional needs that would prevent you from following this hybrid approach, we’d be happy to talk through these if you’re selected for an initial screening call.

Please note that when you submit an application, your data will be processed in line with our privacy policy.

>> Click to view other open roles at Isomorphic Labs




Lead Mechatronics Engineer (M/F) at Bertrandt – Colmar, France


For more than 40 years, the Bertrandt group has provided development solutions for the automotive and aeronautics industries internationally, as well as for the machinery, energy, medical technology and electrical engineering sectors in Europe, China and the United States. Our employees bring in-depth expertise in their fields, delivering forward-looking project solutions while keeping customer satisfaction at the core.

Lead Mechatronics Engineer (M/F)

Location: 68000, Colmar

Your missions and your environment:

You will join Bertrandt’s Electronics and Electrical department within the Automotive branch as a Lead Mechatronics Engineer.

With 54 sites across Europe and the United States, the BERTRANDT Group is an international player in automotive and aeronautical engineering. Every day, more than 13,000 group employees support manufacturers and equipment suppliers in their development projects, from design to industrialisation. Bertrandt offers opportunities in several areas of the automotive sector (bodywork, interior/exterior equipment, engines and electronics) and aeronautics (design, testing, calculation), as well as in cross-functional disciplines: project management, quality management, logistics/industrialisation and validation.

You will report to the EE Powertrain Activity Manager. As part of a project to develop a localization system for mining machinery, your mission will be to:

  • Select the sensor technology suited to each type of mining machine
  • Participate in the integration of these sensors with the hardware teams and run integration tests to verify their operation
  • Define the project specifications
  • Develop data-fusion algorithms for these sensors
  • Define and develop the geometric models corresponding to the different kinematics encountered
  • Set up unit-testing and integration-testing tools for these models
  • Define and document the installation and commissioning procedure for the model
  • Port these models to a real-time embedded target
  • Participate in on-site validation testing of the product

Your skills and knowledge:

With a five-year engineering degree (Bac+5) in mechatronics, you have:

  • At least 5 years of experience in the design of embedded systems in the following areas: complex mechatronic systems, real-time sensor data fusion, real-time embedded software programming
  • Mastery of programming languages (C, C++, C#, Matlab, Python, …)
  • Knowledge of communication protocols: CAN, Serial, Ethernet…
  • A good level of spoken and written English; German is a plus
  • Mobility (2 assignments in Australia of 2 weeks each are planned)

Would you like to join our teams? Don’t hesitate to apply!

 

Type of contract: permanent (CDI)

Assignment location: Colmar (68066)

 

Why Bertrandt?

Bertrandt is a tier-one technology partner in the automotive, aerospace and mechanical engineering sectors. With 50 years of experience, an international presence and employees who are experts in their fields, Bertrandt helps shape the 4.0 future, building on the trends of digitalisation, autonomous systems, connectivity and electric mobility.

The Bertrandt group also means:

  • The assurance of a varied, exciting career plan adapted to employees’ wishes,
  • Balancing personal and professional life thanks to flexible hours and remote working,
  • A salary adjusted to your profile,
  • A responsible employer and partner in its employees’ health, with pension, disability and workplace accident insurance,
  • Skills development through the Bertrandt Academy

Choosing Bertrandt means taking charge of your future!

The recruitment process:

  • A phone conversation and a first interview with a recruiter
  • A technical interview with a Technical Manager
  • A validation interview with an Activity Manager

As part of its diversity policy, Bertrandt considers all applications with equal skills, including those from people with disabilities.

What we offer:

  • Autonomous working
  • Transport allowance
  • On-the-job training

Contact:

Laura BAUDRY
Tel.:





Machine Learning Researcher – Internship – Paris at Wiremind – Paris, France


Since 2014, Wiremind has positioned itself as a technical company transforming the world of transport and events with a 360° approach combining UX, software, and AI.

Our expertise lies primarily in optimizing and distributing our clients’ capacity. We work on various projects such as ticket forecasting and pricing, 3D optimization of air freight or scraping competitor prices. Our applications are the preferred tool of companies such as SNCF, United Airlines, Qatar Airways or even PSG to visualize, analyze and optimize their capacity.

Dynamic and ambitious, we strive to maintain our technical DNA which is the engine of our success. The company, profitable and self-financed since its creation 10 years ago, is mainly composed of engineers and experts and currently supports the growth of our business model based on “software-as-a-service” solutions.

Your missions 🚀

At Wiremind, the Data Science team is responsible for the development, monitoring and evolution of all ML-powered forecasting and optimization algorithms in use in our Revenue Management systems.

Our algorithms are divided into two parts:

  • A model of the unconstrained demand using ML models (e.g. deep learning, boosted trees) trained on historical data in the form of time series
  • Constrained optimization problems solved using linear programming techniques.

You will be joining a team shaped to have all the profiles necessary to constitute an autonomous department (DevOps, software and data engineering, data science, AI/ML, operations research).

There, under the supervision of a Wiremind tutor and researchers from UBC (https://www.ubc.ca/), you will push the boundaries of state-of-the-art causal inference modeling for time series.

As a research intern, you will have the opportunity to contribute to innovative projects at the intersection of deep learning and causal modeling.

You will be involved in topics such as:

  • Leveraging causal inference methods like Regression Discontinuity Design or Orthogonal Learning to analyze and model complex demand patterns using time series data. 
  • Developing state-of-the-art deep learning architectures to improve the accuracy of current best models while maintaining causality and elasticity.
  • Exploring the impact of pricing sequences on demand by modeling consumer behavior from a series of price changes instead of single adjustments.

Technical stack:

  • Backend: Python 3.11+ with SQLAlchemy 
  • Orchestration: Argo workflows over an auto-scaled Kubernetes cluster 
  • Datastores: Druid and PostgreSQL
  • Common ML libraries/tools: TensorFlow/Keras, LightGBM, XGBoost, Pandas, Dask, Dash, Jupyter notebooks
  • Model versioning and registry tool: MLflow
  • Gitlab / Kubernetes for CI/CD 
  • Prometheus/Grafana and Kibana for operations

Your profile 🔍

  • Strong computer science background in Python, with a keen interest in code quality and best practices (unit testing, PEP 8, typing)
  • Knowledge of at least one major deep learning framework, e.g. TensorFlow or PyTorch
  • A pragmatic, prod-oriented approach to ML: frequent, incremental gains beat a grand quest for perfection.

What would be a plus 🔍

  • A first experience in a pricing-related domain
  • A wish to pursue a career in academia with a PhD following the internship

Our benefits 🤌

By joining us, you will integrate:

  • A self-financed startup with a strong technical identity! 🧬
  • Beautiful 700 m² offices in the heart of Paris (Bd Poissonnière) ✨
  • Attractive remuneration 💪
  • A caring and stimulating team that encourages skills development through initiative and autonomy
  • A learning environment with opportunities for evolution 🧑‍💻

You will also benefit from:

  • 1 day of remote work per week
  • A great company culture (monthly afterworks, regular meetings on technology and products, annual off-site seminars, team-building…)

Our Recruitment Process 🤞

  1. An initial discussion with our Talent Acquisition Manager
  2. A technical test to be prepared
  3. A last interview at our offices to discuss your technical test with the Hiring Manager and meet with members of the team

Wiremind is committed to equality of opportunity, diversity, and fairness. We encourage all candidates with the necessary experience to apply for our job offers.




Beyond Skills: Unlocking the Full Potential of Data Scientists. | by Eric Colson | Oct, 2024


Image created through DALL-E / OpenAI by author.

Unlock the hidden value of data scientists by empowering them beyond technical tasks to drive innovation and strategic insights.

[This piece is cross-posted from O’Reilly Radar here]

Modern organizations regard data as a strategic asset that drives efficiency, enhances decision making, and creates new value for customers. Across the organization — product management, marketing, operations, finance, and more — teams are overflowing with ideas on how data can elevate the business. To bring these ideas to life, companies are eagerly hiring data scientists for their technical skills (Python, statistics, machine learning, SQL, etc.).

Despite this enthusiasm, many companies are significantly underutilizing their data scientists. Organizations remain narrowly focused on employing data scientists to execute preexisting ideas, overlooking the broader value they bring. Beyond their skills, data scientists possess a unique perspective that allows them to come up with innovative business ideas of their own — ideas that are novel, strategic, or differentiating and are unlikely to come from anyone but a data scientist.

Sadly, many companies behave in ways that suggest they are uninterested in the ideas of data scientists. Instead, they treat data scientists as a resource to be used for their skills alone. Functional teams provide requirements documents with fully specified plans: “Here’s how you are to build this new system for us. Thank you for your partnership.” No context is provided, and no input is sought — other than an estimate for delivery. Data scientists are further inundated with ad hoc requests for tactical analyses or operational dashboards¹. The backlog of requests grows so large that the work queue is managed through Jira-style ticketing systems, which strip the requests of any business context (e.g., “get me the top products purchased by VIP customers”). One request begets another², creating a Sisyphean endeavor that leaves no time for data scientists to think for themselves. And then there’s the myriad of opaque requests for data pulls: “Please get me this data so I can analyze it.” This is marginalizing — like asking Steph Curry to pass the ball so you can take the shot. It’s not a partnership; it’s a subordination that reduces data science to a mere support function, executing ideas from other teams. While executing tasks may produce some value, it won’t tap into the full potential of what data scientists truly have to offer.

The untapped potential of data scientists lies not in their ability to execute requirements or requests but in their ideas for transforming a business. By “ideas” I mean new capabilities or strategies that can move the business in better or new directions — leading to increased³ revenue, profit, or customer retention while simultaneously providing a sustainable competitive advantage (i.e., capabilities or strategies that are difficult for competitors to replicate). These ideas often take the form of machine learning algorithms that can automate decisions within a production system⁴. For example, a data scientist might develop an algorithm to better manage inventory by optimally balancing overage and underage costs. Or they might create a model that detects hidden customer preferences, enabling more effective personalization. If these sound like business ideas, that’s because they are — but they’re not likely to come from business teams. Ideas like these typically emerge from data scientists, whose unique cognitive repertoires and observations in the data make them well-suited to uncovering such opportunities.
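
The inventory example maps onto the classic newsvendor model mentioned below. Here is a minimal sketch, assuming known per-unit overage and underage costs and a sample of historical demand; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
demand_history = rng.poisson(lam=120, size=1000)  # e.g. units sold per day

underage_cost = 4.0  # profit lost per unit of unmet demand
overage_cost = 1.0   # loss per unsold unit

# The optimal service level is the critical fractile Cu / (Cu + Co);
# stock up to that quantile of the demand distribution.
critical_fractile = underage_cost / (underage_cost + overage_cost)
order_quantity = np.quantile(demand_history, critical_fractile)
print(f"stock {order_quantity:.0f} units (service level {critical_fractile:.0%})")
```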

A cognitive repertoire is the range of tools, strategies, and approaches an individual can draw upon for thinking, problem-solving, or processing information (Page 2017). These repertoires are shaped by our backgrounds — education, experience, training, and so on. Members of a given functional team often have similar repertoires due to their shared backgrounds. For example, marketers are taught frameworks like SWOT analysis and ROAS, while finance professionals learn models such as ROIC and Black-Scholes.

Data scientists have a distinctive cognitive repertoire. While their academic backgrounds may vary — ranging from statistics to computer science to computational neuroscience — they typically share a quantitative tool kit. This includes frameworks for widely applicable problems, often with accessible names like the “newsvendor model,” the “traveling salesman problem,” the “birthday problem,” and many others. Their tool kit also includes knowledge of machine learning algorithms⁵ like neural networks, clustering, and principal components, which are used to find empirical solutions to complex problems. Additionally, they include heuristics such as big O notation, the central limit theorem, and significance thresholds. All of these constructs can be expressed in a common mathematical language, making them easily transferable across different domains, including business — perhaps especially business.

The repertoires of data scientists are particularly relevant to business innovation since, in many industries⁶, the conditions for learning from data are nearly ideal in that they have high-frequency events, a clear objective function⁷, and timely and unambiguous feedback. Retailers have millions of transactions that produce revenue. A streaming service sees millions of viewing events that signal customer interest. And so on — millions or billions of events with clear signals that are revealed quickly. These are the units of induction that form the basis for learning, especially when aided by machines. The data science repertoire, with its unique frameworks, machine learning algorithms, and heuristics, is remarkably geared for extracting knowledge from large volumes of event data.

Ideas are born when cognitive repertoires connect with business context. A data scientist, while attending a business meeting, will regularly experience pangs of inspiration. Her eyebrows raise from behind her laptop as an operations manager describes an inventory perishability problem, lobbing the phrase “We need to buy enough, but not too much.” “Newsvendor model,” the data scientist whispers to herself. A product manager asks, “How is this process going to scale as the number of products increases?” The data scientist involuntarily scribbles “O(N²)” on her notepad, which is big O notation to indicate that the process will scale superlinearly. And when a marketer brings up the topic of customer segmentation, bemoaning, “There are so many customer attributes. How do we know which ones are most important?,” the data scientist sends a text to cancel her evening plans. Instead, tonight she will eagerly try running principal components analysis on the customer data⁸.

No one was asking for ideas. This was merely a tactical meeting with the goal of reviewing the state of the business. Yet the data scientist is practically goaded into ideating. “Oh, oh. I got this one,” she says to herself. Ideation can even be hard to suppress. Yet many companies unintentionally seem to suppress that creativity. In reality our data scientist probably wouldn’t have been invited to that meeting. Data scientists are not typically invited to operating meetings. Nor are they typically invited to ideation meetings, which are often limited to the business teams. Instead, the meeting group will assign the data scientist Jira tickets of tasks to execute. Without the context, the tasks will fail to inspire ideas. The cognitive repertoire of the data scientist goes unleveraged — a missed opportunity to be sure.

Beyond their cognitive repertoires, data scientists bring another key advantage that makes their ideas uniquely valuable. Because they are so deeply immersed in the data, data scientists discover unforeseen patterns and insights that inspire novel business ideas. They are novel in the sense that no one would have thought of them — not product managers, executives, marketers — not even a data scientist for that matter. There are many ideas that cannot be conceived of but rather are revealed by observation in the data.

Company data repositories (data warehouses, data lakes, and the like) contain a primordial soup of insights lying fallow in the information. As they do their work, data scientists often stumble upon intriguing patterns — an odd-shaped distribution, an unintuitive relationship, and so forth. The surprise finding piques their curiosity, and they explore further.

Imagine a data scientist doing her work, executing on an ad hoc request. She is asked to compile a list of the top products purchased by a particular customer segment. To her surprise, the products bought by the various segments are hardly different at all. Most products are bought at about the same rate by all segments. Weird. The segments are based on profile descriptions that customers opted into, and for years the company had assumed them to be meaningful groupings useful for managing products. “There must be a better way to segment customers,” she thinks. She explores further, launching an informal, impromptu analysis. No one is asking her to do this, but she can’t help herself. Rather than relying on the labels customers use to describe themselves, she focuses on their actual behavior: what products they click on, view, like, or dislike. Through a combination of quantitative techniques — matrix factorization and principal component analysis — she comes up with a way to place customers into a multidimensional space. Clusters of customers adjacent to one another in this space form meaningful groupings that better reflect customer preferences. The approach also provides a way to place products into the same space, allowing for distance calculations between products and customers. This can be used to recommend products, plan inventory, target marketing campaigns, and many other business applications. All of this is inspired from the surprising observation that the tried-and-true customer segments did little to explain customer behavior. Solutions like this have to be driven by observation since, absent the data saying otherwise, no one would have thought to inquire about a better way to group customers.
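
A compact sketch of that behavior-based approach, on synthetic data: factor a customers-by-products interaction matrix with principal components (computed via SVD), place customers and products in a shared latent space, then cluster customers there. Real inputs would be clicks, views, likes, and purchases rather than random counts.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
interactions = rng.poisson(0.3, size=(500, 80)).astype(float)  # customers x products

# Principal components via SVD of the centred matrix.
X = interactions - interactions.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 10
customer_coords = U[:, :k] * S[:k]  # customers in the latent space
product_coords = Vt[:k].T           # products in the same space

# Behavior-based segments: cluster customers by latent position.
segments = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(customer_coords)

# Customer-product affinities from the low-rank reconstruction support
# recommendations, inventory planning, campaign targeting, and so on.
affinity = customer_coords @ product_coords.T
top_products = np.argsort(affinity[0])[::-1][:5]
print("segment of customer 0:", segments[0], "| top products:", top_products)
```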

As a side note, the principal component algorithm that the data scientist used belongs to a class of algorithms called “unsupervised learning,” which further exemplifies the concept of observation-driven insights. Unlike “supervised learning,” in which the user instructs the algorithm what to look for, an unsupervised learning algorithm lets the data describe how it is structured. It is evidence based; it quantifies and ranks each dimension, providing an objective measure of relative importance. The data does the talking. Too often we try to direct the data to yield to our human-conceived categorization schemes, which are familiar and convenient to us, evoking visceral and stereotypical archetypes. It’s satisfying and intuitive but often flimsy and fails to hold up in practice.

Examples like this are not rare. When immersed in the data, it’s hard for data scientists not to come upon unexpected findings. And when they do, it’s even harder for them to resist further exploration — curiosity is a powerful motivator. Of course, our data scientist exercised her cognitive repertoire to do the work, but the entire analysis was inspired by observation of the data. For the company, such distractions are a blessing, not a curse. I’ve seen this sort of undirected research lead to better inventory management practices, better pricing structures, new merchandising strategies, improved user experience designs, and many other capabilities — none of which were asked for but instead were discovered by observation in the data.

Isn’t discovering new insights the data scientist’s job? Yes — that’s exactly the point of this article. The problem arises when data scientists are valued only for their technical skills. Viewing them solely as a support team limits them to answering specific questions, preventing deeper exploration of insights in the data. The pressure to respond to immediate requests often causes them to overlook anomalies, unintuitive results, and other potential discoveries. If a data scientist were to suggest some exploratory research based on observations, the response is almost always, “No, just focus on the Jira queue.” Even if they spend their own time — nights and weekends — researching a data pattern that leads to a promising business idea, it may still face resistance simply because it wasn’t planned or on the roadmap. Roadmaps tend to be rigid, dismissing new opportunities, even valuable ones. In some organizations, data scientists may pay a price for exploring new ideas. Data scientists are often judged by how well they serve functional teams, responding to their requests and fulfilling short-term needs. There is little incentive to explore new ideas when doing so detracts from a performance review. In reality, data scientists frequently find new insights in spite of their jobs, not because of them.

These two things — their cognitive repertoires and observations from the data — make the ideas that come from data scientists uniquely valuable. This is not to suggest that their ideas are necessarily better than those from the business teams. Rather, their ideas are different from those of the business teams. And being different has its own set of benefits.

Having a seemingly good business idea doesn’t guarantee that the idea will have a positive impact. Evidence suggests that most ideas will fail. When properly measured for causality⁹, the vast majority of business ideas either fail to show any impact at all or actually hurt metrics. (See some statistics here.) Given the poor success rates, innovative companies construct portfolios of ideas in the hopes that at least a few successes will allow them to reach their goals. Still savvier companies use experimentation¹⁰ (A/B testing) to try their ideas on small samples of customers, allowing them to assess the impact before deciding to roll them out more broadly.

This portfolio approach, combined with experimentation, benefits from both the quantity and diversity of ideas¹¹. It’s similar to diversifying a portfolio of stocks. Increasing the number of ideas in the portfolio increases exposure to a positive outcome — an idea that makes a material positive impact on the company. Of course, as you add ideas, you also increase the risk of bad outcomes — ideas that do nothing or even have a negative impact. However, many ideas are reversible — the “two-way door” that Amazon’s Jeff Bezos speaks of (Haden 2018). Ideas that don’t produce the expected results can be pruned after being tested on a small sample of customers, greatly mitigating the impact, while successful ideas can be rolled out to all relevant customers, greatly amplifying the impact.

So, adding ideas to the portfolio increases exposure to upside without a lot of downside — the more, the better¹². However, there is an assumption that the ideas are independent (uncorrelated). If all the ideas are similar, then they may all succeed or fail together. This is where diversity comes in. Ideas from different groups will leverage divergent cognitive repertoires and different sets of information. This makes them different and less likely to be correlated with each other, producing more varied outcomes. For stocks, the return on a diverse portfolio will be the average of the returns for the individual stocks. However, for ideas, since experimentation lets you mitigate the bad ones and amplify the good ones, the return of the portfolio can be closer to the return of the best idea (Page 2017).
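
A small simulation makes the portfolio argument tangible. With invented effect sizes under which most ideas are neutral or harmful, shipping everything untested performs poorly, while testing each idea on a small sample and pruning the losers lets the portfolio's payoff be driven by its best ideas.

```python
import numpy as np

rng = np.random.default_rng(7)
n_ideas = 20
true_effects = rng.normal(loc=-0.2, scale=1.0, size=n_ideas)  # most ideas fail

# Option 1: ship every idea untested.
ship_all = true_effects.sum()

# Option 2: A/B test each idea on a small sample (a noisy measurement),
# then roll out only the ideas that measured positive.
measured = true_effects + rng.normal(scale=0.3, size=n_ideas)
with_testing = true_effects[measured > 0].sum()

print(f"ship everything untested: {ship_all:+.2f}")
print(f"test, prune, then ship:   {with_testing:+.2f}")
```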

In addition to building a portfolio of diverse ideas, a single idea can be significantly strengthened through collaboration between data scientists and business teams¹³. When they work together, their combined repertoires fill in each other’s blind spots (Page 2017)¹⁴. By merging the unique expertise and insights from multiple teams, ideas become more robust, much like how diverse groups tend to excel in trivia competitions. However, organizations must ensure that true collaboration happens at the ideation stage rather than dividing responsibilities such that business teams focus solely on generating ideas and data scientists are relegated to execution.

Data scientists are much more than a skilled resource for executing existing ideas; they are a wellspring of novel, innovative thinking. Their ideas are uniquely valuable because (1) their cognitive repertoires are highly relevant to businesses with the right conditions for learning, (2) their observations in the data can lead to novel insights, and (3) their ideas differ from those of business teams, adding diversity to the company’s portfolio of ideas.

However, organizational pressures often prevent data scientists from fully contributing their ideas. Overwhelmed with skill-based tasks and deprived of business context, they are incentivized to merely fulfill the requests of their partners. This pattern exhausts the team’s capacity for execution while leaving their cognitive repertoires and insights largely untapped.

Here are some suggestions that organizations can follow to better leverage data scientists and shift their roles from mere executors to active contributors of ideas:

  • Give them context, not tasks. Providing data scientists with tasks or fully specified requirements documents will get them to do work, but it won’t elicit their ideas. Instead, give them context. If an opportunity is already identified, describe it broadly through open dialogue, allowing them to frame the problem and propose solutions. Invite data scientists to operational meetings where they can absorb context, which may inspire new ideas for opportunities that haven’t yet been considered.
  • Create slack for exploration. Companies often completely overwhelm data scientists with tasks. It may seem paradoxical, but keeping resources 100% utilized is very inefficient¹⁵. Without time for exploration and unexpected learning, data science teams can’t reach their full potential. Protect some of their time for independent research and exploration, using tactics like Google’s 20% time or similar approaches.
  • Eliminate the task management queue. Task queues create a transactional, execution-focused relationship with the data science team. Priorities, if assigned top-down, should be given in the form of general, unframed opportunities that need real conversations to provide context, goals, scope, and organizational implications. Priorities might also emerge from within the data science team, requiring support from functional partners, with the data science team providing the necessary context. We don’t assign Jira tickets to product or marketing teams, and data science should be no different.
  • Hold data scientists accountable for real business impact. Measure data scientists by their impact on business outcomes, not just by how well they support other teams. This gives them the agency to prioritize high-impact ideas, regardless of the source. Additionally, tying performance to measurable business impact¹⁶ clarifies the opportunity cost of low-value ad hoc requests¹⁷.
  • Hire for adaptability and broad skill sets. Look for data scientists who thrive in ambiguous, evolving environments where clear roles and responsibilities may not always be defined. Prioritize candidates with a strong desire for business impact¹⁸, who see their skills as tools to drive outcomes, and who excel at identifying new opportunities aligned with broad company goals. Hiring for diverse skill sets enables data scientists to build end-to-end systems, minimizing the need for handoffs and reducing coordination costs — especially critical during the early stages of innovation when iteration and learning are most important¹⁹.
  • Hire functional leaders with growth mindsets. In new environments, avoid leaders who rely too heavily on what worked in more mature settings. Instead, seek leaders who are passionate about learning and who value collaboration, leveraging diverse perspectives and information sources to fuel innovation.

These suggestions require an organization with the right culture and values. The culture needs to embrace experimentation to measure the impact of ideas and to recognize that many will fail. It needs to value learning as an explicit goal and understand that, for some industries, the vast majority of knowledge has yet to be discovered. It must be comfortable relinquishing the clarity of command-and-control in exchange for innovation. While this is easier to achieve in a startup, these suggestions can guide mature organizations toward evolving with experience and confidence. Shifting an organization’s focus from execution to learning is a challenging task, but the rewards can be immense or even crucial for survival. For most modern firms, success will depend on their ability to harness human potential for learning and ideation — not just execution (Edmondson 2012). The untapped potential of data scientists lies not in their ability to execute existing ideas but in the new and innovative ideas no one has yet imagined.




Senior Data Scientist (Analytics) – Deliveries at Grab – Petaling Jaya, Malaysia


Company Description

About Grab and Our Workplace

Grab is Southeast Asia’s leading superapp. From getting your favourite meals delivered to helping you manage your finances and getting around town hassle-free, we’ve got your back with everything. In Grab, purpose gives us joy and habits build excellence while harnessing the power of Technology and AI to deliver the mission of driving Southeast Asia forward by economically empowering everyone, with heart, hunger, honour, and humility.

Job Description

Get to Know the Team

The Deliveries Analytics team is the analytics and data powerhouse behind Grab’s fastest-growing segments: GrabFood and GrabMart. Here, innovation meets action. We’re not just a team; we’re trailblazers, committed to solving the most pressing challenges for our consumers, driver-partners, and merchant-partners by leveraging data. From revolutionizing the consumer order experience to enhancing platform reliability, we strive to make Grab the first choice, every time.

Get to Know the Role

You will report to the Product Analytics Manager II, ACE, and you’ll have the unique opportunity to collaborate across disciplines (Product, Business, Engineering, Design, Data Science) to transform data into dynamic solutions. Your insights will directly contribute to developing groundbreaking products and initiatives, setting new benchmarks for excellence. This isn’t just any role; it’s a chance to make a tangible impact on millions of lives every day.

This role is based in Petaling Jaya and is onsite.

The Critical Tasks You Will Perform

  • You will understand our requirements and outcomes to ensure data-driven decision-making.
  • You will conduct tailored analyses for specific products and operations, define critical business metrics, track them rigorously, and recommend continuous improvements.
  • You will frame business scenarios and propose features that impact critical business processes and decisions.
  • You will transform requirements into concise insights through reports, presentations, and dashboards, and consolidate data from multiple sources to create comprehensive views for decision-making.
  • You will develop data pipelines and custom data science models to solve identified problems.
  • You will launch A/B tests, analyze the results, and provide recommendations based on your findings.

Qualifications

What Essential Skills You Will Need

  • You have at least 4 years of experience in data-related or quantitative fields such as Analytics, Science, Statistics, or Mathematics.
  • You are fluent with SQL, Python, R or other scripting/programming languages.
  • You are experienced in handling large datasets and maintaining complex Extract, Transform and Load (ETL) processes.
  • You have solid statistical knowledge and hands-on experience running and analysing controlled experiments.
  • You are proficient in creating dashboards using Tableau, PowerBI or other visualisation tools.

Additional Information

Life at Grab

We care about your well-being at Grab, here are some of the global benefits we offer:

  • We have your back with Term Life Insurance and comprehensive Medical Insurance.
  • With GrabFlex, create a benefits package that suits your needs and aspirations.
  • Celebrate moments that matter in life with loved ones through Parental and Birthday leave, and give back to your communities through Love-all-Serve-all (LASA) volunteering leave.
  • We have a confidential Grabber Assistance Programme to guide and uplift you and your loved ones through life’s challenges.

What we stand for at Grab

We are committed to building an inclusive and equitable workplace that enables diverse Grabbers to grow and perform at their best. As an equal opportunity employer, we consider all candidates fairly and equally regardless of nationality, ethnicity, religion, age, gender identity, sexual orientation, family commitments, physical and mental impairments or disabilities, and other attributes that make them unique.




Experienced Big Data Engineer – Financial Services – Bordeaux at Sopra Steria – Mérignac, France


Company description

Sopra Steria, a major Tech player in Europe with 56,000 employees in nearly 30 countries, is recognised for its consulting, digital services and software publishing activities. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits.

The Group provides a comprehensive response to the competitiveness challenges of large companies and organisations, combining in-depth knowledge of business sectors and innovative technologies with a resolutely collaborative approach. Sopra Steria places people at the centre of everything it does and is committed to helping its clients make the most of digital technology to build a positive future.

In 2023, the Group generated revenue of €5.8 billion.
The world is how we shape it

The “Financial Services” division has grown around retail banking, private banking and specialised financial services. We take part in the digital revolution through our expertise in process automation, Big Data, AI and Cloud. We support our clients’ transformation by combining this with our functional expertise in Credit, Risk/Compliance and Payment Services.

Job description

Your future work environment

As part of a Sopra Steria team working for one of our major banking clients, you will take part in a Big Data project, run in Agile mode, and act as the technical lead within your team.

Your role and responsibilities:

In this role, you will:

  • Bring your expertise and experience to your colleagues during the design and development phases;
  • Support your colleagues as they build up their technical skills within the project;
  • Define and implement solutions within an existing application scope;
  • Propose continuous-improvement ideas to your client and your team (procedure reviews, introduction of new tools for delivery, testing, or code quality);
  • Design and develop complex features.

Project environment:

  • Project methodology: Agile (Scrum framework).

Technical environment (a brief sketch follows this list):

  • HDFS, Hive, Spark, Oozie
  • Scala, HQL, Shell
  • GitLab, Nexus, Maven, Jenkins, Sonar
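To ground this stack, here is a minimal PySpark sketch of the kind of data-lake feed the project implies. Note the assumptions: the posting’s stack is Scala-based, and the database, table, and column names below are invented for illustration only.

```python
# A minimal PySpark sketch of a Hive/HDFS data-lake feed.
# The posting's stack is Scala; PySpark and all names here are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("datalake-feed")
         .enableHiveSupport()   # read/write Hive tables backed by HDFS
         .getOrCreate())

# Read raw events from a hypothetical Hive table.
events = spark.table("raw_db.credit_events")

# Aggregate exposures per counterparty, the sort of intermediate
# result a regulatory calculation engine might consume downstream.
exposures = (events
             .groupBy("counterparty_id")
             .agg(F.sum("exposure_amount").alias("total_exposure")))

# Write the result back to the data lake as a curated Hive table.
exposures.write.mode("overwrite").saveAsTable("curated_db.counterparty_exposure")
```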

Functional environment:

  • Feeding a data lake through to building a calculation engine
  • Implementing rules related to the Basel banking standards

Qualifications

Your profile:

With a master’s-level degree (Bac+5) from an engineering school, university, or equivalent, you have at least 5 years of experience in data, including 3 years in Big Data, particularly with the technologies mentioned above.

You enjoy working in a team, taking on technical challenges, advising others, and bringing added value to a team.

You know how to challenge and lead both the client and the development team on continuous improvement (delivery processes, test/validation automation, upholding best practices, and run performance).

You like to keep up with new technologies and are looking for career growth built on project experience and the acquisition of new skills.

#LI-HYBRID  #PF

Additional Information

  • A remote-work agreement allowing up to 2 days of remote work per week, depending on your assignments.
  • An attractive benefits package: health insurance, a works council (CSE), meal vouchers, a profit-sharing agreement, and holiday and referral bonuses.
  • Individual support from a mentor.
  • Multiple career opportunities: more than 30 job families, and just as many paths to imagine together.
  • Several hundred training courses available on demand through the Sopra Steria Academy mobile app.
  • The opportunity to get involved with our foundation or with our partner “Vendredi”.
  • The opportunity to join the Tech’Me UP collective (training, conferences, technology watch, and much more…).

As an inclusive and committed employer, our company works every day to fight all forms of discrimination and to foster a respectful working environment. Committed to diversity and inclusion, we therefore encourage applications from all profiles.

https://www.soprasteria.fr/nous-connaitre/nos-engagements

 



Source link

02Nov

AI & Automation Specialist at Storytel – Stockholm, Sweden


Storytel is one of the world’s leading audiobook and ebook streaming services, offering unlimited listening to consumers in 25+ markets. Our vision is to make the world more empathetic, with great stories to be shared and enjoyed by anyone, anywhere, anytime. We are now on the hunt for a talented AI & Automation Specialist for our Customer Support Team.

About the team

You will be part of the Customer Support team and report to the Head of Customer Support, but you will also work closely with other functions within Operations (and possibly other parts of the organization) to develop the use of automated workflows and AI.

About the role

At Storytel, we integrate innovative Gen AI models/solutions to deliver great customer support experiences. We believe in enhancing efficiency while maintaining a personal touch, and now we are looking to strengthen the team with an AI & Automation Specialist to help us refine and elevate our bot-driven support systems. In this role, you will be responsible for maintaining and developing automation and AI bot solutions for teams within Operations (e.g. Customer Support, Content Operations). You will be implementing GPT solutions into our existing AI bots, conducting data reporting and analysis of bot activity, and facilitating daily communication with support agents and other stakeholders in the business.
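As a hedged sketch of what “implementing GPT solutions” into a support bot can look like in practice: the posting names no vendor, so the OpenAI SDK, the model name, and the prompt below are all assumptions for illustration only.

```python
# A minimal sketch of wiring a GPT model into a support-bot reply flow.
# Vendor (OpenAI), model name, and prompt are assumptions, not Storytel's stack.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_bot_reply(customer_message: str) -> str:
    """Generate a draft support reply for a customer message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You are a friendly audiobook-service support assistant. "
                        "Answer briefly and escalate anything you are unsure about."},
            {"role": "user", "content": customer_message},
        ],
    )
    return response.choices[0].message.content

print(draft_bot_reply("My audiobook stops playing after a few minutes."))
```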

What you will do

  • Oversee the daily operations of our chat and email bots to ensure accuracy, reliability and customer satisfaction.

  • Analyze customer inquiries and chatbot data to improve automation and increase customer satisfaction (see the sketch after this list).

  • Continuously research and launch new features and improvements for our bots to increase automation and enhance the customer experience.

  • Develop and refine bot scripts and answers to reflect a helpful, consistent, and human-like tone, while regularly stepping in to manually address complex queries.

  • Communicate daily with stakeholders across the business for knowledge and information sharing.
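As a hedged illustration of the chatbot-data analysis mentioned above, here is a small pandas sketch. The log file, column names, and the deflection metric are hypothetical stand-ins for whatever the actual tooling exposes.

```python
# A minimal sketch of bot-performance analysis; all names are hypothetical.
import pandas as pd

logs = pd.read_csv("bot_conversations.csv")  # one row per conversation

# Share of conversations the bot resolved without a human agent.
deflection_rate = (~logs["escalated_to_agent"]).mean()

# Intents where customers most often end up escalating: good
# candidates for new bot answers or flow improvements.
worst_intents = (logs.groupby("intent")["escalated_to_agent"]
                 .mean()
                 .sort_values(ascending=False)
                 .head(5))

print(f"Bot deflection rate: {deflection_rate:.1%}")
print("Intents with the highest escalation rates:")
print(worst_intents)
```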

About you

If you are passionate about AI, customer support, and leveraging data-driven insights to improve operational efficiency, we encourage you to apply for this exciting opportunity.

  • The ideal candidate has either:
    • recently completed an engineering degree or a similar program, with an interest in AI.

    • proven experience in implementing and working with AI and automation solutions in a modern customer support function.

  • An analytical, data-driven, and problem-solving mindset, with a proven ability to analyze large data sets and methodically define actions for improvement.

  • An interest in and knowledge of building automated workflows.

  • An agile, continuous-learning mindset; you thrive on working in iterative and changing environments.

  • Willingness and ability to empathize with customer pain points and to balance internal efficiency with customer experience.

  • Strong aptitude and passion for written communication, with the ability to craft clear, engaging, and well-tailored content.

What we offer you

Storytel offers a friendly, entrepreneurial, and fast-moving work environment where new ideas and creativity are welcomed. We like doing things in new ways and questioning old methods. The Storytel culture is vital to us, characterized by being welcoming and helpful. We don’t believe in hierarchy and micromanagement; we believe strongly in giving people responsibility and letting them grow with it.

Our culture encourages self-leadership, innovation, and collaboration – our employees are passionate about high-quality content and stories. There is a strong emphasis on continuous development, learning through experimentation, and actively sharing feedback to constantly improve performance. Our approach to leadership is rooted in communication, trust, and genuine care, recognizing that innovation often stems from collective wisdom, enabling individuals to thrive and achieve collective success.

If you believe that Storytel is a place where you could thrive, submit your application and we will get back to you shortly. No need to write a cover letter or submit a CV – your LinkedIn profile together with the application questions is enough. Thanks!



Source link

02Nov

Should you learn how to code in the next decade? | by Ivo Bernardo | Nov, 2024


Or will AI eat up all the software in the world?

Photo by steinart @unsplash.com

Many people today are facing a dilemma: if you’re young, should you pursue a software engineering degree? And if you’re already established in another career, should you make a switch to something involving coding? These questions stem from a larger one: with all the excitement around large language models (LLMs), is it really worth learning to code?

Recently, Google’s CEO stated that around 25% of the company’s new code is now written by AI. Are we seeing the death of coding as we know it?

And these questions are not just asked by people entering the field. Several professionals whose jobs depend on coding are also asking them. Should they continue to invest a large portion of their lives in improving their coding abilities?

To me, the short answer is that coding will still be relevant, but maybe not for the reason you might think. I believe it’s undeniable that coding-related jobs will change a lot in the next decade.

In this post, we’ll look at some predictions about the future of coding and some arguments in favor of learning a programming language. I hope to provide you with a fresh perspective on why



Source link

02Nov

Staff Software Engineer I at Etsy – Brooklyn, NY


Company Description
Etsy is the global marketplace for unique and creative goods. We build, power, and evolve the tools and technologies that connect millions of entrepreneurs with millions of buyers around the world. As an Etsy Inc. employee, whether a team member of Etsy, Reverb, or Depop, you will tackle unique, meaningful, and large-scale problems alongside passionate coworkers, all the while making a rewarding impact and Keeping Commerce Human.

Salary Range:

$197,000.00 – $231,000.00

What’s the role?

We are looking for a Staff Engineer to join the Data Platform team at Etsy. You’ll be a leader in this initiative, driving technical strategy and contributing directly to our most critical projects. This role blends hands-on engineering with strategic leadership. You’ll be responsible for guiding our tools and platforms toward a cohesive vision, ensuring they align with business priorities and unlock new opportunities by incorporating the latest industry capabilities. You’ll stay in close communication with our internal customers and ensure our work is tightly aligned to their areas of greatest need.

This is a full-time position reporting to the Senior Engineering Manager, Data Platform. In addition to salary, you will also be eligible for an equity package, an annual performance bonus, and our competitive benefits that support you and your family as part of your total rewards package at Etsy.

For this role, we are considering candidates based in the United States. Candidates living within commutable distance of Etsy’s Brooklyn Office Hub or in the San Francisco Bay Area may be the first to be considered. For candidates within commutable distance, Etsy requires in-office attendance once or twice per week depending on your proximity to the office. Etsy offers different work modes to meet the variety of needs and preferences of our team. Learn more about our work modes and workplace safety policies here.

What does this team look like at Etsy?

Our mission is to build seamless and intuitive tools for collecting and processing data at scale, in support of machine learning, marketing, business analysis, and ultimately delightful buyer and seller experiences. This is a big job as Etsy has millions of buyers and over 100 million unique, vintage, and handmade items! Our team values are empathy, kindness, accountability, and autonomy and we strive to foster an environment where individuals are supported, respected, and empowered to take ownership and collaborate effectively. 

What does the day-to-day look like?

  • You will contribute directly to projects, shaping our technical direction, and collaborating with our customers.

  • You will engage in development work on projects, guide your team in technical decisions, deeply understand the problem space, and mentor engineers.

  • You will also collaborate closely with engineering managers, product managers, internal data customers, and other engineers to design robust end-to-end solutions and incorporate industry trends that can drive business value.

  • You will help build alignment and support for your ideas and will create presentations, write technical documentation, and share insights widely across a variety of stakeholders. 

  • Of course, this is just a sample of the kinds of work this role will require! You should assume that your role will encompass other tasks, too, and that your job duties and responsibilities may change from time to time at Etsy’s discretion, or as otherwise required by applicable local law.

Qualities that will help you thrive in this role are:

  • 6+ years of industry experience 

  • Ability to balance hands-on engineering with strategic planning

  • Strong communication skills for influencing across teams and leadership levels

  • Ability to guide architectural direction while being receptive to feedback

  • Collaborative mindset, comfortable working with a diverse range of roles and departments

  • Enjoys mentoring and growing other engineers

  • Deep familiarity with at least some of our tools: Airflow, Spark, SQL, Python, Scala. 

  • Familiarity with big data cloud platforms. We use GCP, including BigQuery, Dataproc, and Dataflow (see the sketch after this list).
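As a hedged sketch of how these tools fit together, here is a minimal Airflow DAG that rebuilds a BigQuery aggregate on a schedule. The DAG id, schedule, dataset, table names, and SQL are all hypothetical; they simply illustrate the orchestration pattern.

```python
# A minimal Airflow DAG sketch combining Airflow, Python, SQL, and BigQuery.
# All identifiers below are hypothetical illustrations, not Etsy's pipelines.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery


def refresh_daily_sales() -> None:
    """Rebuild a hypothetical daily aggregate table in BigQuery."""
    client = bigquery.Client()
    client.query(
        """
        CREATE OR REPLACE TABLE analytics.daily_sales AS
        SELECT DATE(order_ts) AS day, SUM(amount) AS revenue
        FROM raw.orders
        GROUP BY day
        """
    ).result()  # block until the query job finishes


with DAG(
    dag_id="daily_sales_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="refresh_daily_sales",
                   python_callable=refresh_daily_sales)
```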

Additional Information

What’s Next
If you’re interested in joining the team at Etsy, please share your resume with us and feel free to include a cover letter if you’d like. As we hope you’ve seen already, Etsy is a place that values individuality and variety. We don’t want you to be like everyone else — we want you to be like you! So tell us what you’re all about.

Our Promise
At Etsy, we believe that a diverse, equitable and inclusive workplace furthers relevance, resilience, and longevity. We encourage people from all backgrounds, ages, abilities, and experiences to apply. Etsy is proud to be an equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. If, due to a disability, you need an accommodation during any part of the interview process, please let your recruiter know. While Etsy supports visa sponsorship, sponsorship opportunities may be limited to certain roles and skills.



Source link
