08Oct

Specialist – SAP Analytics at Sanofi – Hyderabad


Specialist – SAP Analytics

  • Job Title: Specialist – SAP Analytics
  • Location: Hyderabad, India
  • Job type: Permanent, Full time
  • Working Hours: India

About Growing with us

We are seeking a highly skilled and experienced Specialist in SAP Analytics to join our team. As a specialist, you will play a vital role in building and configuring analytical reports in the areas of SAP Business Warehouse and Embedded Analytics (CDS Views), collaborating with other developers to ensure efficient and effective solutions.

The ideal candidate should possess a minimum of 8 years of SAP Analytics application expertise in SAP BW and Embedded Analytics (CDS Views). The SAP Analytics Specialist will validate design and effort estimates, manage change requests, review requirements and specifications, develop analytical solutions, participate in projects, scope new demands, and provide business advisory on SAP solutions. They will focus on Embedded Analytics CDS views and SAP Business Warehouse reports, ensuring data quality, performance, and integration.

Main responsibilities:

Embedded Analytics CDS View Development:

  • Design and develop CDS views to provide data models for embedded analytics applications.
  • Collaborate with business users and data architects to define data requirements and create efficient CDS views.
  • Ensure data quality, consistency, and performance of CDS views.

SAP Analytics Report Development:

  • Create and maintain SAP Analytics reports using tools like SAC, AFO, Web Intelligence, and Lumira based on data models from SAP BW4HANA and/or Embedded Analytics.
  • Develop complex reports and dashboards to meet the analytical needs of business users.
  • Optimize report performance and improve data visualization for effective decision-making.

Data Integration:

  • Integrate data from various sources (e.g., S/4HANA, SAP ECC, external systems) into BW and CDS views.
  • Develop data extraction, transformation, and loading (ETL) processes using BW’s ETL capabilities and work with the Integration team to facilitate data sharing using IICS.

Data Modelling:

  • Design and implement data models in BW and CDS views to support analytical requirements using industry best practices.
  • Ensure data consistency and integrity throughout the data landscape.

Performance Optimization:

  • Analyse report performance and identify bottlenecks.
  • Implement optimization techniques to improve query performance and reduce system load.

Technical Support:

  • Provide technical support for embedded analytics and BW4HANA reporting solutions.
  • Troubleshoot issues and resolve problems related to data, reports, and performance.

Collaboration:

  • Work closely with business analysts, data architects, and developers to understand requirements and deliver effective solutions.
  • Collaborate with other teams to ensure data integration and consistency across the organization.

About you

Experience

  • 8+ years of SAP Analytics experience in total with 2+ years of experience doing ERP delivery in a business-oriented context, preferably in a global company.
  • Extensive experience in implementing and maintaining SAP BW4HANA, Embedded Analytics (CDS Views) & SAC, AFO, WEBI (or any other BI Tool).
  • Must have good knowledge of functional modules like FI, SCI, SD, MM, PP, PM, and eWM for SAP Analytics.  
  • ERP transformation experience in a global company 
  • Executive-level communication and engagement skills, both written and verbal 
  • Strong configuration experience, ability to determine when to use configuration vs. code as well as advanced troubleshooting skills. 
  • Ability to translate functional specifications into technical design documents, provide efforts and cost estimates, and manage delivery of the desired functionality. 
  • Maintain a high level of quality while working on complex problems under pressure and deadlines. 
  • Guide Data Migration teams with master data & transactional data loads. 
  • Project management experience; continuous improvement skills and mindset.
  • Experience with multi-geography, multi-tier service design and management. 
  • Deep understanding of Application and Technology Architecture. 
  • Knowledge of SAP S/4HANA is a must in the Embedded Analytics (CDS Views) context. 
  • Knowledge of agile ways of working is a plus. 
  • Knowledge of current ERP software trends in their area of expertise is a plus.
  • Knowledge of Snowflake and IICS for integration is a plus.
  • Knowledge of AI ML capabilities is a plus.

Soft skills

  • Demonstrated conflict resolution & problem-solving skills in a global environment. 
  • Strong appetite to learn and discover. 
  • Adaptable and open to changes. 
  • Company-first mindset: able to put the interests of the company before their own or those of their teams if applicable. 
  • Excellent analytical skills: able to frame and formalize problem statements and formulate robust solution proposals clearly and concisely. 
  • Autonomous and Results-driven.
  • Role model our 4 values: teamwork, integrity, respect, courage 

 

Education

  • Bachelor’s Degree or equivalent in Information Technology or Engineering

Languages

  • Fluent spoken and written English

When joining our team, you will experience:

  • If you have a passion for SAP Analytics, industrial processes, or back-office functions and are looking for a challenging role where you can make a significant impact, we would love to hear from you.
  • An international work environment, in which you can develop your talent and realize ideas and innovations within a competent team.
  • Your own career path within Sanofi. Your professional and personal development will be supported purposefully.

Pursue progress, discover extraordinary

Better is out there. Better medications, better outcomes, better science. But progress doesn’t happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. So, let’s be those people. 
 
At Sanofi, we provide equal opportunities to all regardless of race, colour, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity 
 
At Sanofi, diversity and inclusion are foundational to how we operate and embedded in our Core Values. We recognize that to truly tap into the richness diversity brings, we must lead with inclusion and have a workplace where those differences can thrive and be leveraged to empower the lives of our colleagues, patients, and customers. We respect and celebrate the diversity of our people, their backgrounds and experiences, and provide equal opportunity for all.


Watch our ALL IN video and check out our Diversity Equity and Inclusion actions at sanofi.com!



Source link

07Oct

2025 Full Time Analyst Program in Quantitative Risk Management at Citi – GRZYBOWSKA 60


You’re the brains behind our work

You’re ready to bring your knowledge from the classroom to the boardroom, and Citi wants to help you get there. Whether it’s honing your skills or building your network, we know that success can’t come without growth. The program equips you with the knowledge and training you need to play a valuable role on your team and establish a long-term career here.

Citi’s Quantitative Risk Management team in Warsaw is looking for Analysts to join its Analyst Program. As part of the Risk Management function, your work can make an immediate impact. You will gain an understanding of Risk best practices and learn about Citi’s businesses and the Risks it manages. Additionally, you will gain a broad understanding of how a portfolio of Risk is managed in a global financial institution using various measurement techniques, including VaR, stress-testing, and scenario analysis. Analysts will also learn about the risks and rewards of individual financial instruments.
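As a rough illustration of the first of those measurement techniques (purely illustrative, not any Citi methodology), a one-day historical VaR can be read straight off the empirical distribution of past returns:

```javascript
// Historical value-at-risk: the loss level that daily returns are not
// expected to exceed at a given confidence level, estimated directly
// from sorted past returns. The return series below is made up.
function historicalVaR(returns, confidence = 0.95) {
  const sorted = [...returns].sort((a, b) => a - b); // worst returns first
  const index = Math.floor((1 - confidence) * sorted.length);
  return -sorted[index]; // report VaR as a positive loss figure
}

const dailyReturns = [-0.031, 0.012, -0.004, 0.021, -0.017, 0.008,
                      -0.009, 0.015, -0.025, 0.003, 0.019, -0.011];
console.log(historicalVaR(dailyReturns, 0.95)); // 0.031, i.e. a 3.1% loss
```

Stress-testing and scenario analysis extend the same idea by replaying or constructing adverse return paths rather than relying only on the observed history.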

The Full Time Analyst Program starts in summer 2025 with an intensive orientation where you will learn more about our business, strategy, and shared vision.

Your time here will look something like this

Citi’s Quantitative Risk Management Analyst Program is a two–year rotational global leadership development program designed to create a pipeline of future Citi Risk Management leaders with experience in multiple Risk disciplines (Treasury Risk Management, Global Market Risk, Model Risk Management and Quantitative Risk & Stress Testing, Risk Reporting).

For all rotations including first placements, Analysts will have an opportunity to gain experience working closely with senior managers to build leadership capabilities. Upon successful completion of the program, Analysts are considered for placements within Risk Management.

Several key elements of the Quantitative Risk Management Analyst program include:
 

  • Senior Access – Analysts have frequent access to Citi’s senior Risk leaders while working with selected managers who guide them to maximize their potential and develop their careers.
  • Global Reach – Exposure to global perspectives and businesses. Analysts have opportunities to build relationships within the Risk Management community that can be leveraged throughout their career.
  • Leadership – Analysts are encouraged to develop their management skills during their rotations.
  • Responsibility – Part of Citi’s growth strategy depends on the entrepreneurial spirit of employees. Analysts can apply their leadership skills in a customized curriculum to accelerate learning and professional development.
  • Empowerment – Analysts are challenged to become involved in the fast-paced analytical and dynamic financial environment of a world-class corporation.

We want to hear from you, if

  • You will graduate with a Master’s degree between December 2024 and July 2025 in Mathematics, Economics, Finance, Statistics, Accounting, Engineering, or Science

  • You possess a strategic and analytical mindset with a global perspective and excellent judgment

  • You are willing to take initiative and offer creative solutions and able to step out of your comfort zone

  • You possess resiliency to work in an environment of change and build and maintain excellent business relationships and are familiar with process improvement

  • You are committed to excellence with a sense of urgency and excitement

  • You have proficiency in analytical, coding or data mining tools (e.g. SAS, SQL, R, Python, Hadoop, Spark, MATLAB, Tableau, PowerBI)

  • You are technologically proficient in Excel, Word, and PowerPoint

  • You have strong written and verbal communication and presentation skills

Please submit your application in English. Please remember that we will begin to review applications before the deadline and therefore encourage you to apply as soon as possible.

#PL_EarlyCareersCSC

——————————————————

Job Family Group:

Management Development Programs

——————————————————

Job Family:

Undergraduate

——————————————————

Time Type:

Full time

——————————————————

Citi is an equal opportunity and affirmative action employer.

Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Citigroup Inc. and its subsidiaries (“Citi”) invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi.

View the “EEO is the Law” poster. View the EEO is the Law Supplement.

View the EEO Policy Statement.

View the Pay Transparency Posting



Source link

07Oct

Exploring How the New OpenAI Realtime API Simplifies Voice Agent Flows | by Sami Maameri | Oct, 2024


Setting up a Voice Agent using Twilio and the OpenAI Realtime API

At the recent OpenAI Dev Day on October 1st, 2024, OpenAI’s biggest release was the reveal of their Realtime API:

“Today, we’re introducing a public beta of the Realtime API, enabling all paid developers to build low-latency, multimodal experiences in their apps.

Similar to ChatGPT’s Advanced Voice Mode, the Realtime API supports natural speech-to-speech conversations using the six preset voices already supported in the API.”

(source: OpenAI website)

As the announcement notes, its key benefits include low latency and speech-to-speech capability. Let’s see how that plays out in practice when building voice AI agents.

It also has an interruption handling feature, so that the realtime stream will stop sending audio if it detects you are trying to speak over it, a useful feature for sure when building voice agents.

In this article we will:

  • Compare what a phone voice agent flow might have looked like before the Realtime API, and what it looks like now,
  • Review a GitHub project from Twilio that sets up a voice agent using the new Realtime API, so we can see what the implementation looks like in practice, and get an idea how the websockets and connections are setup for such an application,
  • Quickly review the React demo project from OpenAI that uses the Realtime API,
  • Compare the pricing of these various options.

Before the OpenAI Realtime API

To get a phone voice agent service working, there are some key services we require:

  • Speech to Text (e.g. Deepgram),
  • LLM/Agent (e.g. OpenAI),
  • Text to Speech (e.g. ElevenLabs).

These services are illustrated in the diagram below

(source https://github.com/twilio-labs/call-gpt, MIT license)

That of course means integrating a number of services, with a separate API request for each part.
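In code, that pre-Realtime-API flow might be sketched as below; the three function bodies are placeholders standing in for calls to Deepgram, an LLM, and ElevenLabs, not their real SDKs:

```javascript
// Sketch of the pre-Realtime-API pipeline: one caller turn requires
// three sequential network hops, each adding latency. The bodies below
// are placeholders; real code would call the respective vendor APIs.
async function speechToText(audioChunk) {   // e.g. Deepgram STT
  return `transcript(${audioChunk})`;
}
async function generateReply(transcript) {  // e.g. OpenAI chat completion
  return `reply(${transcript})`;
}
async function textToSpeech(replyText) {    // e.g. ElevenLabs TTS
  return `audio(${replyText})`;
}

async function handleCallerTurn(audioChunk) {
  const transcript = await speechToText(audioChunk);
  const replyText = await generateReply(transcript);
  return textToSpeech(replyText); // audio to play back to the caller
}
```

Each `await` here is a separate round trip to a separate vendor, which is exactly the latency and integration overhead the Realtime API collapses into one connection.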

The new OpenAI Realtime API allows us to bundle all of those together into a single request, hence the term, speech to speech.

After the OpenAI Realtime API

This is what the flow diagram would look like for a similar new flow using the new OpenAI Realtime API.

Obviously this is a much simpler flow. What is happening is we are just passing the speech/audio from the phone call directly to the OpenAI Realtime API. No need for a speech to text intermediary service.

And on the response side, the Realtime API is again providing an audio stream as the response, which we can send right back to Twilio (i.e to the phone call response). So again, no need for an extra text to speech service, as it is all taken care of by the OpenAI Realtime API.

Let’s look at some code samples for this. Twilio has provided a great GitHub repository example for setting up this Twilio and OpenAI Realtime API flow. You can find it here:

Here are some excerpts from key parts of the code related to setting up

  • the websockets connection from Twilio to our application, so that we can receive audio from the caller, and send audio back,
  • and the websockets connection to the OpenAI Realtime API from our application.

I have added some comments in the source code below to try and explain what is going on, especially regarding the websocket connection between Twilio and our application, and the websocket connection from our application to OpenAI. The triple dots (…) refer to sections of the source code that have been removed for brevity, since they are not critical to understanding the core features of how the flow works.

// On receiving a phone call, Twilio forwards the incoming call request to
// a webhook we specify, which is this endpoint here. This allows us to
// create programmatic voice applications, for example using an AI agent
// to handle the phone call
//
// So, here we are providing an initial response to the call, and creating
// a websocket (called a MediaStream in Twilio, more on that below) to receive
// any future audio that comes into the call
fastify.all('/incoming', async (request, reply) => {
const twimlResponse = `<?xml version="1.0" encoding="UTF-8"?>
<Response>
<Say>Please wait while we connect your call to the A. I. voice assistant, powered by Twilio and the Open-A.I. Realtime API</Say>
<Say>O.K. you can start talking!</Say>
<Connect>
<Stream url="wss://${request.headers.host}/media-stream" />
</Connect>
</Response>`;

reply.type('text/xml').send(twimlResponse);
});

fastify.register(async (fastify) => {

// Here we are connecting our application to the websocket media stream we
// set up above. That means all audio that comes through the phone will come
// to this websocket connection we have set up here
fastify.get('/media-stream', { websocket: true }, (connection, req) => {
console.log('Client connected');

// Now, we are creating a websocket connection to the OpenAI Realtime API.
// This is the second leg of the flow diagram above
const openAiWs = new WebSocket('wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01', {
headers: {
Authorization: `Bearer ${OPENAI_API_KEY}`,
"OpenAI-Beta": "realtime=v1"
}
});

...

// Here we are setting up the listener on the OpenAI Realtime API
// websockets connection. We are specifying how we would like it to
// handle any incoming audio streams that have come back from the
// Realtime API.
openAiWs.on('message', (data) => {
try {
const response = JSON.parse(data);

...

// This response type indicates an LLM response from the Realtime API.
// So we want to forward this response back to the Twilio MediaStream
// websockets connection, which the caller will hear as a response on
// the phone
if (response.type === 'response.audio.delta' && response.delta) {
const audioDelta = {
event: 'media',
streamSid: streamSid,
media: { payload: Buffer.from(response.delta, 'base64').toString('base64') }
};
// This is the part where we send the response back to the Twilio
// MediaStream websockets connection. Notice how we send the
// response back directly. No need for text-to-speech conversion of
// the OpenAI response. The OpenAI Realtime API already provides the
// response as an audio stream (i.e. speech to speech)
connection.send(JSON.stringify(audioDelta));
}
} catch (error) {
console.error('Error processing OpenAI message:', error, 'Raw message:', data);
}
});

// This part specifies how we handle incoming messages to the Twilio
// MediaStream websockets connection, i.e. how we handle audio that comes
// into the phone from the caller
connection.on('message', (message) => {
try {
const data = JSON.parse(message);

switch (data.event) {
// This case ('media') is the state for when there is audio data
// available on the Twilio MediaStream from the caller
case 'media':
// we first check that our OpenAI Realtime API websockets
// connection is open
if (openAiWs.readyState === WebSocket.OPEN) {
const audioAppend = {
type: 'input_audio_buffer.append',
audio: data.media.payload
};
// and then forward the audio stream data to the
// Realtime API. Again, notice how we send the
// audio stream directly, with no speech-to-text conversion
// as would have been required previously
openAiWs.send(JSON.stringify(audioAppend));
}
break;

...
}
} catch (error) {
console.error('Error parsing message:', error, 'Message:', message);
}
});

...

fastify.listen({ port: PORT }, (err) => {
if (err) {
console.error(err);
process.exit(1);
}
console.log(`Server is listening on port ${PORT}`);
});

So, that is how the new OpenAI Realtime API flow plays out in practice.

Regarding the Twilio MediaStreams, you can read more about them here. They are a way to set up a websockets connection between a call to a Twilio phone number and your application. This allows streaming of audio from the call to and from your application, letting you build programmable voice applications over the phone.

To get the code above running, you will also need to set up a Twilio number and ngrok. You can check out my other article over here for help setting those up.

Since access to the OpenAI Realtime API has only just been rolled out, not everyone may have access yet. I initially was not able to access it. Running the application worked, but as soon as it tried to connect to the OpenAI Realtime API I got a 403 error. So in case you see the same issue, it could be that you do not have access yet either.

OpenAI have also provided a great demo for testing out their Realtime API in the browser using a React app. I tested this out myself, and was very impressed with the speed of response from the voice agent coming from the Realtime API. The response feels instant, with no noticeable latency, and makes for a great user experience. I was definitely impressed when testing it out.

Sharing a link to the source code here. It has instructions in the README.md for how to get set up.

This is a picture of what the application looks like once you get it running locally

(source https://github.com/openai/openai-realtime-console, MIT license)

Let’s compare the cost of using the OpenAI Realtime API versus a more conventional approach using Deepgram for speech to text (STT) and text to speech (TTS), with OpenAI GPT-4o for the LLM part.

Comparing the prices from their websites shows that for a one-minute conversation, with the caller speaking half the time and the AI agent speaking the other half, the cost using Deepgram and GPT-4o would be $0.0117/minute, whereas the OpenAI Realtime API would be $0.15/minute.

That means using the OpenAI Realtime API would be roughly 13x the price per minute.
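The per-minute figures can be reproduced from OpenAI's launch pricing, roughly $0.06 per minute of audio input and $0.24 per minute of audio output (assumed here from the October 2024 announcement; prices may have changed since):

```javascript
// Per-minute cost of a conversation where the caller speaks half the
// time and the agent the other half, at the Realtime API's launch
// pricing (audio input ~$0.06/min, audio output ~$0.24/min).
const inputPerMin = 0.06;   // caller audio sent to the API
const outputPerMin = 0.24;  // agent audio returned by the API

const realtimeCost = 0.5 * inputPerMin + 0.5 * outputPerMin; // $/minute
const pipelineCost = 0.0117; // Deepgram + GPT-4o estimate from above

console.log(realtimeCost.toFixed(2));                  // "0.15"
console.log((realtimeCost / pipelineCost).toFixed(1)); // "12.8"
```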

It does sound like a fair amount more expensive, though we should balance that with some of the benefits the OpenAI Realtime API could provide, including

  • reduced latencies, crucial for having a good voice experience,
  • ease of setup due to fewer moving parts,
  • conversation interruption handling provided out of the box.

Also, please do be aware that prices can change over time, so the prices you find at the time of reading this article, may not be the same as those reflected above.

Hope that was helpful! What do you think of the new OpenAI Realtime API? Think you will be using it in any upcoming projects?

While we are here, are there any other tutorials or articles around voice agents and voice AI you would be interested in? I am deep diving into that field a bit just now, so would be happy to look into anything people find interesting.

Happy hacking!



Source link

07Oct

Senior Data Science Engineer at Publicis Groupe – Irving, TX, United States


Company Description

Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.

Job Description

Your Impact:

The purpose of the Senior Associate, Data Science role is to work with clients undergoing a data-driven transformation (DDT) across the globe. The Senior Associate will accelerate and drive the DDT strategy for clients, working in partnership with digital directors, client teams, and practice capability. Furthermore, the role will be a key contributor in ensuring Publicis Sapient is an industry leader in digital thinking, execution, and value realization for clients through data-driven solutions.

You’re passionate about solving real-world problems through the application of machine learning and AI – specifically GenAI.

GenAI Skills:

  • Experience in developing and implementing generative AI models and algorithms, Retrieval-Augmented Generation (RAG), and prompt engineering.
  • Ability to work with large datasets and understand preprocessing techniques.
  • Experience in building and deploying multi-modal generative AI systems in real-world applications.
  • Experience with the GCP ecosystem (ImageGen, Vector Search, Vertex AI) is a must

Machine Learning / Deep Learning Skills:

  • Lead the development and implementation of advanced machine learning models, ensuring a high standard of technical excellence
  • Design, implement and provide guidance on ML engineering workflows, streamlining the deployment of models and systems to production for optimal efficiency
  • Exceptional ML engineering knowledge, showcasing the ability to streamline model deployment and manage complex ML workflows
  • Strong understanding of machine learning and deep learning principles

 

Data Science Skills:

  • Extensive full-time experience in Data Science roles, with a proven background in leading and mentoring small data science project teams
  • Track record of successfully taking multiple data science models/systems into production with GCP ecosystem
  • Strong background and experience with Machine Learning Operations (MLOPs) and Large Language Model Operations (LLMOps)
  • Expertise in collaborative development tools and practices, including peer code reviews and version control (Git)

 

Additional Information

Pay Range: $134,000 – $161,000

The range shown represents a grouping of relevant ranges currently in use at Publicis Sapient. Actual range for this position may differ, depending on location and specific skillset required for the work itself.

Benefits of Working Here:

  • Flexible vacation policy; time is not limited, allocated, or accrued
  • 16 paid holidays throughout the year
  • Generous parental leave and new parent transition program
  • Tuition reimbursement
  • Corporate gift matching program

 

As part of our dedication to an inclusive and diverse workforce, Publicis Sapient is committed to Equal Employment Opportunity without regard for race, color, national origin, ethnicity, gender, protected veteran status, disability, sexual orientation, gender identity, or religion. We are also committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at

hi****@pu*************.com
or you may call us at +1-617-621-0200.



Source link

07Oct

Senior Data Engineer – FinCrime (m/f/d) at Trade Republic – Headquarter


Please note that this position is based in Berlin, Germany – relocation support is provided if required. 

LIGHTHOUSE FOR TALENT 

 

At Trade Republic, we are on a mission to empower everyone to create wealth with easy, safe and free access to the financial system. With over four million customers we are the largest savings platform in Europe, with users holding over €35 billion on our platform.

 

We are seen as the go-to destination for top talent from across the globe. Every day we strive to make Trade Republic a great environment to do the best work of your life, surrounded by exceptional, caring and passionate colleagues. In addition to learning and growing with a world-class team, you will build a destination for millions of people across Europe to create wealth.

ABOUT THE TEAM

 

To support Trade Republic’s mission, we — the Fincrime team — move swaths of data from different sources and make complex calculations in real time, to feed our crime detection and prevention algorithms. We are a mix of skills and cultures, having Data Scientists, Engineers of all specialties, and Product Managers, all working together with one goal: to stop crime.

We design, build and operate tools and pipelines in various domains such as orchestration and execution of data pipelines, database replications (CDC), MLOps and stream processing, all to achieve sub second end-to-end latency data processing.

As a Senior Engineer, you will help us define the standards for delivering high quality, future-proof products and outcomes. Join us in shaping technical roadmaps, scoping projects, overcoming challenges, and advocating for best practices within the team.

WHAT YOU WILL BE DOING

  • Design data-intensive systems collaboratively with your team.
  • Design and build sub second data streams to feed our algorithms.
  • Build resilient systems at scale. We process millions of events daily to inform our decision-making.
  • Empower your teammates by defining, designing and implementing reusable patterns that shorten development time, while balancing flexibility with error proofing.
  • Collaborate with peers across functions, supporting and educating them in data engineering. Discuss. Disagree. Integrate your ideas to the product, take ownership and build for the future.
  • Elevate your teammates by providing continuous and constructive feedback, and mentorship

 

WHAT WE’RE LOOKING FOR

  • You have a background in Software/System Engineering and built production grade data-intensive systems. 
  • You’re proficient in system design, accounting for scaling strategies and trade-offs such as availability vs consistency, and latency vs throughput.
  • You love building real time data streams and achieving sub second end-to-end latency.
  • You are passionate about complex problems and flexible about tasks, tools, programming languages and frameworks. 
  • You enjoy playing with a variety of tools and data processing paradigms, including transactional data, event streams, and batch processing.
  • You have exceptional interpersonal and communication skills.
  • You’re a self-starter who thrives under a high level of autonomy.
  • You’re eager to teach, learn and grow together with your team, you promote knowledge sharing.

WHY YOU SHOULD APPLY NOW 

 

At Trade Republic you will get to do the best work of your career. We are a destination for people who are exceptional at what they do. Every day, we strive to build a world-class team and provide the space for people to do their best. We have a relentless ambition of raising the bar and expect the best from ourselves. Through our dedicated people-first management approach and transparent career paths, you will have the opportunity to develop and grow your career like never before. And because you are surrounded by a diverse team of high performers, you will be learning every day.

You will play an important role in fixing one of the largest challenges we face – closing the pension gap and democratizing wealth. Trade Republic is a place where your job, your career and your passions intersect. If this gets you fired up, just like it does for all of us at Trade Republic, then reach out!

Trade Republic embraces diversity and strives for equal opportunity for everyone. We are committed to building a team that represents a variety of backgrounds, characteristics, perspectives and skills. We encourage applicants of diverse gender, age, sexuality, religion, ethnicity, disability status and parental status to apply to our roles, or those from other intersecting minority groups not listed. The more diverse and inclusive we are as a team, the greater our work will be. If we can support you on DEI related questions during the interview process, please reach out to

ta***************@tr***********.com.



Source link

07Oct

Top 5 Geospatial Data APIs for Advanced Analysis | by Amanda Iglesias Moreno | Oct, 2024


Explore Overpass, Geoapify, Distancematrix.ai, Amadeus, and Mapillary for Advanced Mapping and Location Data

Photo by Kyle Glenn on Unsplash (Source: https://unsplash.com/es/@kylejglenn)

Geographic data plays an important role in many analyses, enabling us to make decisions based on location and spatial patterns. Examples of projects where geodata comes in handy include predicting house prices, optimizing transportation routes, or establishing a marketing strategy for a business.

However, as a data scientist, you will frequently face the challenge of where to obtain this data. Public sources of geographic data do exist, but in many cases the information they provide needs to be reworked before it fits the analyses we want to perform.

This article will evaluate five of the most useful APIs for obtaining large-scale geographic data. We will assess their usage, advantages and disadvantages, and the main applications of the information they provide. Think of this article as a foundation for the use and applications of these APIs, so that you can later delve deeper into all the tools they offer.

The Overpass API allows access to the information available on the OpenStreetMap website. OpenStreetMap is an open geographic…
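As a rough sketch of what an Overpass request looks like, the snippet below builds an Overpass QL query for all cafe nodes inside a bounding box. The bounding-box coordinates and the `amenity=cafe` tag filter are illustrative choices, and `overpass-api.de` is one of the public Overpass endpoints; the query string would normally be POSTed to that endpoint as form data.

```python
OVERPASS_URL = "https://overpass-api.de/api/interpreter"  # a public Overpass endpoint

def build_cafe_query(south, west, north, east):
    """Build an Overpass QL query for all cafe nodes inside a bounding box."""
    bbox = f"{south},{west},{north},{east}"
    return (
        "[out:json][timeout:25];"          # JSON output, 25 s server-side timeout
        f'node["amenity"="cafe"]({bbox});' # nodes tagged amenity=cafe inside the box
        "out body;"                        # return the full node data
    )

# Illustrative bounding box (a small area of central Berlin)
query = build_cafe_query(52.50, 13.37, 52.52, 13.40)
print(query)
# The query would then be sent with, e.g.:
#   requests.post(OVERPASS_URL, data={"data": query})
```

The same pattern generalizes to other OpenStreetMap tags (e.g. `"building"="residential"`) by swapping the key/value pair in the node filter.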



Source link

06Oct

Research Scientist in the Division of Engineering (Civil & Urban Engineering) – Dr. Borja Garcia de Soto at New York University – Abu Dhabi, U.A.E


Description

The S.M.A.R.T. Construction Research Group in the Division of Engineering at New York University Abu Dhabi (NYUAD) seeks to recruit a Research Scientist to work on automation and information technologies applied to construction and civil infrastructure.

About the S.M.A.R.T. Construction Research Group

The S.M.A.R.T. Construction Research Group is a dynamic and interdisciplinary research group led by Prof. Borja García de Soto. The core topics of the S.M.A.R.T. Construction Research Group are Sustainable and resilient construction, Modularization and lean construction, Artificial intelligence, Robotic systems and automation, Technology integration and information modeling.

Qualifications

Applicants must have received a Ph.D. in civil engineering, computer engineering, computer science, or a related field. Expertise in ROS and simulation platforms is required. Additionally, candidates should have experience with ROS2 and be familiar with the development of AI models, large language models (LLMs), and the use of machine learning (ML) techniques. Excellent communication skills in English and scientific creativity are essential. The applicant should have a strong publication record and be capable of supporting grant writing and proposal development to secure funding for research initiatives. Furthermore, extensive experience in laboratory management and research operations, along with in-depth knowledge of safety regulations and best practices in laboratory environments, is required. The ideal candidate will also possess strong organizational skills, a deep understanding of the scientific research process, and the ability to foster a collaborative and innovative research environment.

Application Instructions

Applications will be accepted immediately and candidates will be considered until the position is filled. To be considered, all applicants must submit a cover letter, curriculum vitae with full publication list, statement of research interests, and at least three reference letters. If you have any questions, please email

ga************@ny*.edu

About NYUAD

NYU Abu Dhabi is a degree-granting research university with a fully integrated liberal arts and science undergraduate program in the Arts, Sciences, Social Sciences, Humanities, and Engineering. NYU Abu Dhabi, NYU New York, and NYU Shanghai, form the backbone of NYU’s global network university, an interconnected network of portal campuses and academic centers across six continents that enable seamless international mobility of students and faculty in their pursuit of academic and scholarly activity. This global university represents a transformative shift in higher education, one in which the intellectual and creative endeavors of academia are shaped and examined through an international and multicultural perspective. As a major intellectual hub at the crossroads of the Arab world, NYUAD serves as a center for scholarly thought, advanced research, knowledge creation, and sharing, through its academic, research, and creative activities.

EOE/AA/Minorities/Females/Vet/Disabled/Sexual Orientation/Gender Identity Employer

UAE Nationals are encouraged to apply

Equal Employment Opportunity Statement

For people in the EU, click here for information on your privacy rights under GDPR: www.nyu.edu/it/gdpr

NYU is an Equal Opportunity Employer and is committed to a policy of equal treatment and opportunity in every aspect of its recruitment and hiring process without regard to age, alienage, caregiver status, childbirth, citizenship status, color, creed, disability, domestic violence victim status, ethnicity, familial status, gender and/or gender identity or expression, marital status, military status, national origin, parental status, partnership status, predisposing genetic characteristics, pregnancy, race, religion, reproductive health decision making, sex, sexual orientation, unemployment status, veteran status, or any other legally protected basis. Women, racial and ethnic minorities, persons of minority sexual orientation or gender identity, individuals with disabilities, and veterans are encouraged to apply for vacant positions at all levels.

Sustainability Statement 

NYU aims to be among the greenest urban campuses in the country and carbon neutral by 2040. Learn more at nyu.edu/sustainability



Source link

06Oct

Senior Staff Engineer (RAG) at Nagarro – Remote, India


Company Description

👋🏼We’re Nagarro.

We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 36 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That’s where you come in!

Job Description

REQUIREMENTS:

  • Experience: 10+ Years
  • Engineering experience of at least 3-5 years, with a minimum of 3 years overall experience and 9-12 months specifically in Generative AI (GENAI).
  • Strong skills in RAG, including experience with vector databases, optimization and validation, and setting up end-to-end pipelines.
  • Knowledge of retrieval mechanisms, such as dense and sparse retrieval, and familiarity with indexing techniques and retrieval algorithms.
  • Experience with fine-tuning (SFT) and an understanding of domain adaptation and basic concepts in GENAI.
  • Proficiency in model training and evaluation, data preprocessing, and augmentation.
  • Understanding of transformer models and architectures.
  • Ability to integrate GENAI solutions into existing systems.
  • Knowledge of prompt engineering and experience with large language models (LLMs), including leveraging pre-trained models like GPT and BERT.
  • Technical skills in FastAPI and Python.
  • Strong communication skills to effectively collaborate with team members and stakeholders.

 

RESPONSIBILITIES:

  • Understanding the client’s business use cases and technical requirements, and converting them into a technical design that elegantly meets those requirements.
  • Mapping decisions to requirements and translating them for developers.
  • Identifying different solutions and narrowing down the best option that meets the client’s requirements.
  • Defining guidelines and benchmarks for NFR considerations during project implementation.
  • Writing and reviewing design documents that explain the overall architecture, framework, and high-level design of the application for the developers.
  • Reviewing architecture and design on aspects like extensibility, scalability, security, design patterns, user experience, and NFRs, and ensuring that all relevant best practices are followed.
  • Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it.
  • Understanding technology integration scenarios and applying these learnings in projects.
  • Resolving issues raised during code review through systematic root-cause analysis, and being able to justify the decisions taken.
  • Carrying out POCs to make sure that the suggested design/technologies meet the requirements.

Qualifications

Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.



Source link

06Oct

Senior Systems Integration + Test Engineer at Anduril – Lexington, Massachusetts, United States


Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

The Anduril Imaging team develops state-of-the-art imaging systems, deployed to tackle the most significant security challenges of America and its allies. The Tactical Systems team within Anduril Imaging is seeking an Integration & Test Engineer. The Tactical Systems team is responsible for all aspects of tactical EO/IR system development, from the ideation stage through initial production, with particular focus on airborne applications. In this role, you will work closely with an interdisciplinary technical team to build prototypes, perform lab, ground and flight testing, and mature the prototypes into products. 

WHAT YOU’LL DO

  • Plan, lead, and manage the integration and testing of high-performance imaging systems from first build through field deployment.
  • Perform hands-on integration of the electrical, firmware, mechanical, & software components of the system and provide feedback to the engineering team.
  • Identify and troubleshoot system hardware & software issues.
  • Perform experiments and analyze data to identify root causes and suggest courses of actions to the rest of the engineering team.
  • Own integration planning, system testing, anomaly resolution, and field test planning.
  • Manage the prototyping lab facility and activities, including test hardware, test processes and procedures, environmental safety, and lab resources.
  • Travel to test sites as needed.
  • Provide support to and collaborate with multiple aircraft platform teams, both internal and external.

REQUIRED QUALIFICATIONS

  • 5+ years of related experience.
  • A degree in a relevant technical discipline.
  • Demonstrated ability to work independently with minimal oversight in a fast-paced, dynamic environment.
  • Knowledge of related disciplines such as electrical engineering, optical engineering, mechanical engineering, and thermal management.
  • Demonstrated ability to work in team or individual environments.
  • Demonstrated ability to give technical presentations and provide written reports and documentation.
  • Strong laboratory skills (e.g., design of experiments).
  • Experience in system development from initial concept through test and delivery to the customer.
  • Must be able to obtain and hold a U.S. security clearance.
  • Willingness to travel to test sites, as needed.

PREFERRED QUALIFICATIONS

  • Direct experience with airborne systems and/or imaging optical sensors.
  • Experience working in an R&D setting.
  • Demonstrated ability to manage complex field tests/demonstrations within budgetary/schedule constraints.
  • Advanced sensing domain experience (e.g., EO/IR).
  • A track record of successfully working on fast-paced, highly technical, interdisciplinary teams.
  • Experience with optical alignment and optical performance measurement techniques.
  • Proficiency in an analytical toolchain/language (e.g., MATLAB, Python, C++).

US Salary Range: $142,000–$213,000 USD

 

The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. Actual salary offer may vary based on (but not limited to) work experience, education and/or training, critical skills, and/or business considerations. Highly competitive equity grants are included in the majority of full time offers; and are considered part of Anduril’s total compensation package. Additionally, Anduril offers top-tier benefits for full-time employees, including:

  • Platinum Healthcare Benefits: For U.S. roles, we offer top-tier platinum coverage (medical, dental, vision) that is 100% covered by Anduril for you and 90% covered for your dependents.
    • For UK roles, Private Medical Insurance (PMI): Anduril will cover the full cost of the insurance premium for an employee and dependents.
    • For AUS roles, Private health plan through Bupa: Coverage is fully subsidized by Anduril.
  • Basic Life/AD&D and long-term disability insurance 100% covered by Anduril, plus the option to purchase additional life insurance for you and your dependents.
  • Extremely generous company holiday calendar including a holiday hiatus in December, and highly competitive PTO plans.
  • 16 weeks of paid Caregiver & Wellness Leave to care for a family member, bond with your baby, or tend to your own medical condition.
  • Family Planning & Parenting Support: Fertility (e.g., IVF, preservation), adoption, and gestational carrier coverage with additional benefits and resources to provide support from planning to parenting.
  • Mental Health Resources: We provide free mental health resources 24/7 including therapy, life coaching, and more. Additional work-life services, such as free legal and financial support, available to you as well.
  • A professional development stipend is available to all Andurilians.
  • Daily Meals and Provisions: For many of our offices this means breakfast, lunch and fully stocked micro-kitchens.
  • Company-funded commuter benefits available based on your region.
  • Relocation assistance (depending on role eligibility).
  • 401(k) retirement savings plan – both a traditional and Roth 401(k). (US roles only)

The recruiter assigned to this role can share more information about the specific compensation and benefit details associated with this role during the hiring process.

Anduril is an equal-opportunity employer committed to creating a diverse and inclusive workplace. The Anduril team is made up of incredibly talented and unique individuals, who together are disrupting industry norms by creating new paths towards the future of defense technology. All qualified applicants will be treated with respect and receive equal consideration for employment without regard to race, color, creed, religion, sex, gender identity, sexual orientation, national origin, disability, uniform service, Veteran status, age, or any other protected characteristic per federal, state, or local law, including those with a criminal history, in a manner consistent with the requirements of applicable state and local laws, including the CA Fair Chance Initiative for Hiring Ordinance. We actively encourage members of recognized minorities, women, Veterans, and those with disabilities to apply, and we work to create a welcoming and supportive environment for all applicants throughout the interview process. If you are someone passionate about working on problems that have a real-world impact, we’d love to hear from you!

To view Anduril’s candidate data privacy policy, please visit https://anduril.com/applicant-privacy-notice/.



Source link

06Oct

Efficient Testing of ETL Pipelines with Python | by Robin von Malottki | Oct, 2024


How to Instantly Detect Data Quality Issues and Identify their Causes

Photo by Digital Buggu, obtained from Pexels.com

In today’s data-driven world, organizations rely heavily on accurate data to make critical business decisions. As a responsible and trustworthy Data Engineer, ensuring data quality is paramount. Even a brief period of displaying incorrect data on a dashboard can lead to the rapid spread of misinformation throughout the entire organization, much like a highly infectious virus spreads through a living organism.

But how can we prevent this? Ideally, we would avoid data quality issues altogether. However, the sad truth is that it’s impossible to completely prevent them. Still, there are two key actions we can take to mitigate the impact.

  1. Be the first to know when a data quality issue arises
  2. Minimize the time required to fix the issue

In this blog, I’ll show you how to implement the second point directly in your code. I will create a data pipeline in Python using generated data from Mockaroo and leverage Tableau to quickly identify the cause of any failures. If you’re looking for an alternative testing framework, check out my article on An Introduction into Great Expectations with python.
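The idea of minimizing time-to-fix can be sketched as inline data-quality checks that fail fast and report exactly which rows broke which rule. The record schema (`order_id`, `amount`) and the specific checks below are illustrative assumptions, not the article's actual pipeline:

```python
def validate_orders(rows):
    """Check a batch of order records and fail fast with row-level context."""
    problems = []
    seen, dupes = set(), set()
    for i, row in enumerate(rows):
        # Completeness check: every record needs an order_id
        if row.get("order_id") is None:
            problems.append(f"row {i}: missing order_id")
        # Uniqueness check: order_id must not repeat
        elif row["order_id"] in seen:
            dupes.add(row["order_id"])
        else:
            seen.add(row["order_id"])
        # Validity check: amounts must be non-negative
        if row.get("amount", 0) < 0:
            problems.append(f"row {i}: negative amount {row['amount']}")
    if dupes:
        problems.append(f"duplicate order_id values: {sorted(dupes)}")
    if problems:
        # One exception that names every failing rule and row, so the
        # cause is visible immediately instead of surfacing downstream
        raise ValueError("Data quality checks failed:\n- " + "\n- ".join(problems))
    return rows

orders = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": 5.5},
    {"order_id": 2, "amount": 5.5},
    {"order_id": None, "amount": -3.0},
]
try:
    validate_orders(orders)
    error_message = ""
except ValueError as exc:
    error_message = str(exc)
print(error_message)
```

Calling the validator between the transform and load steps of a pipeline stops bad batches before they ever reach a dashboard, which is exactly where frameworks like Great Expectations slot in as a more fully featured alternative.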



Source link
