08Jun

Senior Software Engineer – Hadoop, Spark with Python, SQL, Tableau / PowerBI at Mastercard – Pune, India


Our Purpose

We work to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. We cultivate a culture of inclusion for all employees that respects their individual strengths, views, and experiences. We believe that our differences enable us to be a better team – one that makes better decisions, drives innovation and delivers better business results.

Title and Summary

Senior Software Engineer – Hadoop, Spark with Python, SQL, Tableau / PowerBI

Overview:

Are you a skilled and visionary Data Analyst with a passion for shaping robust data environments and driving impact? Join our team at Mastercard and play a pivotal role in defining, building, and optimizing data tools, technologies, and processes.

As the Senior Data Analyst, you will spearhead the creation of a reliable and efficient data ecosystem, enabling the delivery of exceptional Data & Analytics Services to both our customers and internal stakeholders.

Role:
As the Senior Data Analyst for Mastercard’s Internal Data lake, you will be the driving force behind the development and execution of cutting-edge data strategies and data environment frameworks. Your expertise will ensure the effective utilization of data, enabling the delivery of dependable Data & Analytics Services. You will collaborate with cross-functional teams, and establish data-related best practices in alignment with Mastercard standards.

• Design, develop, and maintain new data capabilities and infrastructure for Mastercard’s Internal Data lake (Workplace Intelligence Platform)
• Create new data pipelines, data transfers, and compliance-oriented infrastructure to facilitate seamless data utilization within on-premise/cloud environments
• Identify existing data capability and infrastructure gaps or opportunities within and across initiatives and provide subject matter expertise in support of remediation
• Collaborate with technical team and business stakeholders to understand data requirements and translate them into technical solutions
• Work with large datasets, ensuring data quality, accuracy, and performance
• Implement data transformation, integration, and validation processes to support analytics/BI and reporting needs
• Optimize and fine-tune data pipelines for improved speed, reliability, and efficiency
• Implement best practices for data storage, retrieval, and archival to ensure data accessibility and security
• Troubleshoot and resolve data-related issues, collaborating with the team to identify root causes
• Document data processes, data lineage, and technical specifications for future reference
• Participate in code reviews, ensuring adherence to coding standards and best practices
• Collaborate with DevOps teams to automate deployment and monitoring of data pipelines
• Additional tasks as required

Qualifications:
•  Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field.
•  Proven experience as a Senior Data Analyst or similar role.
•  Deep understanding of Data visualization, statistics, hypothesis testing, business intelligence tools, SQL, data cleaning and data lifecycle management.
•  Proficiency in designing and implementing data tools, technologies, and processes.
•  Expertise in data engineering, ETL/ELT processes, data warehousing, and data modeling.
•  Strong command of data integration techniques and data quality management.
•  Hands-on experience with data technologies such as Hadoop, Spark, Python, SQL, Alteryx, NiFi, SSIS, etc.
•  Familiarity with cloud platforms and services, such as AWS, GCP, or Azure.
•  Excellent problem-solving skills and ability to provide innovative data solutions.
•  Strong leadership skills with a proven track record of guiding and mentoring a team.

All About You:
•   5-8 years of experience in Data Warehouse related projects
•   Expertise in Data Engineering and Data Analysis: implementing multiple end-to-end DW projects in Big Data Hadoop environment
•   Experience of building data pipelines through Spark with Scala/Python/Java on Hadoop or Object storage
•   Experience of working with Databases like MS SQL Server, Oracle and have strong SQL knowledge
•  Experience in BI tools like Tableau, PowerBI
•   Experience of working on Alteryx, SSIS, NiFi added advantage
•   Experience of working on automation in data flow process in a Big Data environment.
•   Experience of working in Agile teams
•   Strong analytical skills required for debugging production issues, providing root cause and implementing mitigation plan
•   Strong communication skills – both verbal and written – and strong relationship, collaboration skills and organizational skills
•   Ability to multi-task across multiple projects, interface with external / internal resources and provide technical leadership to junior team members
•   Ability to be high-energy, detail-oriented, proactive and able to function under pressure in an independent environment along with a high degree of initiative and self-motivation to drive results
•   Ability to quickly learn and implement new technologies, and perform POC to explore best solution for the problem statement
•   Flexibility to work as a member of matrix-based, diverse, and geographically distributed project teams
•  Create and maintain a robust data environment that adheres to Mastercard standards and best practices.
•  Oversee the management and optimization of data warehouses and data lakes.

Corporate Security Responsibility

All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:

  • Abide by Mastercard’s security policies and practices;

  • Ensure the confidentiality and integrity of the information being accessed;

  • Report any suspected information security violation or breach; and

  • Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.




08Jun

Logistics Data Specialist at Red Bull – Munich, Germany


Company Description

The Logistics Data Specialist is responsible for the operational logistics management of the individual logistics service providers, ensuring the continuous, unrestricted, and timely availability of Red Bull products in Germany.

Beyond an enthusiasm for all things supply chain management, the Logistics Data Specialist is distinguished by a passion for numbers, data analysis, and optimisation. The foundation for smoothly running processes is collaboration with our external service providers, Customer Service, and the Demand & Supply Planning team.

Job Description

OPERATIONAL LOGISTICS MANAGEMENT

  • Operational management of day-to-day business
  • Responsibility for the accuracy of inventory levels and stock movements
  • Independent review and account assignment of our service providers' invoices
  • Main point of contact for our service providers on operational topics
  • Coordination of warehouse stocktaking as well as month-end and year-end closing processes
  • Ensuring a smooth flow of pallets
  • Support in preparing and conducting regular business reviews and visits to service providers
  • Support in analysing delivery delays and in communicating with customers
  • Monitoring of KPIs

ANALYSIS OF THE SUPPLY CHAIN (SC) SETUP

  • Analysis of forecast data (demand, adjustments to the SC setup, etc.) to provide sufficient storage and handling capacity at the 3PLs
  • Review of the optimal distribution of warehouse locations with regard to efficiency, cost, and sustainability factors
  • Validation of the logistics setup

COST CALCULATIONS

Responsibility for both monthly and ad hoc cost analyses regarding:

  • Distribution costs
  • Warehousing costs
  • Operating costs

PROJECT WORK

  • Analysis of given framework conditions and investigation of optimisation potential
  • Support for cross-departmental projects as well as projects with external partners

PLANNING AND FORECASTING

  • Support in setting up quality blocks and in the timely shipment of goods approaching their best-before date
  • Coordination of inbound volumes in line with available capacity at the 3PLs
  • Preparation of forecasts for the service providers

CUSTOMER SERVICE

  • Support with order and delivery management and with processing event and free-goods orders
  • Assistance with customer administration, complaint handling, and the returns process

PROFESSIONAL CONDUCT AND EXCELLENT EXECUTION

  • Tackling topics with a results-oriented, hands-on mentality
  • Ensuring a collaborative partnership with all stakeholders
  • Attention to detail, with high-quality execution of tasks taken as a given

Qualifications

  • Completed bachelor's or master's degree with a focus on logistics, supply chain, or business administration, or completed vocational training in freight forwarding and logistics
  • Fluent written and spoken German and English
  • Initial professional experience or internships in logistics, logistics management, or supply chain management
  • Ideally, first exposure to the FMCG retail structure
  • Sound knowledge of MS Office (especially Excel) and ideally experience with SAP R/3 (ERP) / S4HANA
  • An ambitious, communicative, and friendly personality and the ability to work effectively in a team-oriented environment
  • Strong analytical skills and experience applying data analysis as the basis for sound decisions
  • The ability to recognise, analyse, and proactively solve problems in order to minimise bottlenecks and risks
  • A strong focus on quality and customer satisfaction as well as a high service orientation
  • An independent and self-reliant way of working, with no trouble handling several tasks at once
  • Enjoyment of working in cross-functional teams with different goals and personalities
  • … and a healthy dose of self-irony 🙂

Additional Information

As an employer, we value diversity and support people in developing their potential and strengths, realising their ideas, and seizing opportunities. This job advertisement is addressed equally to all people, regardless of worldview, age, religion, disability, gender, sexual identity, or ethnic origin.




07Jun

Vice President, Digital Product Management (AI and Data) at Quicken Loans – Remote – Michigan


The Rock Family of Companies is made up of nearly 100 separate businesses spanning fintech, sports, entertainment, real estate, startups and more. We’re united by our culture – a drive to find a better way that fuels our commitment to our clients, our community and our team members. We believe in and build inclusive workplaces, where every voice is heard and diverse perspectives are welcomed. Working for a company in the Family is about more than just a job – it’s about having the opportunity to become the best version of yourself.

Rocket Mortgage, backed by Rocket Companies®, means more opportunities for you to carve your own career path forward. From our desire to revolutionize the way people get mortgages to addressing challenges big or small with outside-the-box solutions, we’re not your typical employer. We’ll provide you with everything you need to make sure you’re successful here.  

 

As a Technology team member, you’re empowered to make an impact, employ your entrepreneurial spirit and build a career customized by you because at Rocket, you can. We are creating digital products that solve life’s most complex moments. You’ll get the chance to shape the future of tech, have your voice heard, get ahead in your career and develop your skills. With a tech career here, there’s no limit to what you can achieve. 

 

Apply today to join a team that offers career growth, amazing benefits and the chance to work with leading industry professionals. 

 

Minimum Qualifications  

  • 15 years of experience or equivalent in a product management role 
  • Undergraduate degree or equivalent competency in information technology, business management, marketing or a related field 
  • Demonstrated expertise in leading and developing self-organizing teams and leaders 
  • Demonstrated expertise in leading and accomplishing large-scale initiatives that span multiple teams with shifting timelines, staffing and dependencies 
  • Demonstrated expertise influencing, negotiating with and gaining buy-in across all levels of the organization 
  • Demonstrated expertise advocating on behalf of team members and projects using deliberate, accurate and persuasive language 
  • Extensive knowledge of modern software development practices and concepts, such as product scalability 
  • Extensive knowledge of people leadership practices, such as change management, recruiting, hiring, performance management and compensation administration 
  • Extensive knowledge of software delivery, design and data roles and the responsibilities of each 

Preferred Qualifications  

  • 4 years of experience or equivalent in a product senior leader role 
  • Graduate degree or equivalent experience in information technology, business management, marketing or a related field 
  • Knowledge of supported business areas 
  • Experience with the financial services and/or fintech industry 

Job Summary 

As our Vice President of Digital Product Management you’ll enable self-organizing digital product management teams by providing vision, resources and a clear path to accomplish organizational goals.

Your specific focus will be leading our Platform Capabilities initiatives – solutions that span multiple products or stages of the client journey. This is the true “electricity” powering our platform, where we focus on centralizing and expanding the highest-value capabilities to scale.

You will develop strategic themes and objectives to ensure all product management initiatives and processes are aligned to the overarching strategy and positioned for success. You will also be responsible for growing and expanding digital product management and defining how it intersects with other business areas and companies. 

Responsibilities 

  • Lead and mentor teams of digital product managers and leaders 
  • Use operational trends and organizational goals to develop strategic themes and objectives 
  • Collaborate with digital product management teams to align product strategies with overarching organizational goals 
  • Promote a culture of product success through analysis, data-driven experimentation, hypothesis testing, measurement and client satisfaction 
  • Lead the maturation of product management best practices and processes in the organization 
  • Identify roadblocks preventing success of strategic themes or objectives and work with appropriate stakeholders to resolve 
  • Drive trends and changes in the broader design, technology, business and regulatory landscapes 
  • Conduct organizational planning, using insights from leadership and partnering teams to inform decisions 
  • Develop short-, mid- and long-term key results with multiple teams and senior leaders to achieve objectives 
  • Collaborate with leadership to support team member onboarding and growth 
  • Identify, implement and drive adoption of best practices within the leadership community 

Benefits and Perks 

Our team members fuel our strategy, innovation and growth, so we ensure the health and well-being of not just you, but your family, too! We go above and beyond to give you the support you need on an individual level and offer all sorts of ways to help you live your best life. We are proud to offer eligible team members perks and health benefits that will help you have peace of mind. Simply put: We’ve got your back. Check out our full list of Benefits and Perks

 

Who We Are 

Rocket Companies® is a Detroit-based company made up of businesses that provide simple, fast and trusted digital solutions for complex transactions. The name comes from our flagship business, now known as Rocket Mortgage®, which was founded in 1985. Today, we’re a publicly traded company involved in many different industries, including mortgages, fintech, real estate and more. We’re insistently different in how we look at the world and are committed to an inclusive workplace where every voice is heard. We’re passionate about the work we do, and it shows. We’ve been ranked #1 for Fortune’s Best Large Workplaces in Financial Services and Insurance List in 2022, named #5 on People Magazine’s Companies That Care List in 2022 and recognized as #7 on Fortune’s list of the 100 Best Companies to Work For in 2022. 

 

Disclaimer 

This is an outline of the primary responsibilities of this position. As with everything in life, things change. The tasks and responsibilities can be changed, added to, removed, amended, deleted and modified at any time by the leadership group. 

 

We are proud equal opportunity employers and committed to providing an inclusive environment based on mutual respect for all candidates and team members. Employment decisions, including hiring decisions, are not based on race, color, religion, national origin, sex, physical or mental disability, sexual orientation, gender identity or expression, age, military or veteran status or any other characteristic protected by state or federal law. We also provide reasonable accommodation to qualified individuals with disabilities in accordance with state and federal law.  

Colorado, New York City, California, and Washington Candidates Only. The salary range for this position is two hundred seven thousand dollars to four hundred thirty-five thousand dollars. The position may also be eligible for an annual bonus and other employment-related benefits including, but not limited to, medical, dental, and vision benefits, a 401K retirement plan, and paid time off. More information regarding these benefits and others can be found here. The information regarding compensation and other benefits included in this paragraph is only an estimate and is subject to revision from time to time as the Company, in its sole and exclusive discretion, deems appropriate. The Company may determine during its review of the proposed compensation and benefits provided for this position, that the compensation and benefits for such position should be reduced. In no event will the Company reduce the compensation for the position to a level below the applicable jurisdictional minimum wage rate for the position.

The Company is an Equal Employment Opportunity employer, and does not discriminate in any hiring or employment practices. The Company provides reasonable accommodations to qualified individuals with disabilities in accordance with state and federal law. Applicants requiring reasonable accommodation in completing the application and/or participating in the employment application process should notify a representative of the Human Resources Team, The Pulse, at Ca*****@my************.com.




07Jun

Implementing Chain-of-Thought Principles in Fine-Tuning Data for RAG Systems | by Cobus Greyling | Jun, 2024


Considering that retrieved documents may not always answer the user’s question, the burden falls on the LLM to discern whether a given document contains the information needed to answer it.

Inspired by the Chain-Of-Thought Reasoning (CoT), the study proposes to break down the instruction into several steps.

  1. Initially, the model should summarise the provided document for a comprehensive understanding.
  2. Then, it assesses whether the document directly addresses the question.

If so, the model generates a final response based on the summarised information.

Otherwise, if the document is deemed irrelevant, the model responds that the document is irrelevant.

Additionally, the proposed CoT fine-tuning method should effectively mitigate hallucinations in LLMs, enabling the LLM to answer questions based on the provided knowledge documents.

Below the instruction for CoT fine-tuning is shown…
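The instruction itself appeared as an image in the original post and is not reproduced here. As a rough illustration only, the three steps above could be encoded in a prompt template along these lines — an assumed sketch, not the study’s actual instruction text:

```python
# Assumed sketch of a CoT-style RAG fine-tuning instruction, following the
# three steps described above (summarise, assess relevance, then answer or
# reply "irrelevant"). Illustrative only; the study's actual instruction
# appeared as an image and is not reproduced here.

COT_INSTRUCTION = (
    "You are given a document and a question.\n"
    "Step 1: Summarise the document for a comprehensive understanding.\n"
    "Step 2: Assess whether the document directly addresses the question.\n"
    "Step 3: If it does, answer based only on your summary; otherwise "
    "respond with exactly 'irrelevant'.\n"
)

def build_cot_prompt(document: str, question: str) -> str:
    """Assemble one training (or inference) prompt from the template."""
    return f"{COT_INSTRUCTION}\nDocument:\n{document}\n\nQuestion:\n{question}\n"

prompt = build_cot_prompt(
    "Mastercard connects financial institutions worldwide.",
    "What does Mastercard do?",
)
```

Pairing such prompts with target outputs that first summarise, then judge relevance, and only then answer (or say “irrelevant”) is what distinguishes this CoT-style fine-tuning data from plain question–answer pairs.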




07Jun

Data Scientist – Time Series Analysis at Qualco – Athens, Attica, Greece


With more than 20 years of proven experience, QUALCO is a leading Fintech solutions provider, offering a wide range of analytics-driven, highly scalable enterprise software solutions in over 35 countries worldwide. Our end-to-end technology solutions cover a wide range of needs for Banking, Financial Services, Utilities, Insurance, Retail organizations, and beyond.

Role Overview

We are looking for ML/AI specialists at all seniority levels to join the Qualco Centre for Applied Research and Technology (QART). We at QART are rapidly expanding our activities in ML/AI, and among other initiatives we are focusing on time-series analysis and the mathematical modelling of physical systems (digital twins), gathering information from those systems using sensors of various types. Our work spans signal conditioning (de-noising/smoothing), imputation of missing values, and deep-learning-based modelling, with ultimate objectives such as anomaly detection, predictive maintenance, and process optimisation.

A Day in the Life of a Data Scientist at Qualco will include:

  • Exploratory data analysis and model-fitting to reveal data features of interest.
  • Identifying and analysing anomalous data (including metadata).
  • Developing conceptual design and models to address project requirements.
  • Developing qualitative and quantitative methods for characterizing datasets.
  • Performing analytic modelling, scripting, and/or programming.
  • Creating and using simulation environments.
  • Using Geographic Information Systems (GIS) for visualisation and user interaction.
  • Working collaboratively and iteratively throughout the data-science lifecycle.
  • Evaluating, documenting, and communicating research processes, analyses, and results to customers, peers, and leadership.
  • Ensuring that all activities and duties are carried out in full compliance with regulatory requirements and supporting the continued implementation of the Group Anti-Bribery and Corruption Policy.

Requirements

Benefits

Your Life @ Qualco

As a #Qmember, you’ll embody our values every day, fostering a culture of teamwork & integrity, passion for results, quality & excellence, client focus, and agility & innovation. Within a truly human-centred environment built on mutual respect and trust, your dedication to our shared vision will not only be recognized but also celebrated, offering boundless opportunities for your personal and professional growth.

Find out more about #LifeatQualco 👉🏼 qualco.group/life_at_qualco_group

Join the #Qteam and enjoy:

💸 Competitive compensation, ticket restaurant card, and annual bonus programs.

💻 Cutting-edge IT equipment, mobile, and data plan.

🏢 Modern facilities, free coffee, beverages, and indoor parking.

👨‍ Private health insurance, onsite occupational doctor, and workplace counselor.

🏝️ Flexible working model.

🤸‍ Onsite gym, wellness facilities, and ping pong room.

💡 Career and talent development tools.

🎓 Mentoring, coaching, personalised annual learning, and development plan.

🌱 Employee referral bonus, regular wellbeing, ESG, and volunteering activities.

At QUALCO, we value diversity and inclusivity. Your race, gender identity and expression, age, ethnicity, or disability make no difference at Qualco. We want to attract, develop, promote, and retain the best people based only on their ability and behavior.

Application Note: All CVs and application materials should be submitted in English.

Disclaimer: QUALCO collects and processes personal data in accordance with the EU General Data Protection Regulation (GDPR). We are bound to use the information provided within your job application for recruitment purposes only and not to share these with any third parties. For more details on the processing of your personal data during the Recruitment procedure, please be informed in the Recruitment Notice, before the submission of your application.




07Jun

Director, Global Success Business Intelligence at Salesforce – Texas – Austin


To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category

Operations

Job Details

About Salesforce

We’re Salesforce, the Customer Company, inspiring the future of business with AI+ Data +CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place.

The Customer Success organization at Salesforce is responsible for customer onboarding, technical support, training, partner certification, and developing long-term relationships with Salesforce customers. The Global Success Business Intelligence group creates data-driven intelligence on customers’ usage, engagement, support effort, satisfaction scores, and other industry-standard KPIs. These insights help customers take advantage of the world’s best CRM platform. Success leaders use these insights to help optimize customer health and influence the best outcomes for Salesforce as well.

The Director, Global Success Business Intelligence will play a crucial role in supporting Senior VPs in the Account Success business within the Customer Success organization. Account Success teams are dedicated to unlocking the full value of Salesforce’s Signature Success plan offering for Salesforce customers. It consists of teams of experts driving customer advocacy and CRM product adoption by deeply understanding Signature customers and orchestrating an exceptional experience to help customers exceed their business goals.

As a leader on the analytics team, you will plan and execute to deliver data and insights that drive decision-making while managing stakeholder relationships. Success requires analytical savvy, problem-solving sophistication, a willingness to roll up your sleeves, and a dedication to make the highest impact possible.

Operational responsibilities include helping build BI tools and dashboards to support business readouts, creating a single source of truth and scalable reporting across various personas within the organization, delivering custom performance analytics to highlight areas of opportunity, and identifying gaps to assist with prioritization.

The ideal candidate has 10+ years of experience managing Data Analytics, Business Intelligence, and/or Data Science teams, with a proven track record of turning business questions into data problems, data problems into insights, and insights into impact. This is an opportunity to stretch and hone your strategic thinking and skills – leading a best-in-class team of analysts who provide deep analytical insights to inform tactics and strategic decisions and to identify areas of opportunity for the Customer Success organization.

Responsibilities

  • Deliver recommendations and insights that help senior partners (VPs and SVPs) manage their critical metrics
  • Lead a high performing team of Data Analysts to develop a scalable and robust system for Account Success 
  • Establish strong partnerships with stakeholders in identifying opportunities, prioritizing roadmaps, defining and implementing execution excellence, and coaching and mentoring team members
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Manage a portfolio of Analytics dashboards and models catering to account success business.
  • Package and present recommendations to executive leadership through a combination of qualitative and quantitative feedback.
  • Play the role of a thought leader in the Analytics transition from descriptive to predictive analytics
  • Skill-Set/Technologies used daily: Tableau, Python/R, SQL, data visualization tools, data pipeline tools, cloud data sources, AI technology and use cases

Required Qualifications

  • 10+ years of professional analytics experience with 5+ years leading Data Analytics, BI, Data Science teams as people leader
  • Analytics experience in customer success or customer support domains
  • Strong business acumen and stakeholder management
  • Ability to build clear and concise presentations, and communicate effectively at every level of the organization
  • Strong project/program management skills and ability to quickly learn new tools within the Analytics space
  • Ability to work with different technical and business stakeholders across functional field teams to understand and deliver against their data needs
  • Ability to proactively identify opportunities for improvements in the business, and prioritize them based on impact and effort
  • Experience in hiring and growing a team. Providing mentorship/coaching to junior team members
  • Self-starter and high degree of motivation to go above and beyond the task at hand and succeed in a collaborative, fast-paced environment.
  • Degree or equivalent relevant experience required. Experience will be evaluated based on the Values & Behaviors for the role (e.g. extracurricular leadership roles, military experience, volunteer roles, work experience, etc.)

Preferred Qualifications

  • Familiarity with the CRM space will be an advantage
  • An advanced degree (MS, MBA) in a quantitative field (e.g. Computer Science, Engineering, Economics, Physics)

Accommodations

If you require assistance due to a disability applying for open positions please submit a request via this Accommodations Request Form.

Posting Statement

At Salesforce we believe that the business of business is to improve the state of our world. Each of us has a responsibility to drive Equality in our communities and workplaces. We are committed to creating a workforce that reflects society through inclusive programs and initiatives such as equal pay, employee resource groups, inclusive benefits, and more. Learn more about Equality at www.equality.com and explore our company benefits at www.salesforcebenefits.com.

Salesforce is an Equal Employment Opportunity and Affirmative Action Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status. Salesforce does not accept unsolicited headhunter and agency resumes. Salesforce will not pay any third-party agency or company that does not have a signed agreement with Salesforce.

Salesforce welcomes all.




07Jun

Coding Data Quality Auditor at Neuberger Berman – Work At Home-Georgia


Bring your heart to CVS Health. Every one of us at CVS Health shares a single, clear purpose: Bringing our heart to every moment of your health. This purpose guides our commitment to deliver enhanced human-centric health care for a rapidly changing world. Anchored in our brand — with heart at its center — our purpose sends a personal message that how we deliver our services is just as important as what we deliver.
 
Our Heart At Work Behaviors™ support this purpose. We want everyone who works at CVS Health to feel empowered by the role they play in transforming our culture and accelerating our ability to innovate and deliver solutions to make health care more personal, convenient and affordable.

Position Summary
Responsible for performing audit and abstraction of medical records (provider and/or vendor) to identify and submit ICD codes to the Centers for Medicare and Medicaid Services (CMS) for the purpose of risk adjustment processes. Diagnosis codes must be appropriate, accurate, and supported by clinical documentation in accordance with all State and Federal regulations and internal policies and procedures. Ability to support coding judgment and decisions using industry standard evidence and tools. Assists senior level staff in providing recommendations for process improvement so that productivity and quality goals can be met or exceeded and operational efficiency and financial accuracy can be achieved. Adhere to stringent timelines consistent with project deadlines and directives. Must possess high level of dependability and ability to meet coding accuracy and production standards. Monitors own work to help ensure quality. Required to act in ethical manner at all times as required under HIPAA’s Privacy and Security rules to handle patient data with uncompromised adherence to the law.

Required Qualifications
Computer proficiency including experience with Microsoft Office products (Word, Excel, Access, PowerPoint, Outlook, industry standard coding applications).

2-3 years of experience with International Classification of Diseases (ICD) codes required.

Minimum of 3 years recent and related experience in medical record documentation review, diagnosis coding, and/or auditing.

Experience with Medicare and/or Commercial and/or Medicaid Risk Adjustment process and Hierarchical Condition Categories (HCC) preferred.
CPC (Certified Professional Coder), CCS (Certified Coding Specialist), or CCS-P (Certified Coding Specialist-Physician) required.

Excellent analytical and problem solving skills.
Demonstrated communication, organizational, and interpersonal skills.

Preferred Qualifications
CRC (Certified Risk Adjustment Coder) preferred.

Education

AA/AS or equivalent experience. Completion of AAPC/AHIMA training program for core credential (CPC, CCS-P) with associated work history/on the job experience equal to approximately 3 years for CPC.

Pay Range

The typical pay range for this role is:

$18.50 – $35.29

This pay range represents the base hourly rate or base annual full-time salary for all positions in the job grade within which this position falls.  The actual base salary offer will depend on a variety of factors including experience, education, geography and other relevant factors.  This position is eligible for a CVS Health bonus, commission or short-term incentive program in addition to the base pay range listed above. 
 
In addition to your compensation, enjoy the rewards of an organization that puts our heart into caring for our colleagues and our communities.  The Company offers a full range of medical, dental, and vision benefits.  Eligible employees may enroll in the Company’s 401(k) retirement savings plan, and an Employee Stock Purchase Plan is also available for eligible employees.  The Company provides a fully-paid term life insurance plan to eligible employees, and short-term and long term disability benefits. CVS Health also offers numerous well-being programs, education assistance, free development courses, a CVS store discount, and discount programs with participating partners.  As for time off, Company employees enjoy Paid Time Off (“PTO”) or vacation pay, as well as paid holidays throughout the calendar year. Number of paid holidays, sick time and other time off are provided consistent with relevant state law and Company policies.  
 
For more detailed information on available benefits, please visit jobs.CVSHealth.com/benefits

We anticipate the application window for this opening will close on: 06/20/2024



Source link

06Jun

Your Own Free Plagiarism Checkers? | by Hasan Aboul Hasan


In this post, I will show you how to detect the percentage of plagiarism in a piece of text. A direct, practical solution I created and tested!

The idea is very simple, acting as a perfect starting point to check plagiarism for any piece of text. I will explain the approach step by step with a practical example, so let’s start!

Let’s keep things simple with a real practical example! Here is what we need:

1- A function that takes care of the chunking of our text

2- A function that surfs the web and checks if this chunk exists

3- Add up all the results and get the percentage

The first thing we need to do is split the text into smaller chunks like phrases, sentences, and paragraphs; notice how I didn’t say “splitting into individual words.” That’s because individual words appear everywhere, which would make the plagiarism test far less effective.

Now, let’s make it dynamic!

import re
from typing import List

def chunk_text(text, chunk_by) -> List[str]:
    if chunk_by == "sentence":
        # Split after sentence-ending punctuation (., !, ?) followed by whitespace
        sentences = re.split(r'(?<=[.!?])\s+', text)
        sentences = [sentence.strip() for sentence in sentences if sentence.strip()]
        return sentences
    elif chunk_by == "paragraph":
        paragraphs = [paragraph.strip() for paragraph in text.split("\n") if paragraph.strip()]
        return paragraphs
    else:
        raise ValueError("Invalid chunk_by value. Choose 'sentence' or 'paragraph'.")

This function takes as input the text and your chosen chunking method, then if you choose:

  • By Sentence: I used a very straightforward method: I split whenever I find a ‘.’ or ‘!’ or ‘?’ between sentences.
  • By Paragraph: I used a similar approach to the one above, which splits the input whenever there’s a new line between paragraphs. In Python, the new line is defined as \n.

This dynamic approach makes it easy to switch between methods based on your preference. Plus, you can experiment yourself and see how the accuracy changes depending on the text and the method used.
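If you want to try the chunking step on its own, here is a minimal, self-contained sketch of the same logic (the sample text is made up for the demo):

```python
import re
from typing import List

def chunk_text(text: str, chunk_by: str) -> List[str]:
    # Sentence mode: split after ., !, or ? followed by whitespace.
    # Paragraph mode: split on newlines. Empty chunks are dropped.
    if chunk_by == "sentence":
        parts = re.split(r'(?<=[.!?])\s+', text)
    elif chunk_by == "paragraph":
        parts = text.split("\n")
    else:
        raise ValueError("Invalid chunk_by value. Choose 'sentence' or 'paragraph'.")
    return [p.strip() for p in parts if p.strip()]

sample = "Plagiarism checks are useful. They compare text!\nA new paragraph here."
print(chunk_text(sample, "sentence"))
print(chunk_text(sample, "paragraph"))
```

Notice that sentence mode also splits across the newline, while paragraph mode keeps the first two sentences together.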

Now that we have split the text into chunks, we need to take each chunk, put it between double quotes like “[chunk]”, and search to see if it matches anything on the internet.

Here’s an example of a unique chunk:

As you can see, no results were found for “Learnwithhasan is the best website” although it’s a well-known fact 😂

💡 Tip 💡

When you’re searching for an exact match of something you should delimit it between double quotes. Like this search engine you’re using knows that you’re looking for this exact phrase and not normal searching.

Back to our code:

def search_chunk(chunk) -> bool:
    try:
        search_results = search_with_serpapi(f"\"{chunk}\"")
        found = len(search_results) > 0
        return found
    except Exception as e:
        print(f"An error occurred: {e}")
        return False

In this function, I used my library SimplerLLM, specifically a method that uses SerperAPI to search Google from the code.

To access Google’s search engine from your code, you would normally need an API key and the corresponding code. With SimplerLLM, however, the function is already built in; you just call the “search_with_serpapi” method.

But you still need to generate your API key from their website, create a .env file, and add your key like this:

SERPER_API_KEY = "YOUR_API_KEY"
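If you don’t want to pull in a dotenv library just for this, a tiny stdlib-only loader is enough for simple KEY = "value" files. This is a hedged sketch, not SimplerLLM’s own loading mechanism, and the key value is a placeholder:

```python
import os
import tempfile

def load_env(path):
    # Minimal .env parser: lines like KEY = "value" become environment variables
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ[key.strip()] = value.strip().strip('"')

# Demo with a temporary .env file
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write('SERPER_API_KEY = "YOUR_API_KEY"\n')
load_env(f.name)
print(os.environ["SERPER_API_KEY"])
```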

So, using the above function, each chunk is searched for on Google, and if a result exists, it returns True; otherwise, it returns False.
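Since search_with_serpapi needs a live API key, here is a sketch of the same True/False logic with the search function injected as a parameter, so you can test offline. The fake_index and fake_search names are invented for this demo; they just stand in for the real API call:

```python
from typing import Callable, List

def search_chunk(chunk: str, search_fn: Callable[[str], List]) -> bool:
    # Quote the chunk so the engine is asked for an exact match
    try:
        results = search_fn(f'"{chunk}"')
        return len(results) > 0
    except Exception as e:
        print(f"An error occurred: {e}")
        return False

# Offline demo: a fake "search engine" backed by a dict
fake_index = {'"learn python online"': ["https://example.com/python"]}
fake_search = lambda query: fake_index.get(query, [])

print(search_chunk("learn python online", fake_search))   # True
print(search_chunk("a phrase nobody wrote", fake_search))  # False
```

Injecting the search function like this also makes the real version easier to unit test later.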

Now it’s time to take these Trues and Falses and turn them into a percentage:

def calculate_plagiarism_score(text, chunk_by) -> float:
    chunks = chunk_text(text, chunk_by)
    total_chunks = len(chunks)
    plagiarised_chunks = 0
    for chunk in chunks:
        if search_chunk(chunk):
            plagiarised_chunks += 1

    plagiarism_score = (plagiarised_chunks / total_chunks) * 100 if total_chunks > 0 else 0
    return plagiarism_score

This function works by first calling the chunking method explained in Step 1, and then counting the total number of these chunks.

Using step 2, we determine whether each chunk is available on the web. If it returns True, it increases the count of plagiarized chunks.

After checking all the chunks, the plagiarism score is calculated by dividing the number of plagiarized chunks by the total number of chunks and multiplying by 100 to get a percentage. Finally, it returns the plagiarism score as a decimal number (float).
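To make the arithmetic concrete: if 3 of 4 chunks are found online, the score is (3 / 4) * 100 = 75.0. The same calculation, reduced to a list of True/False results:

```python
def score(found_flags):
    # Percentage of chunks that were flagged as found on the web
    total = len(found_flags)
    return (sum(found_flags) / total) * 100 if total > 0 else 0

print(score([True, False, True, True]))  # 75.0
print(score([]))  # 0 (empty input is guarded, just like in the full function)
```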

Of course, none of the above functions will produce anything unless you give them an input and print the result.

# MAIN SECTION
import time

start_time = time.time()
text = "YOUR_TEXT"  # The input text

chunk_by = "sentence"  # "sentence" or "paragraph"
plagiarism_score = calculate_plagiarism_score(text, chunk_by)

end_time = time.time()  # Record the end time
runtime = end_time - start_time  # Calculate the runtime

print(f"Plagiarism Score: {plagiarism_score}%")
print(f"Runtime: {runtime} seconds")  # Print the runtime

In this section of the code, you need to enter the text you want to run the plagiarism checker on, pick your preferred method of chunking, and print the results!

You’ll even get the time it took to generate the results (we’ll use it later🤫)

Get The Full Code

SimplerLLM is an open-source Python library designed to simplify interactions with large language models (LLMs). It offers a unified interface for different LLM providers and a suite of tools to enhance language model capabilities.

I created it to facilitate coding, and it did indeed save me a lot of time. But the main reason I’m using it in this script is that I’m planning on improving this code more and making it detect similarities, too, not just exact copies of the text. So, keep an eye out for the Semantic Plagiarism Checker Post!

Now, although the script we created is working properly, why don’t we improve it a little?

For example, when we find that the chunk is available on a webpage somewhere, we can fetch the URLs of these web pages. This simple tweak to the code would make the results of this script a lot more interesting, especially if you turned it into a tool with a nice UI.

Here’s what the new code will look like:

def search_chunk(chunk) -> List[str]:
    result = []
    try:
        search_results = search_with_serpapi(f"\"{chunk}\"")
        found = len(search_results) > 0
        if found:
            result.append(found)
            result.append(search_results[0].URL)
            return result
        else:
            result.append(found)
            result.append("None")
            return result
    except Exception as e:
        print(f"An error occurred: {e}")
        result.append(False)
        result.append("None")
        return result

def calculate_plagiarism_score(text, chunk_by) -> float:
    chunks = chunk_text(text, chunk_by)
    total_chunks = len(chunks)
    plagiarised_chunks = 0
    counter = 1
    for chunk in chunks:
        found, url = search_chunk(chunk)  # Search once per chunk instead of three times
        print(f"Chunk {counter} : {chunk} .... {found} .... {url}")
        counter += 1
        if found:
            plagiarised_chunks += 1
    plagiarism_score = (plagiarised_chunks / total_chunks) * 100 if total_chunks > 0 else 0
    return plagiarism_score

As you can see, I edited the “search_chunk” function so that it returns a list containing True/False depending on whether it found an existing duplicate, plus the link to the webpage that contains the same chunk. I also added a print statement in the “calculate_plagiarism_score” function to print each chunk’s number, the chunk itself, True/False, and the URL of the webpage.
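To see the improved [found, URL] return format without a SerperAPI key, here is a stand-in version where the result objects and the search function are faked. The original code reads a URL attribute off the real SimplerLLM results; everything else below is invented for the demo:

```python
class FakeResult:
    # Stand-in for a search result object exposing a URL attribute
    def __init__(self, URL):
        self.URL = URL

def search_chunk(chunk, search_fn):
    # Returns [found (bool), first matching URL or "None"]
    try:
        search_results = search_fn(f'"{chunk}"')
        if search_results:
            return [True, search_results[0].URL]
        return [False, "None"]
    except Exception as e:
        print(f"An error occurred: {e}")
        return [False, "None"]

fake_search = lambda q: [FakeResult("https://example.com/post")] if "known" in q else []
print(search_chunk("a known sentence", fake_search))      # [True, 'https://example.com/post']
print(search_chunk("a brand new sentence", fake_search))  # [False, 'None']
```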

Here’s what the result will look like:

A major limitation of the above script is that running it on a large amount of data, like multiple blog posts at a time, would be inefficient: every chunk has to be searched for on Google to check whether matching content already exists.

So, How can we fix this? There are two approaches we can try:

  1. Parallel Programming
  2. Search Result Indexing
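As a sketch of the first approach: the per-chunk searches are independent, so they can run concurrently with a thread pool. The fake search function here just simulates the real network call:

```python
from concurrent.futures import ThreadPoolExecutor

def check_chunks_parallel(chunks, search_fn, max_workers=8):
    # Run the per-chunk searches concurrently; results keep input order
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(search_fn, chunks))

# Offline demo with a fake search function
fake_search = lambda chunk: "copied" in chunk
flags = check_chunks_parallel(["a copied sentence", "an original sentence"], fake_search)
print(flags)  # [True, False]
```

For the second approach, caching results keyed by chunk (for example with functools.lru_cache or a persistent dict) avoids re-searching text you have already checked.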

I cover both briefly in the full article if you want to check it out.

If you have other better approaches to improve the plagiarism detector, make sure to drop them in the comments below!



Source link

06Jun

Market Development Specialist – M2P & Automation (Location – Bangalore/Mumbai) at Danaher – IND – Bengaluru North – Beckman Coulter India Private Limited


Beckman Coulter Life Sciences’ mission is to empower those seeking answers to life’s most important scientific and healthcare questions. With a legacy spanning 80+ years, we have long been a trusted partner to our customers, who are working to transform science and healthcare with the next groundbreaking discovery. As part of our team of more than 2,900 associates across 130 countries, you’ll help drive our vision of accelerating answers—and our commitment to excellence.

Beckman Coulter Life Sciences is one of 10 Life Sciences companies of Danaher. Together, the 10 Life Sciences companies of Danaher accelerate discovery, development and delivery of solutions that safeguard and improve human health.

The Market Development Specialist role is –

  • Reporting to the India SWA Marketing Manager, he/she will be responsible for developing Sales of Bioreactor & Automation products in the region.
  • Customer Engagement: Building and maintaining relationships with key customers, industry influencers, and stakeholders in the MICRO BIOREACTOR (M2P) and Automation space. This will involve understanding their needs, gathering market insights, and conducting customer interviews, surveys, and focus groups specific to these products.
  • Market Analysis: Conducting in-depth market research and analysis to identify target markets, customer needs, and competitive landscape for MICRO BIOREACTOR (M2P) and Automation products. This will involve gathering and analyzing data, monitoring industry trends, and assessing market potential.
  • Strategic Planning: Developing strategic plans to penetrate new markets and drive business growth for MICRO BIOREACTOR (M2P) and Automation products. This will involve setting objectives, defining market entry strategies, identifying key success factors, and developing go-to-market strategies tailored to these product lines.
  • Sales Support: Providing support to the sales team by developing sales tools, conducting product training, and assisting with customer presentations/demos specific to MICRO BIOREACTOR (M2P) and Automation products. This will ensure effective communication of the value proposition and differentiation of these products.
  • Technical Support: Providing technical support and troubleshooting assistance to customers using MICRO BIOREACTOR (M2P) applications. This will involve analyzing and resolving technical issues, conducting root cause analysis, and working with the development team to implement solutions.
  • Application Training: Conducting training sessions and workshops for customers to ensure they have a thorough understanding of MICRO BIOREACTOR (M2P) applications and how to effectively utilize them. This may involve creating training materials, delivering presentations, and providing ongoing support and guidance.

Minimum Qualifications:

  • MSc/MTech/ in Biological Sciences /Chem Eng/Biotechnology
  • 10+ years of extensive technical/applications experience in an international environment, or a doctoral degree in the field with 0-5 years of experience.
  • Strong conceptual and practical knowledge of fermentation and bioprocessing (upstream & downstream). Also aware of the competition and the factors that differentiate them in the market.
  • Adept at understanding and presenting complex workflows involving products such as centrifuges, liquid handling systems, and microbioreactors that could form part of the segment-related workflows.
  • Strong problem-solving skills, with a proven track record of delivering judgment based on analysis of multiple sources of information.
  • Strong cultural difference awareness to understand what works within and across countries.
  • Good understanding of business and its operational and financial aspects
  • Strong interpersonal skills, able to work independently with minimal guidance & extend support as a resource for colleagues with less experience.
  • A self-motivated person who looks forward to significant field-based activity (up to 50-70% of their time)

When you join us, you’ll also be joining Danaher’s global organization, where 69,000 people wake up every day determined to help our customers win. As an associate, you’ll try new things, work hard, and advance your skills with guidance from dedicated leaders, all with the support of powerful Danaher Business System tools and the stability of a tested organization.

At Danaher we bring together science, technology and operational capabilities to accelerate the real-life impact of tomorrow’s science and technology. We partner with customers across the globe to help them solve their most complex challenges, architecting solutions that bring the power of science to life. Our global teams are pioneering what’s next across Life Sciences, Diagnostics, Biotechnology and beyond. For more information, visit www.danaher.com.

At Danaher, we value diversity and the existence of similarities and differences, both visible and not, found in our workforce, workplace and throughout the markets we serve. Our associates, customers and shareholders contribute unique and different perspectives as a result of these diverse attributes.



Source link

06Jun

DataOps – La redoute Porto @ Alter Solutions


Job Description

MAIN RESPONSIBILITIES
• Incident Response
o Understand problems from a user perspective and communicate clearly to understand the issue.
o Reproduce bugs or issues that users are facing.
o Apply root cause analysis to quickly and efficiently find the root cause of the problem, patch it, test it, and communicate with the end user.
o Write postmortems summarizing every step of the resolution, helping the team track all issues.
o Monitor existing flows and infrastructure, and perform the same tasks when discovering bugs/issues through monitoring and alerting.
• Maintenance
o Monitor flows and infrastructure to identify potential issues.
o Adapt configurations to keep flows and infrastructure working as expected, keeping operations free of incidents.
• Database Optimization
o Track processing costs and times through dedicated dashboards.
o Alert people who query tables the wrong way, incurring high costs.
o Track down jobs, views, and tables that run inefficiently and incur either high costs or low execution speed.
o Optimize jobs, queries, and tables for both cost and execution speed.
• Infrastructure Management
o Manage infrastructure through Terraform.
o Share and propose good practices.
o Decommission unused infrastructure such as services, tables, or virtual machines.
• Deployments
o Track future deployments with a Data Architect and participate in Deployment Reviews.
o Share and propose good deployment practices.
o Accompany Data Engineers during the entire deployment process.
o Accompany Data Engineers in the following period of active monitoring.
o Ensure diligent application of the deployment process, logging, and monitoring strategy.
o Take over newly deployed flows in the run process.

REQUESTED HARD SKILLS
• Google Cloud Platform: General knowledge of the platform and its various services, with at least one year of experience with GCP.
• Apache Airflow: At least two years of experience with the Airflow orchestrator; experience with Google Composer is a plus.
• Google BigQuery: Extensive experience (at least 4 years) with GBQ, knowing how to optimize tables and queries, and able to design database architecture.
• Terraform: At least two years of experience with Terraform, plus knowledge of good GitOps practices.
• Apache Spark: an optional expertise we would value. Some of our pipelines use pySpark.
• Additional knowledge and experience that are a plus:
o Pub/Sub
o Kafka
o Azure Analysis Services
o Google Cloud Storage optimization



Source link
