17Oct

A Novel Approach to Detect Coordinated Attacks Using Clustering | by Trupti Bavalatti | Oct, 2024


Unveiling hidden patterns: grouping malicious behavior

Clustering is a powerful technique within unsupervised machine learning that groups data points based on their inherent similarities. Unlike supervised learning methods, such as classification, which rely on pre-labeled data to guide the learning process, clustering operates on unlabeled data. This means there are no predefined categories or labels; instead, the algorithm discovers the underlying structure of the data without prior knowledge of what the groupings should look like.

The main goal of clustering is to organize data points into clusters, where data points within the same cluster have higher similarity to each other compared to those in different clusters. This distinction allows the clustering algorithm to form groups that reflect natural patterns in the data. Essentially, clustering aims to maximize intra-cluster similarity while minimizing inter-cluster similarity. This technique is particularly useful in use-cases where you need to find hidden relationships or structure in data, making it valuable in areas such as fraud detection and anomaly identification.

By applying clustering, one can reveal patterns and insights that might not be obvious through other methods, and its simplicity and flexibility make it adaptable to a wide variety of data types and applications.

A practical application of clustering is fraud detection in online systems. Consider an example where multiple users are making requests to a website, and each request includes details like the IP address, time of the request, and transaction amount.

Here’s how clustering can help detect fraud:

  • Imagine that most users are making requests from unique IP addresses, and their transaction patterns naturally differ.
  • However, if multiple requests come from the same IP address and show similar transaction patterns (such as frequent, high-value transactions), it could indicate that a fraudster is making multiple fake transactions from one source.

By clustering all user requests based on IP address and transaction behavior, we could detect suspicious clusters of requests that all originate from a single IP. This can flag potentially fraudulent activity and help in taking preventive measures.
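As a toy illustration, this IP-based grouping can be sketched in a few lines of Python. The request log, the thresholds, and the `suspicious` rule below are all hypothetical, invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical request log: (ip_address, transaction_amount)
requests = [
    ("203.0.113.7", 950.0), ("203.0.113.7", 980.0), ("203.0.113.7", 1010.0),
    ("198.51.100.2", 35.0), ("192.0.2.14", 12.5), ("203.0.113.7", 990.0),
]

# Group requests by IP address -- each group is a candidate cluster.
clusters = defaultdict(list)
for ip, amount in requests:
    clusters[ip].append(amount)

# Flag clusters with many requests and a high average transaction amount.
def suspicious(amounts, min_requests=3, min_avg_amount=500.0):
    return len(amounts) >= min_requests and sum(amounts) / len(amounts) >= min_avg_amount

flagged = [ip for ip, amounts in clusters.items() if suspicious(amounts)]
print(flagged)  # the repeated high-value IP stands out
```

In this toy data, only the IP that sends many high-value requests is flagged; the single low-value requests from other IPs pass through untouched.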

An example diagram that visually demonstrates the concept of clustering is shown in the figure below.

Imagine you have data points representing transaction requests, plotted on a graph where:

  • X-axis: Number of requests from the same IP address.
  • Y-axis: Average transaction amount.

On the left side, we have the raw data. Without labels, we might already see some patterns forming. On the right, after applying clustering, the data points are grouped into clusters, with each cluster representing a different user behavior.

Example of clustering of fraudulent user behavior. Image source (CC BY 4.0)

To group data effectively, we must define a similarity measure, or metric, that quantifies how close data points are to each other. This similarity can be measured in multiple ways, depending on the data’s structure and the insights we aim to discover. There are two key approaches to measuring similarity — manual similarity measures and embedded similarity measures.

A manual similarity measure involves explicitly defining a mathematical formula to compare data points based on their raw features. This method is intuitive: we can use distance metrics like Euclidean distance, cosine similarity, or Jaccard similarity to evaluate how similar two points are. For instance, in fraud detection, we could manually compute the Euclidean distance between transaction attributes (e.g., transaction amount, frequency of requests) to detect clusters of suspicious behavior. Although this approach is relatively easy to set up, it requires careful selection of the relevant features and may miss deeper patterns in the data.
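A minimal sketch of such a manual measure, using only the standard library (the transaction feature vectors here are made up for illustration):

```python
import math

# Hypothetical transaction feature vectors: (amount, requests_per_hour)
txn_a = (120.0, 3.0)
txn_b = (125.0, 4.0)
txn_c = (2500.0, 40.0)

def euclidean(p, q):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# Similar transactions are close together; the outlier is far away.
print(euclidean(txn_a, txn_b))  # small distance
print(euclidean(txn_a, txn_c))  # large distance
```

In practice the features would need scaling (an amount of 2500 dominates a request rate of 40), which is exactly the kind of careful feature handling this approach demands.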

On the other hand, an embedded similarity measure leverages the power of machine learning models to create learned representations, or embeddings of the data. Embeddings are vectors that capture complex relationships in the data and can be generated from models like Word2Vec for text or neural networks for images. Once these embeddings are computed, similarity can be measured using traditional metrics like cosine similarity, but now the comparison occurs in a transformed, lower-dimensional space that captures more meaningful information. Embedded similarity is particularly useful for complex data, such as user behavior on websites or text data in natural language processing. For example, in a movie or ads recommendation system, user actions can be embedded into vectors, and similarities in this embedding space can be used to recommend content to similar users.
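Once embeddings exist, the comparison itself is straightforward. A sketch of cosine similarity over hypothetical behavior embeddings (the vectors below are invented for illustration, not the output of any real model):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    dot = sum(ui * vi for ui, vi in zip(u, v))
    norm_u = math.sqrt(sum(ui * ui for ui in u))
    norm_v = math.sqrt(sum(vi * vi for vi in v))
    return dot / (norm_u * norm_v)

# Hypothetical 4-dimensional behavior embeddings for three users.
user_a = [0.9, 0.1, 0.4, 0.0]
user_b = [0.8, 0.2, 0.5, 0.1]  # behaves like user_a
user_c = [0.0, 0.9, 0.1, 0.8]  # behaves differently

print(cosine_similarity(user_a, user_b))  # close to 1.0
print(cosine_similarity(user_a, user_c))  # much lower
```

The metric is the same as in the manual case; what changes is the space it operates in, since the embedding coordinates already encode learned behavioral structure.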

While manual similarity measures provide transparency and greater control on feature selection and setup, embedded similarity measures give the ability to capture deeper and more abstract relationships in the data. The choice between the two depends on the complexity of the data and the specific goals of the clustering task. If you have well-understood, structured data, a manual measure may be sufficient. But if your data is rich and multi-dimensional, such as in text or image analysis, an embedding-based approach may give more meaningful clusters. Understanding these trade-offs is key to selecting the right approach for your clustering task.

In cases like fraud detection, where the data is often rich and based on behavior of user activity, an embedding-based approach is generally more effective for capturing nuanced patterns that could signal risky activity.

Coordinated fraudulent attack behaviors often exhibit specific patterns or characteristics. For instance, fraudulent activity may originate from a set of similar IP addresses or rely on consistent, repeated tactics. Detecting these patterns is crucial for maintaining the integrity of a system, and clustering is an effective technique for grouping entities based on shared traits. This aids in identifying potential threats by examining the collective behavior within clusters.

However, clustering alone may not be enough to accurately detect fraud, as it can also group benign activities alongside harmful ones. For example, in a social media environment, users posting harmless messages like “How are you today?” might be grouped with those engaged in phishing attacks. Hence, additional criteria are necessary to separate harmful behavior from benign actions.

To address this, we introduce the Behavioral Analysis and Cluster Classification System (BACCS) as a framework designed to detect and manage abusive behaviors. BACCS works by generating and classifying clusters of entities, such as individual accounts, organizational profiles, and transactional nodes, and can be applied across a wide range of sectors including social media, banking, and e-commerce. Importantly, BACCS focuses on classifying behaviors rather than content, making it more suitable for identifying complex fraudulent activities.

The system evaluates clusters by analyzing the aggregate properties of the entities within them. These properties are typically boolean (true/false), and the system assesses the proportion of entities exhibiting a specific characteristic to determine the overall nature of the cluster. For example, a high percentage of newly created accounts within a cluster might indicate fraudulent activity. Based on predefined policies, BACCS identifies combinations of property ratios that suggest abusive behavior and determines the appropriate actions to mitigate the threat.
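A sketch of this ratio-based cluster evaluation (the member records, the property name, and the 60% policy threshold are illustrative assumptions, not part of any specific BACCS implementation):

```python
# Hypothetical cluster members, each with boolean properties.
cluster = [
    {"is_new_account": True,  "violated_policy": True},
    {"is_new_account": True,  "violated_policy": False},
    {"is_new_account": True,  "violated_policy": True},
    {"is_new_account": False, "violated_policy": False},
    {"is_new_account": True,  "violated_policy": True},
]

def property_ratio(members, prop):
    # Fraction of members for which the boolean property holds.
    return sum(m[prop] for m in members) / len(members)

# Hypothetical policy: flag the cluster if >=60% of members are new accounts.
NEW_ACCOUNT_THRESHOLD = 0.6
is_abusive = property_ratio(cluster, "is_new_account") >= NEW_ACCOUNT_THRESHOLD
print(is_abusive)
```

Here 4 of 5 members are new accounts (80%), so the cluster trips the assumed policy and would be routed to whatever mitigation the policy prescribes.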

The BACCS framework offers several advantages:

  • It groups entities based on behavioral similarities, enabling the detection of coordinated attacks.
  • It allows for the classification of clusters by defining relevant properties of the cluster members and applying custom policies to identify potential abuse.
  • It supports automatic actions against clusters flagged as harmful, ensuring system integrity and enhancing protection against malicious activities.

This flexible and adaptive approach allows BACCS to continuously evolve, ensuring that it remains effective in addressing new and emerging forms of coordinated attacks across different platforms and industries.

Let’s understand more with the help of an analogy: Let’s say you have a wagon full of apples that you want to sell. All apples are put into bags before being loaded onto the wagon by multiple workers. Some of these workers don’t like you, and try to fill their bags with sour apples to mess with you. You need to identify any bag that might contain sour apples. To identify a sour apple you need to check if it is soft; the only problem is that some apples are naturally softer than others. You solve the problem of these malicious workers by opening each bag, picking out five apples, and checking whether they are soft or not. If almost all the apples are soft, it’s likely that the bag contains sour apples, and you put it to the side for further inspection later on. Once you’ve identified all the bags with a suspicious amount of softness, you pour out their contents, pick out the healthy apples which are hard, and throw away all the soft ones. You’ve now minimized the risk of your customers taking a bite of a sour apple.

BACCS operates in a similar manner; instead of apples, you have entities (e.g., user accounts). Instead of bad workers, you have malicious users, and instead of the bag of apples, you have entities grouped by common characteristics (e.g., similar account creation times). BACCS samples each group of entities and checks for signs of malicious behavior (e.g., a high rate of policy violations). If a group shows a high prevalence of these signs, it’s flagged for further investigation.

Just like checking the sampled apples in each bag, BACCS uses predefined signals (also referred to as properties) to assess the quality of entities within a cluster. If a cluster is found to be problematic, further actions can be taken to isolate or remove the malicious entities. This system is flexible and can adapt to new types of malicious behavior by adjusting the criteria for flagging clusters or by creating new types of clusters based on emerging patterns of abuse.

This analogy illustrates how BACCS helps maintain the integrity of the environment by proactively identifying and mitigating potential issues, ensuring a safer and more reliable space for all legitimate users.

The system offers numerous advantages:

  • Better Precision: By clustering entities, BACCS provides strong evidence of coordination, enabling the creation of policies that would be too imprecise if applied to individual entities in isolation.
  • Explainability: Unlike some machine learning techniques, the classifications made by BACCS are transparent and understandable. It is straightforward to trace and understand how a particular decision was made.
  • Quick Response Time: Since BACCS operates on a rule-based system rather than relying on machine learning, there is no need for extensive model training. This results in faster response times, which is important for immediate issue resolution.

BACCS might be the right solution for your needs if you:

  • Focus on classifying behavior rather than content: While many clusters in BACCS may be formed around content (e.g., images, email content, user phone numbers), the system itself does not classify content directly.
  • Handle issues with a relatively high frequency of occurrence: BACCS employs a statistical approach that is most effective when clusters contain a significant proportion of abusive entities. It is less effective for harmful events that occur sparsely, and better suited to highly prevalent problems such as spam.
  • Deal with coordinated or similar behavior: The clustering signal primarily indicates coordinated or similar behavior, making BACCS particularly useful for addressing these types of issues.

Here’s how you can incorporate the BACCS framework into a real production system:

Setting up BACCS in production. Image by Author
  1. When entities engage in activities on a platform, you build an observation layer to capture this activity and convert it into events. These events can then be monitored by a system designed for cluster analysis and actioning.
  2. Based on these events, the system needs to group entities into clusters using various attributes — for example, all users posting from the same IP address are grouped into one cluster. These clusters should then be forwarded for further classification.
  3. During the classification process, the system needs to compute a set of specialized boolean signals for a sample of the cluster members. An example of such a signal could be whether the account age is less than a day. The system then aggregates these signal counts for the cluster, such as determining that, in a sample of 100 users, 80 have an account age of less than one day.
  4. These aggregated signal counts should be evaluated against policies that determine whether a cluster appears to be anomalous and what actions should be taken if it is. For instance, a policy might state that if more than 60% of the members in an IP cluster have an account age of less than a day, these members should undergo further verification.
  5. If a policy identifies a cluster as anomalous, the system should identify all members of the cluster exhibiting the signals that triggered the policy (e.g., all members with an account age of less than one day).
  6. The system should then direct all such users to the appropriate action framework, implementing the action specified by the policy (e.g., further verification or blocking their account).
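The steps above can be sketched end-to-end in a few lines. The events, the signal definition, and the "more than 60% new accounts" policy are all hypothetical values chosen to match the example in the text:

```python
from collections import defaultdict

# Step 1: hypothetical observed events, each (user_id, ip_address, account_age_days).
events = [
    ("u1", "203.0.113.7", 0.5), ("u2", "203.0.113.7", 0.2),
    ("u3", "203.0.113.7", 0.8), ("u4", "203.0.113.7", 30.0),
    ("u5", "198.51.100.2", 400.0), ("u6", "198.51.100.2", 250.0),
]

# Step 2: group entities into clusters by IP address.
clusters = defaultdict(list)
for user_id, ip, age in events:
    # Step 3: compute the boolean signal per member (account age < 1 day).
    clusters[ip].append({"user": user_id, "age_lt_1d": age < 1.0})

# Steps 4-6: aggregate the signal, apply the policy, collect members to action.
POLICY_THRESHOLD = 0.6  # >60% new accounts => cluster is anomalous
to_verify = []
for ip, members in clusters.items():
    ratio = sum(m["age_lt_1d"] for m in members) / len(members)
    if ratio > POLICY_THRESHOLD:
        # Only action the members exhibiting the triggering signal.
        to_verify.extend(m["user"] for m in members if m["age_lt_1d"])

print(sorted(to_verify))
```

In this toy run, the first IP cluster has 3 of 4 members under a day old (75%), so exactly those three users are routed to further verification; the older account sharing the IP is left alone, as step 5 prescribes.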

Typically, the entire process, from an entity’s activity to the application of an action, is completed within several minutes. It’s also crucial to recognize that while this system provides a framework and infrastructure for cluster classification, clients/organizations need to supply their own cluster definitions, properties, and policies tailored to their specific domain.

Let’s look at an example where we try to mitigate spam by clustering users by IP when they send an email, and blocking them if more than 60% of the cluster members have an account age of less than a day.

Clustering and blocking in action. Image by Author

Members can already be present in the clusters. A re-classification of a cluster can be triggered when it reaches a certain size or has enough changes since the previous classification.
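One possible trigger condition for re-classification, sketched with assumed size and change thresholds (both numbers are invented for illustration):

```python
# Hypothetical trigger: re-classify a cluster once it reaches a certain size,
# or once enough membership changes have accumulated since the last run.
MIN_SIZE = 100
MIN_CHANGES = 20

def should_reclassify(cluster_size, changes_since_last):
    return cluster_size >= MIN_SIZE or changes_since_last >= MIN_CHANGES

print(should_reclassify(150, 0))   # large enough on its own
print(should_reclassify(50, 25))   # small, but churning
print(should_reclassify(50, 5))    # neither condition met
```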

When selecting clustering criteria and defining properties for users, the goal is to identify patterns or behaviors that align with the specific risks or activities you’re trying to detect. For instance, if you’re working on detecting fraudulent behavior or coordinated attacks, the criteria should capture traits that are often shared by malicious actors. Here are some factors to consider when picking clustering criteria and defining user properties:

The clustering criteria you choose should revolve around characteristics that represent behavior likely to signal risk. These characteristics could include:

  • Time-Based Patterns: For example, grouping users by account creation times or the frequency of actions in a given time period can help detect spikes in activity that may be indicative of coordinated behavior.
  • Geolocation or IP Addresses: Clustering users by their IP address or geographical location can be especially effective in detecting coordinated actions, such as multiple fraudulent logins or content submissions originating from the same region.
  • Content Similarity: In cases like misinformation or spam detection, clustering by the similarity of content (e.g., similar text in posts/emails) can identify suspiciously coordinated efforts.
  • Behavioral Metrics: Characteristics like the number of transactions made, average session time, or the types of interactions with the platform (e.g., likes, comments, or clicks) can indicate unusual patterns when grouped together.

The key is to choose criteria that do not merely reflect ordinary benign behavior but are distinct enough to isolate risky patterns, which leads to more effective clustering.

Defining User Properties

Once you’ve chosen the criteria for clustering, defining meaningful properties for the users within each cluster is critical. These properties should be measurable signals that can help you assess the likelihood of harmful behavior. Common properties include:

  • Account Age: Newly created accounts tend to have a higher risk of being involved in malicious activities, so a property like “Account Age < 1 day” can serve as a strong risk signal.
  • Connection Density: For social media platforms, properties like the number of connections or interactions between accounts within a cluster can signal abnormal behavior.
  • Transaction Amounts: In cases of financial fraud, the average transaction size or the frequency of high-value transactions can be key properties to flag risky clusters.

Each property should be clearly linked to a behavior that could indicate either legitimate use or potential abuse. Importantly, properties should be boolean or numerical values that allow for easy aggregation and comparison across the cluster.

Another advanced strategy is using a machine learning classifier’s output as a property, but with an adjusted threshold. Normally, you would set a high threshold for classifying harmful behavior to avoid false positives. However, when combined with clustering, you can afford to lower this threshold because the clustering itself acts as an additional signal to reinforce the property.

Let’s say there is a model X that catches scams and disables email accounts with a model X score > 0.95. Assume this model is already live in production and is disabling bad email accounts at threshold 0.95 with 100% precision. We want to increase the recall of this model without impacting its precision.

  • First, we need to define clusters that can group coordinated activity together. Let’s say we know that there’s a coordinated activity going on, where bad actors are using the same subject line but different email IDs to send scammy emails. So using BACCS, we will form clusters of email accounts that all have the same subject line in their sent emails.
  • Next, we need to lower the raw model threshold and define a BACCS property. We will integrate model X into our production detection infrastructure and create a property using a lowered model threshold, say 0.75. This property will have a value of “True” for any email account with a model X score >= 0.75.
  • Then we’ll define the anomaly threshold: if more than 50% of entities in a subject-line cluster have this property, we classify the cluster as bad and take down the email accounts that have this property set to True.

So we essentially lowered the model’s threshold and started disabling entities in particular clusters at a significantly lower threshold than what the model currently enforces, and yet we can be sure the precision of enforcement does not drop while recall increases. Let’s understand how:

Suppose we have 6 entities with the same subject line and the following model X scores:

Entities actioned by ML model. Image by Author

If we used the raw model score threshold (0.95), we would have disabled only 2 of the 6 email accounts.

If we cluster entities on subject line text, and define a policy to find bad clusters having greater than 50% entities with model X score >= 0.75, we would have taken down all these accounts:

Entities actioned by clustering, using ML scores as properties. Image by Author

So we increased the recall of enforcement from 33% to 83%. Essentially, even if individual behaviors seem less risky, the fact that they are part of a suspicious cluster elevates their importance. This combination provides a strong signal for detecting harmful activity while minimizing the chances of false positives.
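The arithmetic above can be reproduced with hypothetical scores for six accounts (the scores, account IDs, and thresholds are invented to match the 33% to 83% example):

```python
# Hypothetical model X scores for six email accounts sharing a subject line.
scores = {"a1": 0.97, "a2": 0.96, "a3": 0.85, "a4": 0.80, "a5": 0.76, "a6": 0.60}

RAW_THRESHOLD = 0.95       # standalone enforcement threshold
PROPERTY_THRESHOLD = 0.75  # lowered threshold used as a cluster property
CLUSTER_POLICY = 0.50      # flag cluster if >50% of members have the property

# Standalone model: only accounts above the raw threshold are disabled.
raw_actioned = {a for a, s in scores.items() if s > RAW_THRESHOLD}

# BACCS: if the cluster is anomalous, disable every member with the property.
has_property = {a for a, s in scores.items() if s >= PROPERTY_THRESHOLD}
cluster_flagged = len(has_property) / len(scores) > CLUSTER_POLICY
baccs_actioned = has_property if cluster_flagged else raw_actioned

print(len(raw_actioned), "of", len(scores))    # 2 of 6, ~33% recall
print(len(baccs_actioned), "of", len(scores))  # 5 of 6, ~83% recall
```

With these assumed scores, 5 of the 6 accounts carry the lowered-threshold property, the cluster trips the 50% policy, and enforcement jumps from 2 accounts to 5 while the lone low-scoring account is untouched.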

By lowering the threshold, you allow the clustering process to surface patterns that might otherwise be missed if you relied on classification alone. This approach takes advantage of both the granular insights from machine learning models and the broader behavioral patterns that clustering can identify. Together, they create a more robust system for detecting and mitigating risks and catching many more entities while still keeping a lower false positive rate.

Clustering techniques remain an important method for detecting coordinated attacks and ensuring system safety, particularly on platforms more prone to fraud, abuse or other malicious activities. By grouping similar behaviors into clusters and applying policies to take down bad entities from such clusters, we can detect and mitigate harmful activity and ensure a safer digital ecosystem for all users. Choosing more advanced embedding-based approaches helps represent complex user behavioral patterns better than manual similarity measures.

As we continue advancing our security protocols, frameworks like BACCS play a crucial role in taking down large coordinated attacks. The integration of clustering with behavior-based policies allows for dynamic adaptation, enabling us to respond swiftly to new forms of abuse while reinforcing trust and safety across platforms.

In the future, there is a big opportunity for further research and exploration into complementary techniques that could enhance clustering’s effectiveness. Techniques such as graph-based analysis for mapping complex relationships between entities could be integrated with clustering to offer even higher precision in threat detection. Moreover, hybrid approaches that combine clustering with machine learning classification can be a very effective approach for detecting malicious activities at higher recall and lower false positive rate. Exploring these methods, along with continuous refinement of current methods, will ensure that we remain resilient against the evolving landscape of digital threats.





16Oct

Sales Analytics Specialist at NTT DATA – Johannesburg, South Africa


Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive.

Your day at NTT DATA

The Client Data Analyst is an analytics subject matter expert, accountable for supporting sales with comprehensive data analysis, valuable insights, and strategic decision support. This role operates within a multifaceted environment, encompassing various sales analytics domains, and involves collaborating with cross functional teams. The primary responsibility of the Client Data Analyst is to use strong analytical experience and expertise, combined with advanced technical skills and high business acumen to provide sales insights for business planning and strategic decision-making.
Applicants must have advanced Power BI capabilities, including connecting to multiple data sources, developing optimised schemas, and creating complex DAX formulas.

What you’ll be doing

Key Responsibilities:

  • Participates in tactical and strategic projects with cross-functional virtual teams to achieve specific business objectives.
  • Responsible for analyzing complex business problems and issues using internal and external data to provide insights to decision-makers.
  • Creates documented specifications for reports and analysis based on business needs and required or available data elements.
  • Defines, develops, enhances, and tracks metrics and dashboard requirements to deliver results and provide insight and recommendations on trends.
  • Responsible for data validation using advanced data analysis and tools to ensure analytics are valid, meaningful, and provide actionable and comprehensive insights.
  • Supports the team in answering strategic questions, making insightful data-driven business decisions, and properly designing new initiatives.
  • Provides technical advice, consultation, and knowledge to others within the relevant teams.
  • Creates relevant reports and presents on trends to convey actionable insights to stakeholders.
  • Performs any other related task as required.

Knowledge, Skills and Attributes:

  • Expert understanding of advanced data analysis techniques, and the ability to uncover strategic insights from data.
  • Advanced Power BI capabilities, including connecting to multiple data sources, developing optimised schemas, and creating complex DAX formulas.
  • Advanced understanding of data analysis tools including advanced Excel, and at least one relevant coding language (for example SQL, R, Python etc.).
  • Advanced understanding of relational databases.
  • Advanced knowledge of techniques for transforming and structuring data for analysis and knowledge of ETL processes for data extraction and transformation.
  • Advanced understanding of data security and privacy best practices, especially when working with sensitive or confidential data.
  • Solid understanding of the business domain, industry, and the ability to translate data insights into actionable business recommendations.
  • Solid understanding of sales objectives, market dynamics, and industry trends, with the ability to align data analysis with strategic objectives.
  • Strong collaboration skills, enabling effective teamwork with cross-functional teams and senior management.
  • Ability to translate complex data insights into actionable, understandable strategies for non-technical stakeholders.
  • Excellent communication and presentation skills to convey complex data findings in a clear and actionable manner to non-technical stakeholders.

Academic Qualifications and Certifications:

  • Bachelor’s degree or equivalent in Data Science or related field.
  • Formal training or certification in analytical tools including Power BI, Excel, and analytical scripting languages.

Required experience:

  • 5+ years demonstrated experience using Power BI, statistical and quantitative analysis techniques, data visualization, and data analysis.
  • Proven track record in creating and optimizing reports and dashboards that contribute to strategic decision support.
  • Proven experience in providing data-driven support for business planning and strategic decision-making.
  • Demonstrated experience in a sales or marketing function as a data analyst.

Additional Career Level Description

  • Accountability: Accountable for own targets; work is done independently and reviewed at critical points.

Workplace type:

Hybrid Working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.




16Oct

Infrastructure Engineer Sr/Data Center Network Engineer at PNC – Two PNC Plaza (PA374)


Position Overview

At PNC, our people are our greatest differentiator and competitive advantage in the markets we serve. We are all united in delivering the best experience for our customers. We work together each day to foster an inclusive workplace culture where all of our employees feel respected, valued and have an opportunity to contribute to the company’s success.

As an Infrastructure Engineer Sr/Data Center Network Engineer within PNC’s Data Center Network organization, you will be based in Pittsburgh, PA, Cleveland, OH, Birmingham, AL, Phoenix, AZ or Dallas, TX. The position is primarily based in a PNC location. Responsibilities require time in the office or in the field on a regular basis. Some responsibilities may be performed remotely, at the manager’s discretion.

Preferred Skills Include:
• Sr. Data Center Network Engineer with experience in design, implementation and 24×7 support of complex data center networks across multiple locations.

• Knowledge of the following Data Center network hardware/software:
* Arista 7010TX, 7060CX2, 7280CR2/3, 7280SR2/3 series of Data Center switches running EOS
* Cisco Nexus 9K/7K/5K/3K/2K series of Data Center switches running NX-OS
* Cisco CRS, ASR9K & NCS 55XX devices running IOS-XR
* Cisco ISR4K, ASR1K, Catalyst 8300/9400/4500 series of routers/switches running IOS-XE

• Knowledge of Internet Protocols/network/services/technologies such as TCP/IP, OSPF, BGP, SDN, VXLAN, EVPN, Leaf/Spine fabric, etc.

• Ability to identify network operational tasks and automate them via Python, Ansible, etc.

PNC will not provide sponsorship for employment visas or participate in STEM OPT for this position.

Job Description

  • Provides expertise on platform engineering, while overseeing the team’s effort and meeting customer needs. Uses technical knowledge and industry experience to design, build and maintain technology solutions.
  • Develops software components and hardware for complex projects; aligns these with business strategies and objectives.
  • Provides expertise on best practices, standards, engineering approaches and complex technical resolutions for processes.
  • Places emphasis on quality improvement efforts; ensures that deliverables are secure, scalable and reliable through estimation and correction.
  • Communicates with customers and integrates their needs with development, to meet business objectives.

PNC employees take pride in our reputation, and to continue building upon that, we expect our employees to be:

  • Customer Focused – Knowledgeable of the values and practices that align customer needs and satisfaction as primary considerations in all business decisions and able to leverage that information in creating customized customer solutions.
  • Managing Risk – Assessing and effectively managing all of the risks associated with their business objectives and activities to ensure they adhere to and support PNC’s Enterprise Risk Management Framework.

Qualifications

Successful candidates must demonstrate appropriate knowledge, skills, and abilities for a role. Listed below are skills, competencies, work experience, education, and required certifications/licensures needed to be successful in this position.

Preferred Skills

Competitive Advantages, Customer Solutions, Design, Enterprise Architecture Framework, Machine Learning, Risk Assessments, Technical Knowledge

Competencies

Application Delivery Process, Consulting, Effectiveness Measurement, Industry Knowledge, IT Industry: Trends & Directions, IT Standards, Procedures & Policies, Planning: Tactical, Strategic, Problem Solving

Work Experience

Roles at this level typically require a university / college degree, with 5+ years of industry-relevant experience. Specific certifications are often required. In lieu of a degree, a comparable combination of education, job specific certification(s), and experience (including military service) may be considered.

Education

Bachelors

Certifications

No Required Certification(s)

Licenses

No Required License(s)

Benefits

PNC offers a comprehensive range of benefits to help meet your needs now and in the future. Depending on your eligibility, options for full-time employees include: medical/prescription drug coverage (with a Health Savings Account feature); dental and vision options; employee and spouse/child life insurance; short- and long-term disability protection; 401(k) with PNC match, pension, and stock purchase plans; dependent care reimbursement account; back-up child/elder care; adoption, surrogacy, and doula reimbursement; educational assistance, including select programs fully paid; and a robust wellness program with financial incentives.

In addition, PNC generally provides the following paid time off, depending on your eligibility*: maternity and/or parental leave; up to 11 paid holidays each year; 8 occasional absence days each year, unless otherwise required by law; and between 15 and 25 vacation days each year, depending on career level and years of service.

To learn more about these and other programs, including benefits for full time and part-time employees, visit pncbenefits.com > New to PNC.

*For more information, please click on the following links:

Time Away from Work

PNC Full-Time Benefits Summary

PNC Part-Time Benefits Summary

Disability Accommodations Statement

If an accommodation is required to participate in the application process, please contact us via email at Ac******************@pn*.com. Please include “accommodation request” in the subject line and be sure to include your name, the job ID, and your preferred method of contact in the body of the email. Emails not related to accommodation requests will not receive responses. Applicants may also call 877-968-7762 and say “Workday” for accommodation assistance. All information provided will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.

At PNC we foster an inclusive and accessible workplace.  We provide reasonable accommodations to employment applicants and qualified individuals with a disability who need an accommodation to perform the essential functions of their positions.

Equal Employment Opportunity (EEO)

PNC provides equal employment opportunity to qualified persons regardless of race, color, sex, religion, national origin, age, sexual orientation, gender identity, disability, veteran status, or other categories protected by law.

California Residents

Refer to the California Consumer Privacy Act Privacy Notice to gain understanding of how PNC may use or disclose your personal information in our hiring practices.



Source link

16Oct

Senior Python Developer with SQL Expertise at Acronis – Athina, Greece


Acronis is revolutionizing cyber protection—providing natively integrated, all-in-one solutions that monitor, control, and protect the data that businesses and lives depend on. We are looking for a Python Developer/ Data Engineer to join our mission to create a #CyberFit future and protect all data, applications and systems across any environment. 

We are seeking an experienced Senior Python Developer with strong SQL skills to join our Advanced Analytics team. This role will focus on building, enhancing, and maintaining a scalable Data Warehouse solution. As a senior developer, you will play a key role in designing, implementing, and optimizing data-driven systems, ensuring high-performance data processing, and mentoring junior developers where needed.

WHAT YOU’LL DO

  • Develop New Features: Architect, develop, and implement new functionalities for the Data Warehouse, emphasizing clean, scalable, and maintainable Python code.
  • Data Pipeline Development: Design, build, and maintain efficient, scalable data pipelines and ETL workflows to integrate data from multiple sources into the data warehouse.
  • Performance Optimization: Analyze and optimize both data processing and retrieval systems to ensure performance improvements in large-scale environments.
  • Database Design & Extension: Lead the design and expansion of database schemas, tables, and data marts while ensuring the integrity and efficiency of the data warehouse architecture.
  • Code Reviews & Best Practices: Conduct thorough code reviews, enforce best coding practices, and ensure testable, reliable, and maintainable code with a focus on unit and integration testing.
  • Production Monitoring & Debugging: Proactively monitor production environments, troubleshoot complex issues, and optimize data operations for continuous improvement.
  • Data Quality Assurance: Implement robust data quality frameworks to ensure data accuracy, consistency, and reliability throughout the data lifecycle.

WHAT YOU BRING (EXPERIENCE & QUALIFICATIONS)

  • Python Expertise: 5+ years of hands-on experience with Python, focusing on back-end development, data processing, and automation within high-volume data environments.
  • SQL Mastery: Advanced proficiency in SQL, including writing complex queries, designing and optimizing databases, and fine-tuning query performance in relational databases.
  • Data Pipeline & Workflow Automation: Proven experience in building and optimizing data pipelines and automating workflows with Python-based frameworks.
  • Version Control & Collaboration Tools: Proficiency in Git and experience working with collaboration tools like JIRA for agile development.
  • Big Data Technologies: Familiarity with big data frameworks like Apache Spark, Hadoop, or distributed computing environments. 
  • System Architecture: Experience in designing and scaling data-driven systems with a strong understanding of software design principles, patterns, and testing methodologies. 
  • Strong English Communication: Good command of written and spoken English, with the ability to work in a collaborative, multi-disciplinary environment.

OPTIONAL BUT ADVANTAGEOUS:

  • Linux Expertise: Extensive experience with Linux environments for scripting, automation, and systems operations.
  • Leadership & Mentoring (Preferred): Experience mentoring junior developers and leading technical discussions is a plus.
  • Data Visualization Tools: Knowledge of BI tools like Tableau, Power BI, or similar is a plus but not required.

*Please send in your resume in English.

WHO WE ARE:

Acronis is a global cyber protection company that provides natively integrated cybersecurity, data protection, and endpoint management for managed service providers (MSPs), small and medium businesses (SMBs), enterprise IT departments and home users. Our all-in-one solutions are highly efficient and designed to identify, prevent, detect, respond, remediate, and recover from modern cyberthreats with minimal downtime, ensuring data integrity and business continuity. We offer the most comprehensive security solution on the market for MSPs with our unique ability to meet the needs of diverse and distributed IT environments. 

A Swiss company founded in Singapore in 2003, Acronis offers over twenty years of innovation with 15 offices worldwide and more than 1800 employees in 50+ countries. Acronis Cyber Protect is available in 26 languages in 150 countries and is used by over 20,000 service providers to protect over 750,000 businesses. 

Our corporate culture is focused on making a positive impact on the lives of each employee and the communities we serve. Mutual trust, respect, and the belief that we can contribute to the world every day are the cornerstones of our team. Each member of our “A-Team” plays an instrumental role in driving the success of our innovative and expanding business. We seek individuals who excel in dynamic, global environments and have a never-give-up attitude, contributing to our collective growth and impact.

 Acronis is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, marital status, national origin, physical or mental disability, medical condition, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, gender identity or expression, or any other characteristic protected by applicable laws, regulations and ordinances. 

#LI-RK1

 




16Oct

Senior Data Engineer (Homes.com) at CoStar Group – US-DC Washington DC


Senior Data Engineer (Homes.com)

Job Description

Overview

CoStar Group (NASDAQ: CSGP) is a leading global provider of commercial and residential real estate information, analytics, and online marketplaces. Included in the S&P 500 Index and the NASDAQ 100, CoStar Group is on a mission to digitize the world’s real estate, empowering all people to discover properties, insights and connections that improve their businesses and lives. We have been living and breathing the world of real estate information and online marketplaces for over 35 years, giving us the perspective to create truly unique and valuable offerings to our customers. We’ve continually refined, transformed and perfected our approach to our business, creating a language that has become standard in our industry, for our customers, and even our competitors. We continue that effort today and are always working to improve and drive innovation. This is how we deliver for our customers, our employees, and investors. By equipping the brightest minds with the best resources available, we provide an invaluable edge in real estate.

Homes.com Overview

Homes.com is already one of the fastest-growing real estate portals in the industry; we are driven to be #1. Just ask Brad Bellflower, Chief Change Officer at Apartments.com. After its acquisition in 2014, Apartments.com quickly turned into the most popular place to find a place. Proven success at the highest level – and we’re doing it again with the new Homes.com. Homes.com is a CoStar Group company with 20+ years’ experience in leading and growing digital marketplaces. We pride ourselves on continually improving, innovating, and setting the standard for property search and marketing experiences. With Homes.com we’re building a brand on the cusp of defining the industry. We’re looking for big thinkers, brave leaders, and creative advertising wizards ready to influence a new age of homebuying within a tried-and-true, award-winning company. Learn more about Homes.com.
Homes.com is growing quickly, and we’re looking for a Senior Data Engineer to help accelerate our growth. This role will be responsible for helping build out sitewide tracking architecture on Homes.com, then feeding that data into user-friendly KPI dashboards that track site performance and product usage and provide day-to-day insights on consumer behavior. You will engineer various aspects of our data management, ranging from designing and maintaining automated data infrastructure and creating BI solutions to facilitating data access and analysis for key company teams.

This position is located in Washington, DC and offers 3 days a week onsite with 2 days remote.

Responsibilities

  • Design, build, test, deploy, and maintain real-time and batch-processed data pipelines.
  • Develop and maintain data storage and retrieval systems.
  • Ensure data accuracy, consistency, and security.
  • Integrate data from multiple source systems.
  • Develop and maintain data models.
  • Develop and maintain ETL/ELT processes.
  • Collaborate with the analytics team, data scientists, and business analysts to ensure that data is available and accessible.
  • Collaborate with other cross-functional engineering teams.
  • Become a trusted team member in matters of technical architecture, workflow, design, and code.
  • Advocate for evolution and improvement – both technical and non-technical – within our teams, including new tech, tools, and best practices.

Basic Qualifications

  • Bachelor’s degree required from an accredited, not-for-profit university or college, preferably in Computer Science, Data Science, or a related field. MSc or PhD is a plus.
  • A track record of commitment to prior employers.
  • A demonstrable record of building and launching successful products that use terabytes of data.
  • 5+ years of data pipeline engineering experience, and deep database engineering experience.
  • Ability to analyze technical requirements and design new architectures, data models and ETL/ELT strategies
  • Hands-on experience with cloud-based relational and non-relational databases and proficiency in SQL.
  • Deliver work products that meet specifications, are free of defects, and offer excellent performance.
  • Define Architecture and Development best practices.

Preferred Skills

  • Proficiency in programming languages such as Python, R, and SQL.
  • Performance tuning of database queries (Snowflake, Databricks).
  • Proficiency in data modeling techniques.
  • Experience with No-SQL databases (e.g. DynamoDB).
  • Experience with data pipeline tools (e.g. Glue, Apache Airflow, Lambda).
  • Experience using Confluent Kafka.
  • Working knowledge of cloud computing platforms such as AWS, Azure, and Google Cloud.
  • Knowledge and experience working with data frameworks (e.g. Apache Spark).
  • Monitoring & dashboard metric management (e.g. CloudWatch, Kibana).
  • Knowledge of data security and privacy regulations.
  • Familiarity with data visualization tools such as Tableau and Power BI.

What’s in it for You

When you join CoStar Group, you’ll experience a collaborative and innovative culture working alongside the best and brightest to empower our people and customers to succeed. We offer you generous compensation and performance-based incentives. CoStar Group also invests in your professional and academic growth with internal training, tuition reimbursement, and an inter-office exchange program.

Our benefits package includes (but is not limited to):

  • Comprehensive healthcare coverage: Medical / Vision / Dental / Prescription Drug
  • Life, legal, and supplementary insurance
  • Virtual and in person mental health counseling services for individuals and family
  • Commuter and parking benefits
  • 401(K) retirement plan with matching contributions
  • Employee stock purchase plan
  • Paid time off
  • Tuition reimbursement
  • On-site fitness center and/or reimbursed fitness center membership costs (location dependent), with yoga studio, Pelotons, personal training, group exercise classes
  • Access to CoStar Group’s Diversity, Equity, & Inclusion Employee Resource Groups
  • Complimentary gourmet coffee, tea, hot chocolate, fresh fruit, and other healthy snacks

We welcome all qualified candidates who are currently eligible to work full-time in the United States to apply. However, please note that CoStar Group is not able to provide visa sponsorship for this position.

This position offers a base salary range of $110,200 to $190,000, based on relevant skills and experience, and includes a generous benefits plan.

#LI-AR

CoStar Group is an Equal Employment Opportunity Employer; we maintain a drug-free workplace and perform pre-employment substance abuse testing.




15Oct

2025 Intern – Research Scientist at Adobe – London


Our Company

Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. 

We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

 

The Opportunity:

  • Location: London, UK
  • Duration: 3 months
  • Start date: Between January and August 2025
  • Application: open from 14 October until 8 November 2024
  • Eligibility: Candidates with a Ph.D., Master’s degree, or equivalent experience are encouraged to apply.
  • Hybrid: 3 days per week in the office, 2 days per week work from home


Adobe Research is seeking forward-thinking minds who are passionate about developing research areas that make a difference in Adobe’s products and future endeavors. Specifically, you will be working on problems related but not limited to generative models, inverse rendering, capture, and 3D analysis.

We are currently accepting applications for Research and Engineering internships in Spring, Summer, and Fall 2025. Candidates with a Ph.D., master’s degrees, or equivalent experience are encouraged to apply. Outstanding undergraduates will also be considered. Working directly with a manager and a mentor, you will have access to robust computational resources and world-class product and design teams, empowering you to see your work reach millions of users across the globe.

Summary:

Adobe researchers are actively working on novel computer vision and graphics problems based on classical paradigms as well as generative models. Our goal is to improve content creation workflows by providing control, ease of use, and speed to our users. Our research labs actively publish in leading journals and conferences, work with product teams on high-profile features, and explore new product opportunities across various domains. We especially encourage ongoing teamwork that extends beyond the internship and becomes integral to your PhD or Master’s thesis.

What you will do:

Under the direction of an Adobe Research mentor, you will work on a research problem. In addition to investigating existing solutions, you will experiment with novel ideas, develop proof-of-concept demos, and present your findings. During the internship, you will have the opportunity to connect with other interns and researchers inside Adobe, as well as gain insights into Adobe’s products.

What you will need:

  • Current Ph.D. or Master’s student in computer science, or a related field.
  • Solid research skills and a background in machine learning and diffusion models.
  • Strong communication skills and teamwork experience.
  • Passion for solving real-world problems with web-scale data by using and inventing state-of-the-art machine learning algorithms.

What to expect from the recruitment process:

Our selection process consists of two stages as follows:

  • A 60-minute technical interview with the team.
  • A 45-minute soft-skills interview with the hiring manager.

Adobe is proud to be an Equal Employment Opportunity and affirmative action employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more.
 

Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email ac************@ad***.com or call (408) 536-3015.

Adobe values a free and open marketplace for all employees and has policies in place to ensure that we do not enter into illegal agreements with other companies to not recruit or hire each other’s employees.




15Oct

Assistant Professor of Teaching and/or Assistant Teaching Professor at Worcester Polytechnic Institute – Worcester


JOB TITLE

Assistant Professor of Teaching and/or Assistant Teaching Professor

LOCATION

Worcester

DEPARTMENT NAME

Computer Science Department

DIVISION NAME

Worcester Polytechnic Institute – WPI

JOB DESCRIPTION SUMMARY

The Computer Science Department and the Artificial Intelligence Program at Worcester Polytechnic Institute (WPI) seek applicants for a tenure-track Assistant Professor of Teaching and for a secure-contract Assistant Teaching Professor faculty position in Artificial Intelligence.

Looking for faculty colleagues who engage deeply in both high-impact research and high-quality teaching within a curriculum that embraces student projects and independent learning? With both tenure-track teaching and secure-contract teaching positions, WPI champions faculty and supports them in building careers dedicated to high-quality teaching. We invite you to join our WPI community and help us enrich our diverse and inclusive environment.

JOB DESCRIPTION

About the Position. The Computer Science Department and the Artificial Intelligence Program invite applications for a tenure-track Assistant Professor of Teaching and a secure-contract Assistant Teaching Professor. These are both full-time career teaching-mission faculty positions with targeted starts in either January 2025 or Fall of 2025. We are looking for candidates whose areas of expertise intersect with Artificial Intelligence. Specific themes within Artificial Intelligence include, but are not limited to, Responsible AI, Machine Learning Operations (MLOps), Human-Centric AI, Generative AI, Large Language Models, Data Systems for AI, Personalized Assistants, AI Education, Human-AI Interaction, and Multi-Agent Systems. We also welcome experts in AI applications including, but not limited to, AI for Social Good, AI for Health, AI for Virtual Worlds, AI for Education, and AI for Science and Engineering Advances. In addition to these specific areas, outstanding candidates in any AI area will receive full consideration. Candidates should have a PhD in Computer Science or a closely related field, and the potential for excellence in teaching. The successful candidates will work with colleagues at WPI to build up AI programs and initiatives.

These appointments are both expected to lead to long-term security of employment. One tenure-track teaching-mission position and one secure-contract teaching-mission position are available, both at the Assistant Professor level. The promotion path for the tenure-track Assistant Professor of Teaching position includes the Associate Professor of Teaching rank followed by the Professor of Teaching rank. The promotion path for the secure-contract Assistant Teaching Professor position includes the Associate Teaching Professor rank followed by the Teaching Professor rank. After a successful year of service, faculty on the secure-contract path are offered two renewable three-year contracts in sequence. After seven years of service, secure-contract faculty are offered renewable five-year contracts to provide long-term security of employment. Successful candidates will join our faculty community composed of teaching professors and tenure-track dual-mission (teaching and research) faculty who jointly support our curriculum and projects.

Successful candidates will teach and advise projects at the undergraduate and graduate levels. With a course load of five classes annually and four undergraduate terms in the academic year, teaching-mission faculty often teach a single class at a time along with project advising.

About the Department. The Computer Science Department has 40 full-time faculty with research and teaching expertise in core Computer Science and CS-related interdisciplinary fields. Computer Science faculty collaborate in interdisciplinary programs (Artificial Intelligence, Bioinformatics and Computational Biology, Cyber Security, Data Science, Interactive Media and Game Development, Learning Sciences, and Neuroscience), each of which includes faculty from diverse disciplines beyond Computer Science. Most of these programs are housed in a new state-of-the-art academic building near the building housing Computer Science. You would be joining a community of strong researchers and caring educators with expertise in AI and fields closely related to AI, including machine/deep learning, NLP, graph mining, data science, computer vision, reinforcement learning, generative AI, AI in health, educational data mining, scalable data systems, and more, focused on tackling real-world challenges with societal impact. Faculty research is supported by the NSF, NIH, DoE, and other federal and private funding sources, with recent annual new research funding averaging around $10 million. Computer Science itself has over 1,250 undergraduate students, over 65 Ph.D. students, and around 190 students seeking master’s-level degrees. Our interdisciplinary programs include faculty from other disciplines beyond Computer Science, and they are home to students in the respective interdisciplinary majors at both the undergraduate and graduate levels.

About the University. WPI is a selective private university with an innovative curriculum centered on science, engineering, arts, business, and global studies. Ranked highly by US News & World Report among national comprehensive universities, WPI has roughly 5,500 undergraduates and 2,600 graduate students. WPI is an internationally recognized leader in project-based learning and global education. Most undergraduate students at WPI participate in a global project experience, completing academic projects at WPI’s 50+ project centers across six continents. We are most proud of our No. 1 ranking for “faculty who best combine research and teaching” from the Wall Street Journal and Times Higher Ed. Located one hour west of Boston, the university’s campus is in Worcester, Massachusetts, a thriving 21st-century college city recognized as a growing hub of scientific and technological innovation, and known for a thriving economy, rich culture, and quality of life. The University of Massachusetts Medical School, numerous technology companies, and many colleges and universities are in the immediate area, making it ideal for two-career families. We support our faculty to help them succeed and provide faculty mentoring for our early- and mid-career colleagues.

Questions about the hiring process should be sent to re*****@cs.edu. More information about the positions and instructions for applying are available at https://www.wpi.edu/academics/departments/computer-science/hiring. Applicants will need to include detailed teaching and diversity statements; a Curriculum Vitae; and contact information for at least three references.

The positions will start in January or August 2025. The deadline for applications is November 15, 2024, with applications continuing to be considered after that date until the positions are filled.

FLSA STATUS

United States of America (Exempt)

WPI is an Equal Opportunity Employer that actively seeks to increase the diversity of its workplace. All qualified candidates will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability. It seeks individuals with diverse backgrounds and experiences who will contribute to a culture of creativity, collaboration, inclusion, problem solving, innovation, high performance, and change making. It is committed to maintaining a campus environment free of harassment and discrimination.




15Oct

AI Feels Easier Than Ever, But Is It Really? | by Anna Via | Oct, 2024


The 4 Big Challenges of building AI products

Picture by ynsplt on Unsplash

A few days ago, I was speaking at an event about how to move from using ChatGPT at a personal level to implementing AI-powered technical solutions for teams and companies. We covered everything from prompt engineering and fine-tuning to agents and function calling. One of the questions from the audience stood out to me, even though it was one I should have expected: “How long does it take to get an AI-powered feature into production?”

In many ways, integrating AI into features can be incredibly easy. With recent progress, leveraging a state-of-the-art LLM can be as simple as making an API call. The barriers to entry for using and integrating AI are now very low. There is a big “but,” though: getting an AI feature into production while accounting for all the risks linked to this new technology can be a real challenge.

And that’s the paradox: AI feels easier and more accessible than ever, but its open-ended (free input / free output…




15Oct

Global Data & Analytics Manager at Publicis Groupe – London, United Kingdom


Company Description

Spark Foundry is part of a thriving global media network within Publicis Groupe, one of the world’s leading communications groups. We are globally connected, with over 8,000 employees in 110 offices across 70 countries.

Who We Are in the UK

Spark Foundry, the Acceleration Agency.

We help brands to identify, learn and respond to opportunities faster than the competition.

Every client has an area of their business they need to accelerate, from short-term goals to long-term transformation.

We’ve proven our approach during the most difficult year on record. Now we’re using it to provide a launchpad for their future.

Come be an accelerator with us.

How we accelerate

  • Planning: an approach that works in practice rather than theory, arming planners with the ability to create cutting edge campaigns
  • Intelligence: a suite of tools that give definitive answers to big questions, and uncovers actionable insights about real people
  • Trading: a model built on flexibility and trusted relationships, underpinned with bold guarantees
  • Relationships: a culture of asking challenging questions to better understand the brief – we are not a ‘yes’ agency
  • People: a strong history of recruiting talent from diverse backgrounds and accelerating their careers

Our Commitment

We are diverse through our experience, our people, and the clients we look after – and we celebrate that diversity. Our people hold us accountable to our beliefs via regular surveys, our grassroots D&I team, The Collective, and our internal next-generation board, Firestarters. We hold regular events and work continually to generate ideas and initiatives and to educate our people, ensuring we are a diverse and inclusive agency.

As part of our dedication to create an inclusive and diverse workforce, Spark Foundry is committed to equal access to opportunity for people without regard to race, age, sex, disability, neurodiversity, sexual orientation, gender identity or religion. 

Job Description

Are you passionate about solving data challenges and turning numbers into actionable insights? If you have a proven track record in data analysis and a keen eye for detail, we would love to have you on our team!

Experience in the marketing industry is a plus, but we’re looking for someone who is not only technically skilled but also motivated to grow and learn. 

What You’ll Be Doing Day to Day:

  • Analysing, visualising, and storytelling with data: You’ll turn complex datasets into clear, actionable insights and communicate them effectively to various stakeholders.
  • Building data pipelines: Develop efficient and scalable data pipelines in BigQuery to automate processes and ensure data accuracy.
  • Extracting and transforming data: Use SQL to find and manipulate the data you need for in-depth analysis.
  • Solving challenging problems: Tackle complex data problems with creativity and an analytical mindset.
  • Building predictive models: Leverage machine learning and statistical methods to forecast trends and outcomes.
  • Running statistical tests: Apply statistical analysis techniques, such as A/B testing, regression analysis, and hypothesis testing, to validate your findings.
  • Collaborating with a talented team: Work closely with a diverse team of experts, sharing insights and contributing to collective problem-solving efforts.

Qualifications

To be successful in this role you will need:

  • Advanced Data Analysis Skills: You have an understanding of data analysis techniques, with proficiency in Python for data manipulation (or are excited to learn Python). You are familiar with statistical methods like OLS regression, hypothesis testing, and probability theory.
  • SQL: BigQuery is your tool of choice, and you can write efficient SQL queries to extract, transform, and analyse data quickly and accurately (or you can’t wait to learn all things SQL).
  • Data Visualisation: Whether using Looker, Google Sheets, or Python libraries, you can create visually impactful reports that make complex data easy to understand.
  • Curiosity and Problem-Solving: You enjoy tackling difficult problems and are always eager to learn new skills and tools to improve your work.
  • Quick Learner: You adapt quickly to new tools and environments, always eager to expand your skillset and stay ahead of industry trends.
  • Team Collaboration: You thrive in a collaborative setting, contributing your expertise while valuing the perspectives of others.
  • Clear Communication: You can translate complex data into meaningful insights and communicate them effectively to both technical and non-technical audiences.
  • Client-Focused Mindset: You are skilled at building strong relationships with clients, delivering insightful presentations, and adapting to their evolving needs.

Bonus Points If You:

  • Have experience with decision trees or time series modelling.
  • Are well-versed in marketing media industry KPIs.
  • Have expertise in test-and-learn strategies.
  • Are experienced in causal analysis and A/B testing.

Additional Information

Spark Foundry has fantastic benefits on offer to all of our employees. In addition to the classics – Pension, Life Assurance, Private Medical and Income Protection Plans – we also offer:

  • WORK YOUR WORLD – The opportunity to work anywhere in the world, where there is a Publicis office, for up to 6 weeks a year.
  • REFLECTION DAYS – Two additional days of paid leave to step away from your usual day-to-day work and create time to focus on your well-being and self-care.
  • HELP@HAND BENEFITS – 24/7 helpline to support you on a personal and professional level. Access to remote GPs, mental health support and CBT. Wellbeing content and lifestyle coaching.
  • FAMILY FRIENDLY POLICIES – We provide 26 weeks of full pay for the following family milestones: Maternity, Adoption, Surrogacy and Shared Parental Leave.
  • FLEXIBLE WORKING, BANK HOLIDAY SWAP & BIRTHDAY DAY OFF – You are entitled to an additional day off for your birthday, from your first day of employment.
  • GREAT LOCAL DISCOUNTS – This includes membership discounts with Soho Friends, local restaurants and retailers in Westfield White City and Television Centre.

Full details of our benefits will be shared when you join us!

Publicis Groupe operates a hybrid working pattern with full time employees being office-based three days during the working week. 

We are supportive of all candidates and are committed to providing a fair assessment process. If you have any circumstances (such as neurodiversity, physical or mental impairments or a medical condition) that may affect your assessment, please inform your Talent Acquisition Partner. We will discuss possible adjustments to ensure fairness. Rest assured, disclosing this information will not impact your treatment in our process.

Please make sure you check out the Publicis Career Page which showcases our Inclusive Benefits and our EAG’s (Employee Action Groups).

#LI-Hybrid




15Oct

Economiste stagiaire – Economic Advisory (F/H) at Deloitte – Casablanca (DFC)


Internship (6 months) starting in January 2025!

All our positions are open to remote working.

Would you like to grow within one of the world leaders in Audit, Consulting, Financial Advisory, Risk Advisory and Legal & Tax?

Joining Deloitte means:

  • Choosing a rich, innovative and stimulating professional career
  • Taking part in a wide variety of empowering assignments
  • Continuously developing and acquiring new skills

By joining our Economic Advisory department within the Strategy Transformation, Evaluation Economic and Restructuring practice, you will collaborate with our teams in France on assignments involving economic and data analysis applied to the business world.

Our Economic Advisory department works in particular on the following topics:

Economic analysis applied to competition law:

  • Advice on M&A projects and antitrust cases
  • Assessment of economic damages
  • Market modelling (simulating how markets operate and analysing the impact of decisions or regulations on their equilibrium).

Economic impact studies:

  • Assessment of the socio-economic impacts of investment projects (roads, electrification programmes, health, education, etc.) and of public policies in France and French-speaking Africa.
  • Analysis of the interactions between business sectors, their competitiveness and their restructuring challenges.

Modelling of energy markets and the climate transition:

  • Support for our industrial clients in anticipating the challenges of the transition and capitalising on the opportunities it presents.
  • Support for public decision-makers in designing measures to make energy markets and systems more economically efficient, more secure and resilient, and better adapted to climate challenges.

By joining the Deloitte Economic Advisory team, you can be sure that, in addition to your research topic, you will play an active part in the team’s work. In particular, you will provide support on:

  • Collecting and analysing economic information and data;
  • Reviewing the relevant economic literature and applying it to real-world situations;
  • Analysing data and building models to exploit it;
  • Analysing markets and understanding how they function competitively;
  • Contributing to the drafting of expert reports;
  • Preparing presentations of the conclusions of the work carried out.

Possible internship topics:

We currently offer two exciting topics for your final-year project (PFE), but we are fully open to any proposals you may have:

  • Applying Deep Learning methods to forecasting electricity demand in developing countries
  • Taking biological ecosystems into account in the valuation of natural assets

Your profile:

  • A student at a leading engineering or business school, or on a postgraduate university programme, you have a solid background in microeconomics, econometrics or data analysis.
  • You have skills in data analysis tools (R, Python, Stata) or modelling tools (GAMS, Matlab).
  • You have strong analytical skills.
  • Rigorous, organised, autonomous and naturally curious, you enjoy working in a team.
  • You speak and write fluently in both English and French.

By joining Deloitte, you will have the opportunity to develop a set of skills, shared with our international network and structured around the following dimensions: leadership, profession and specialism. Through the wide variety of assignments you will take part in and the training programme on offer, you will be able to progressively strengthen these skills, acquire new ones and advance within our firm.

Position based in Casablanca!



