

Job Description

MAIN RESPONSIBILITIES
• Incident Response
o Understand problems from a user perspective and communicate with users to clarify the issue.
o Reproduce bugs or issues that users are facing.
o Apply root cause analysis to resolve incidents quickly and efficiently.
o Find the root cause of the problem, patch it, test it, and communicate with the end user.
o Write postmortems that summarize every step of the resolution and help the team track all issues.
o Monitor existing flows and infrastructure, and follow the same process when bugs or issues are discovered through monitoring and alerting.
• Maintenance
o Monitor flows and infrastructure to identify potential issues.
o Adapt configurations to keep flows and infrastructure working as expected and operations running without incident.
• Database Optimization
o Track processing costs and execution times through dedicated dashboards (a minimal cost-check sketch follows this responsibilities list).
o Alert users who query tables inefficiently and incur high costs.
o Track down jobs, views, and tables that run inefficiently and incur either high costs or slow execution.
o Optimize jobs, queries, and tables to improve both cost and execution speed.
• Infrastructure Management
o Manage infrastructure through Terraform.
o Share and propose good practices.
o Decommission unused infrastructure such as services, tables, or virtual machines.
• Deployments
o Track future deployments with a Data Architect and participate in Deployment Reviews.
o Share and propose good practices of deployment.
o Accompany Data Engineers throughout the entire deployment process.
o Accompany Data Engineers during the active monitoring period that follows.
o Ensure diligent application of the deployment process and of the logging and monitoring strategy.
o Take over newly deployed flows as part of the run process.
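
As an illustration of the cost-tracking work under Database Optimization, here is a minimal sketch of a BigQuery dry-run cost check, assuming the google-cloud-bigquery Python client library. The project, table, and price per TiB are hypothetical placeholders, not values from this posting.

    from google.cloud import bigquery

    # Hypothetical project and query; adjust to your own environment.
    PROJECT = "example-project"
    PRICE_PER_TIB_USD = 6.25  # assumed on-demand rate; check current GCP pricing

    client = bigquery.Client(project=PROJECT)

    # A dry run validates the query and reports bytes scanned without executing it.
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    query = """
        SELECT user_id, COUNT(*) AS events
        FROM `example-project.analytics.events`
        GROUP BY user_id
    """
    job = client.query(query, job_config=job_config)

    tib_scanned = job.total_bytes_processed / 2**40
    print(f"Estimated scan: {tib_scanned:.4f} TiB (~${tib_scanned * PRICE_PER_TIB_USD:.2f})")

A check like this can feed the dashboards and alerts mentioned above by flagging queries whose estimated scan exceeds a threshold before they run.
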
REQUIRED HARD SKILLS
• Google Cloud Platform: General knowledge of the platform and its various services, and at least one year of experience with GCP.
• Apache Airflow: At least two years of experience with the Airflow orchestrator; experience with Google Composer is a plus (see the DAG sketch after this list).
• Google BigQuery: Extensive experience (at least four years) with GBQ, knowledge of how to optimize tables and queries, and the ability to design database architecture.
• Terraform: At least two years of experience with Terraform and knowledge of GitOps good practices.
• Apache Spark: optional expertise that we would value; some of our pipelines use PySpark.
• Additional Knowledge and Experience that are a Plus:
o Pub/Sub
o Kafka
o Azure Analysis Services
o Google Cloud Storage optimization
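
As a rough illustration of the Airflow orchestration experience requested above, here is a minimal DAG sketch, assuming Airflow 2.4+ (for the schedule argument). The DAG id, task names, and commands are hypothetical examples, not part of this posting.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical daily flow with two ordered tasks.
    with DAG(
        dag_id="example_daily_flow",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = BashOperator(task_id="extract", bash_command="echo 'extract data'")
        load = BashOperator(task_id="load", bash_command="echo 'load data'")

        extract >> load  # load runs only after extract succeeds

On Google Composer the same DAG file is uploaded to the environment's DAGs bucket; the orchestration code itself does not change.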


