Related skills
AWS, SQL, Python, Kubernetes, Airflow

Description
- Own core data pipelines for mapping; scale data processing to meet growth
- Develop subject-matter expertise (SME) in managed systems; set and manage SLAs for pipelines
- Evolve data models and schemas to meet business requirements
- Develop tools for self-service ETL pipelines and schema evolution; perform SQL tuning
- Write clean, well-tested, scalable code
- Conduct code reviews to uphold quality and share knowledge
Requirements
- Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or related field
- 4+ years of relevant professional experience
- Strong Spark experience; scripting proficiency in Python (or Ruby or Bash)
- Experience with data quality tools: Great Expectations, dbt, Monte Carlo
- Experience with databases/streaming tech such as S3, DynamoDB, HDFS, Hive, Presto, Kafka
- SQL proficiency (MySQL, PostgreSQL, SQL Server, Oracle), including geospatial queries and tuning
- Workflow tools (Airflow, Oozie, Prefect) and infrastructure tools (Terraform, Docker, Kubernetes) in AWS
- Experience defining API schemas and backend services in a microservices environment
Benefits
- Extended health and dental coverage along with life insurance and disability benefits
- Mental health benefits
- Family building benefits
- Child care and pet benefits
- Lyft Health Savings Account
- RRSP plan with company match