Related skills
aws, etl, sql, python, hadoop
📋 Description
- Build and maintain scalable data pipelines for driver insights and routing
- Improve data models and schema design to meet evolving needs
- Monitor and improve data quality, consistency, and performance
- Support internal users with self-serve ETL tools and real-time analytics pipelines
- Collaborate cross-functionally with product, engineering, and data science teams
🎯 Requirements
- Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or related field
- 3+ years of experience in data engineering, ideally with large-scale distributed systems
- Strong skills in Spark, Python (or similar scripting language), and SQL performance tuning
- Experience with AWS, Hadoop/S3, Hive, Presto, Airflow, and related tools
- Solid understanding of ETL processes, workflow orchestration, and data warehousing
- A collaborative mindset: you're comfortable working across teams to solve real-world problems
🎁 Benefits
- Extended health and dental coverage, life insurance and disability benefits
- Mental health benefits
- Family building benefits
- Child care and pet benefits
- Lyft-funded Health Care Savings Account
- RRSP plan with company match