Related skills
AWS, ETL, SQL, Python, Hadoop
Description
- Own core data pipeline; scale data processing for Lyft growth
- Evolve data model and schema per business/engineering needs
- Implement systems to track data quality and consistency
- Build tools for self-service data pipelines (ETL)
- Tune SQL and MapReduce for better data processing performance
- Write clean, tested, maintainable code
Requirements
- Bachelor's degree in CS, Eng, Math, Stats, or related field
- 4+ years data engineering experience with large-scale systems
- Strong Spark, Python, and SQL performance tuning
- Experience with AWS, Hadoop/S3, Hive, Presto, Airflow
- Solid ETL, workflow orchestration, and data warehousing
- Collaborative mindset; comfortable cross-team problem solving
Benefits
- Extended health, dental, life, and disability coverage
- Mental health benefits
- Family building benefits
- Child care and pet benefits
- Health Care Savings Account funded by Lyft
- RRSP plan with company match