Related skills
AWS, ETL, SQL, Python, Airflow

📝 Description
- Build and maintain reliable, scalable data pipelines for Growth use cases
- Improve data models and schemas to meet evolving needs
- Monitor and improve data quality, consistency, and performance
- Support internal users with self-serve ETL tools and real-time analytics pipelines
- Collaborate cross-functionally with product, engineering, data science, and marketing
- Power Lyft's growth campaigns and infrastructure with data engineering
🎯 Requirements
- 3+ years of experience in data engineering, ideally with large-scale distributed systems
- Strong skills in PySpark, Python (or similar scripting language), and SQL performance tuning
- Experience with AWS, S3, Presto, Airflow, and related tools
- Solid understanding of ETL processes, workflow orchestration, and data warehousing
- Comfortable working cross-functionally to solve real-world problems
🎁 Benefits
- Extended health and dental coverage, life insurance, and disability
- Mental health benefits
- Family building benefits
- Child care and pet benefits
- Hybrid in-office schedule with up to 4 weeks remote per year
- 18 weeks of paid parental leave