Related skills: DynamoDB, SQL, S3, Hadoop, Airflow

📋 Description
- Own core data pipelines; scale data processing to handle growth.
- Evolve data models and schemas based on business needs.
- Implement systems to track data quality and consistency.
- Develop tools for self-service data pipelines (ETL).
- Tune SQL and MapReduce jobs to improve performance.
- Write clean, well-tested, maintainable code.
- Participate in code reviews to ensure quality and share knowledge.
- Participate in on-call rotations for high availability of workflows and data.
- Unblock internal and external partners and communicate with them proactively.
🎯 Requirements
- Bachelor's degree in CS, Engineering, Math, Stats, or related field.
- 2+ years of relevant professional experience.
- Strong experience with Spark.
- Experience with the Hadoop ecosystem and related services: S3, DynamoDB, MapReduce, YARN, HDFS, Hive, Spark, Parquet.
- Proficiency in scripting languages such as Python, Ruby, or Bash.
- Strong understanding of SQL engines and performance tuning.
- Proficient in MySQL, PostgreSQL, SQL Server, or Oracle.
- Experience with workflow schedulers such as Airflow, Oozie, Azkaban, or UC4.
- Comfortable working with data and business partners to align Lyft's business goals with data engineering.
🎁 Benefits
- Extended health and dental coverage, life and disability insurance.
- Mental health benefits.
- Family building benefits.
- Child care and pet benefits.
- Lyft-funded Health Care Savings Account.
- RRSP plan to help save for your future.
- Flexible paid time off for salaried team members; hourly team members receive 15 days of PTO, increasing by one day per year.
- 18 weeks paid parental leave for all parents.
- Subsidized commuter benefits.
- Hybrid work policy: in office three days per week, with up to four weeks of remote work.
- Base pay of CAD 108,000–135,000 for Toronto-based candidates.