Type: Full time
Salary: Not provided

Related skills

DynamoDB, SQL, S3, Python, Hadoop

πŸ“‹ Description

  • Own and scale the core data pipeline to handle Lyft's data growth
  • Evolve data models and schemas to meet business needs
  • Implement systems for data quality and consistency
  • Build tools for self-service ETL and data pipelines
  • Tune SQL engines and MapReduce jobs for performance
  • Write clean, tested, maintainable code

🎯 Requirements

  • 4+ years in data engineering with large-scale distributed systems
  • Strong experience with Spark
  • Familiarity with the Hadoop ecosystem: S3, DynamoDB, Hive, Parquet
  • Scripting in Python or Go
  • SQL engine understanding and performance tuning
  • Experience with workflow tools (Airflow, Oozie, Azkaban)

🎁 Benefits

  • Stable working environment
  • Latest tech and equipment
  • Potential for remote work internationally
  • 28 days vacation
  • 18 weeks paid parental leave
  • Mental health and family benefits

