Type: Full time

Related skills

AWS, SQL, Python, Kubernetes, Airflow

πŸ“‹ Description

  • Own the core data pipelines for mapping; scale data processing to meet growth
  • Develop subject-matter expertise in the systems you manage; set and manage SLAs for pipelines
  • Evolve data models and schemas to meet business requirements
  • Develop tools for self-service ETL pipelines and schema evolution; SQL tuning
  • Write clean, well-tested, scalable code
  • Conduct code reviews to uphold quality and share knowledge

🎯 Requirements

  • Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or related field
  • 4+ years of relevant professional experience
  • Strong Spark experience; scripting in Python (or Ruby, Bash)
  • Experience with data quality tools: Great Expectations, dbt, Monte Carlo
  • Experience with databases/streaming tech such as S3, DynamoDB, HDFS, Hive, Presto, Kafka
  • SQL proficiency (MySQL, PostgreSQL, SQL Server, Oracle), including geospatial queries and query tuning
  • Workflow tools: Airflow, Oozie, Prefect; infra tools: Terraform, Docker, Kubernetes in AWS
  • Experience defining API schemas and backend services in a microservices environment

🎁 Benefits

  • Extended health and dental coverage along with life insurance and disability benefits
  • Mental health benefits
  • Family building benefits
  • Child care and pet benefits
  • Lyft Health Savings Account
  • RRSP plan with company match