Added: 15 days ago
Type: Full time
Salary: Not provided

Related skills

Redshift, AWS, Python, Apache Spark, Apache Airflow

πŸ“‹ Description

  • Build and maintain scalable data pipelines for analytics and product data.
  • Develop ETL/ELT processes to ingest, transform, and deliver data.
  • Orchestrate workflows with Apache Airflow for reliable scheduling and monitoring (see the sketch after this list).
  • Leverage Trino/Presto, Spark, and AWS data tools to enable analytics.
  • Design schemas and scalable data architecture for data modeling and warehousing.
  • Monitor pipelines, improve performance, and ensure data quality and reliability.
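
Purely for illustration (not part of the posting): a minimal Airflow DAG sketch of the kind of daily ETL this role would own. The DAG id, schedule, and task bodies are hypothetical, assuming Airflow 2.4+ with the TaskFlow API.

```python
# Hypothetical daily ETL DAG; names and task logic are illustrative only.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def product_events_etl():
    @task
    def extract() -> list[dict]:
        # Stand-in for pulling raw product events (e.g., from S3).
        return [{"user_id": 1, "event": "signup"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Stand-in for cleaning and enriching records.
        return [{**r, "processed": True} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # Stand-in for writing to the warehouse (e.g., Redshift).
        print(f"loaded {len(rows)} rows")

    # TaskFlow wiring: extract -> transform -> load.
    load(transform(extract()))


product_events_etl()
```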

🎯 Requirements

  • 5+ years of experience in data engineering or related roles building data pipelines.
  • Hands-on experience with Airflow, Spark, Iceberg, and Trino/Presto.
  • Experience with AWS services: S3, Redshift, Glue, Athena, Lambda.
  • Proficiency in Python for data processing and pipeline development (see the sketch after this list).
  • Solid grasp of data warehousing concepts, schema design, and data modeling.
  • Experience deploying and supporting data pipelines in production; familiarity with Looker/Tableau, data governance, and CI/CD is a plus.
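
Again for illustration only: a minimal PySpark sketch of the kind of S3-based analytics rollup the stack above implies. Bucket paths, column names, and the aggregation are hypothetical, assuming Spark 3.x with S3 access configured.

```python
# Hypothetical daily-active-users rollup; all paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

# Read raw events from S3 (placeholder path).
events = spark.read.parquet("s3a://example-bucket/raw/events/")

# Distinct users per day and event type: a typical analytics aggregate.
daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "event_type")
    .agg(F.countDistinct("user_id").alias("active_users"))
)

# Write partitioned output for downstream query engines (Trino/Athena).
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3a://example-bucket/marts/daily_active_users/"
)
```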

🎁 Benefits

  • Generous health coverage for you and your family.
  • 401(k) match after 60 days, fully vested after one year.
  • HSA contribution and an HRA to offset deductibles.
  • Flexible vacation policy: take time off as needed.
  • Volunteer Time Off (VTO) and community events.
  • Rest, Relax, and Recharge time for mental health.