Location
Type: Full time
Salary: Not provided

Related skills

Snowflake, SQL, Python, dbt, Airflow

📋 Description

  • Design, build, and maintain data pipelines (ETL/ELT) with Spark on AWS EMR.
  • Build and optimize batch and near real-time Spark jobs on EMR for performance and reliability.
  • Write and refine SQL queries; use Python for data processing.
  • Implement data quality checks to ensure data integrity.
  • Develop and optimize data warehouse schemas; define pipeline contracts.
  • Collaborate with data analysts, scientists, and engineers to understand and meet their data needs.

🎯 Requirements

  • Bachelor’s or Master’s degree in Computer Science, Mathematics, or Physics.
  • 3+ years in data engineering or backend data development.
  • Strong SQL and data modeling for data warehouses.
  • Python for data processing and pipeline automation.
  • Familiarity with ETL tools and workflow schedulers such as Airflow.
  • Experience with data quality checks and large datasets.

🎁 Benefits

  • Stock grant opportunities, dependent on role, status, and location.
  • Additional perks and benefits based on status and country.
  • Remote work flexibility, including optional WeWork access.