Related skills
Snowflake, SQL, Python, dbt, Airflow

📋 Description
- Design, build, and maintain ETL/ELT data pipelines with Spark on AWS EMR.
- Build and optimize batch and near real-time Spark jobs on EMR (a minimal sketch follows this list).
- Write and refine SQL queries; use Python for data processing.
- Implement data quality checks to ensure data integrity.
- Develop and optimize data warehouse schemas; define pipeline contracts.
- Collaborate with data analysts, scientists, and engineers to meet needs.
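For illustration only, not part of the original posting: a minimal sketch of the kind of PySpark batch job described above, including a simple data quality gate. The S3 paths, the `orders` dataset, and all column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: read raw JSON landed by an upstream producer (path is hypothetical).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: normalize types and derive a partition column.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
)

# Data quality check: fail the run rather than load bad rows downstream.
null_keys = orders.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows have a null order_id; aborting load")

# Load: write partitioned Parquet for the warehouse layer to pick up.
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-bucket/curated/orders/"))

spark.stop()
```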
🎯 Requirements
- Bachelor’s or Master’s in CS, Math, or Physics.
- 3+ years in data engineering or backend data development.
- Strong SQL and data modeling for data warehouses.
- Python for data processing and pipeline automation.
- Familiarity with ETL tools and workflow schedulers such as Airflow (a DAG sketch follows this list).
- Experience with data quality checks and large datasets.
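Likewise for illustration only: a minimal Airflow DAG that schedules a daily job like the one sketched above. The DAG id, task id, and `run_etl` callable are hypothetical, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_etl(**context):
    # In a real pipeline this might submit an EMR step or launch the Spark job;
    # here it only logs the logical date of the run.
    print("running orders ETL for", context["ds"])


with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_orders_etl = PythonOperator(
        task_id="run_orders_etl",
        python_callable=run_etl,
    )
```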
🎁 Benefits
- Stock grant opportunities depending on role, status, and location.
- Additional perks and benefits based on status and country.
- Remote work flexibility, including optional WeWork access.