Type: Full time
Related skills: AWS, SQL, Python, dbt, Airflow

Description
- Build and maintain scalable data pipelines using PySpark, Airflow, and dbt.
- Tune Spark jobs and storage for low-latency enterprise analytics.
- Treat data as a product: data contracts, documentation, and trust.
- Monitor data freshness, volume, and schema changes across dashboards.
- Partner with Product, Engineering, and AI/ML teams to support features.
๐ฏ Requirements
- 2+ years in data engineering or data-intensive software.
- Fluent in SQL and Python for data manipulation.
- Hands-on with Spark (PySpark) and AWS/EMR.
- Experience with dbt modeling, partitioning, schema evolution, and lakehouse table formats (Iceberg).
- Latency-minded: comfortable with materialized views and caching strategies.
- Collaborative and curious; able to partner with Product and Data Science.
Benefits
- Health, welfare, and wellbeing benefits.
- Equity options and sign-on rewards.
- AI-forward, collaborative engineering culture.