Related skills
sql · python · dbt · kafka · spark
📋 Description
- Design and evolve canonical data models (bronze/silver/gold layers) that support governance.
- Build and optimize ETL/ELT pipelines with Airflow, Spark, and Trino.
- Develop data marts and semantic layers for analytics and ML.
- Architect streaming and analytical systems with Kafka and ClickHouse.
- Define standards for data modeling, quality, and lineage.
- Mentor engineers and lead design reviews for scalability.
🎯 Requirements
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- 10+ years in Data Engineering; 2+ years in an architectural role.
- Expertise in SQL, data modeling, and data mart design.
- Hands-on with Apache Airflow, dbt, Spark, Kafka, and ClickHouse.
- Python or Scala; experience with AWS, GCP, or Azure data ecosystems.
- Strong background in data governance, lineage, and quality frameworks.
🎁 Benefits
- Time off and generous vacation.
- Insurance coverage.
- Competitive pay.