Related skills
sql python airflow kafka spark
Description
- Lead technical outcomes for a team of engineers, providing mentorship.
- Deliver cutting-edge data pipelines that scale to users' needs.
- Develop subject matter expertise and manage SLAs for data pipelines and full-stack apps.
- Collaborate with product teams to create canonical datasets and trustworthy data.
- Leverage AI/LLM tools to produce and analyze high-quality data.
- Drive the execution of key data initiatives from planning to delivery.
Requirements
- 10+ years engineering experience; 5+ years building data pipelines and leading small teams.
- Strong data engineering background and passion for data.
- Experience with Spark/Hadoop/Trino for distributed data pipelines.
- Proficiency in a backend language (Scala/Java/Go) and strong SQL skills.
- Experience with Iceberg, Kafka, Flink, Spark, Airflow, Trino, and AWS.
- Experience creating/maintaining Data Marts / Data Warehouses.
Benefits
- Competitive compensation and equity.
- Health benefits and wellness programs.
- Flexible work arrangements.