Related skills
java sql scala airflow kafka

📋 Description
- Deliver scalable data pipelines with reliability and efficiency.
- Build SME expertise and manage SLAs for pipelines and apps.
- Collaborate to create canonical datasets and trustworthy data.
- Apply AI/LLMs and agents to analyze data and tackle ambiguous problems.
- Work with Spark, Flink, Kafka, Trino, Pinot, Airflow, Scala/Java/SQL/Python.
- Drive end-to-end data initiatives with quality and timely delivery.
🎯 Requirements
- 2-5 years of experience building and operating data systems, pipelines, and data warehouses.
- Strong engineering background and passion for data.
- Experience with data pipelines using Spark/Hadoop/Trino.
- Proficiency in a backend language (Scala/Java/Go) and strong SQL.
- Extreme customer focus and cross-functional collaboration with PMs, leaders, and Stripe engineers.
- Experience with Iceberg, Kafka, Flink, Airflow, and AWS; OSS contributions preferred.