Related skills: Python, dbt, Apache Spark, ClickHouse, Apache Kafka
📋 Description
- Define and drive data platform vision (storage, compute, streaming, analytics).
- Lead platform strategy: build vs buy, cost modeling, migrations.
- Own data lakehouse architecture, governance, multi-engine compute.
- Drive real-time and streaming infra for key use cases.
- Pioneer AI-native data infra and AI-augmented workflows.
- Elevate engineering excellence: standards, reviews, mentoring.
- Partner with stakeholders to translate needs into reliable infra.
🎯 Requirements
- 10+ years in software engineering, data infra, or distributed systems.
- Expertise in data lakehouse architectures; Iceberg/Delta Lake.
- Experience with Trino, Spark, ClickHouse; Kafka and Flink.
- Proven track record leading infra platform migrations and build-vs-buy.
- Experience creating cost-benefit analyses and TCO models.
- Bachelor’s, Master’s, or PhD in CS/CE; strong technical communication.
🎁 Benefits
- Flex First remote work with location flexibility.
- New hire equity grant and annual refresh grants.
- Equity and benefits programs tied to location and role.
- Remote-first culture with regular in-person events.