Related skills
Python · Scala · Apache Spark · ClickHouse · Apache Kafka
📋 Description
- Define and drive data infrastructure vision and multi-year roadmap.
- Lead platform strategy: build vs buy, cost modeling, migrations.
- Own the data lakehouse foundation: multi-engine compute, governance, scaling.
- Drive real-time and streaming infra for critical use cases.
- Pioneer AI-native data infrastructure: apply AI/LLM tooling across the data lifecycle.
- Elevate engineering excellence: set standards, reviews, mentor engineers.
- Partner with stakeholders to translate needs into reliable, self-serve infra.
🎯 Requirements
- 10+ years in software engineering, data infra, or distributed systems at scale.
- Expertise in data lakehouse architectures and Iceberg/Delta Lake.
- Experience tuning Trino, Spark, ClickHouse, Kafka, and Flink for performance.
- Proven track record migrating large infrastructure platforms and driving build-vs-buy decisions.
- Cost-benefit modeling and TCO analysis for infra investments.
- Strong technical communication and cross-team leadership.
- Preferred: governance (SOX/CPRA/GDPR), FinOps, SQL, Python/Scala, Airflow, dbt.
- Bachelor’s, Master’s, or PhD in CS/CE or equivalent.
🎁 Benefits
- Flexible remote-first work policy (Flex First).
- Equity grants for new hires and annual refresh grants.
- Access to Instacart benefits and company events.