Related skills
docker terraform sql python kafka

Description
- Build streaming and batch pipelines that ingest market, trading, and portfolio data, and ensure their resilience.
- Build self-serve tooling (SDKs, templates, AI agents) for data products.
- Own data contracts and schema evolution so that changes don't become multi-team coordination events.
- Design lakehouse and time-series layer around consumer query patterns.
- Build data governance and data quality foundations: validation, lineage, idempotent writes.
- Build derived analytics for desks: spreads, VWAP, exposure.
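As an illustration of the derived-analytics work above, VWAP (volume-weighted average price) is a standard desk metric: the sum of each trade's price times its size, divided by total size. A minimal sketch (the `vwap` function and the `(price, size)` trade shape are illustrative assumptions, not part of the role description):

```python
def vwap(trades):
    """Volume-weighted average price over (price, size) trade tuples.

    Illustrative sketch only; the trade schema is an assumption.
    """
    total_size = sum(size for _, size in trades)
    if total_size == 0:
        raise ValueError("no volume traded")
    return sum(price * size for price, size in trades) / total_size

# Example: three trades totalling 40 units of volume
print(vwap([(100.0, 10), (101.0, 20), (99.5, 10)]))  # 100.375
```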
Requirements
- 8+ years building production data systems.
- Strong Python and SQL; able to reason about query-engine behavior.
- Readable, testable, maintainable code.
- Deep data-modelling experience for streaming and analytics workloads.
- Experience with streaming systems (Kafka/Redpanda/MSK/Kinesis) and time-series stores.
- Lakehouse architecture; table layout, partitioning, and governance.
Benefits
- From-scratch mandate; shape the platform, standards, and culture.
- Strong partnerships with Architecture, Infrastructure, and Platform.
- Autonomy; remote-first, flexible hours, on-call shared across the team.
- Competitive salary package with benefits.
- Yearly onsite meetup where everyone is in the same room.