Type: Full time
Salary: Not provided

Related skills

Docker, Terraform, SQL, Python, Kafka

πŸ“‹ Description

  • Build streaming and batch pipelines that ingest market, trading, and portfolio data; ensure resilience.
  • Build self-serve tooling (SDKs, templates, AI agents) for data products.
  • Own data contracts and schema evolution; design changes so they avoid multi-team coordination events.
  • Design lakehouse and time-series layer around consumer query patterns.
  • Build data governance and data quality foundations: validation, lineage, idempotent writes.
  • Build derived analytics for desks: spreads, VWAP, exposure.
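To make the last bullet concrete, here is a minimal sketch of one of the named desk analytics, VWAP (volume-weighted average price): the sum of price × volume divided by the total volume. The trade data and function name are hypothetical, for illustration only.

```python
def vwap(trades):
    """Compute VWAP from an iterable of (price, volume) pairs."""
    total_volume = sum(volume for _, volume in trades)
    if total_volume == 0:
        raise ValueError("no volume traded")
    return sum(price * volume for price, volume in trades) / total_volume

# Example: three trades at different prices and sizes.
trades = [(100.0, 10), (101.0, 30), (99.5, 60)]
print(vwap(trades))  # → 100.0
```

In a streaming pipeline, the same computation would typically be maintained incrementally per symbol and time window rather than over a full list.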

🎯 Requirements

  • 8+ years building production data systems.
  • Strong Python and SQL, with the ability to reason about query-engine behavior.
  • Readable, testable, maintainable code.
  • Deep data-modelling skills for streaming and analytics workloads.
  • Experience with streaming systems (Kafka/Redpanda/MSK/Kinesis) and time-series stores.
  • Lakehouse architecture: table layout, partitioning, and governance.

🎁 Benefits

  • From-scratch mandate; shape the platform, standards, and culture.
  • Strong partnerships with Architecture, Infrastructure, and Platform.
  • Autonomy; remote-first, flexible hours, and on-call shared across the team.
  • Competitive salary package with benefits.
  • Yearly onsite meetup where everyone is in the same room.

