Type: Full time
Salary: Not provided

Related skills

AWS, SQL, Python, Spark, Iceberg

πŸ“‹ Description

  • Own and evolve the Data Lake infrastructure and ETL pipelines from day one
  • Migrate key workloads (model calibration, analytics, reporting) into the Data Lake
  • Partner with Data Science to enable scalable data usage
  • Redesign data structures to improve query performance and freshness
  • Optimize AWS infrastructure to reduce costs while maintaining reliability
  • Build robust production-grade data systems using modern AWS tooling

🎯 Requirements

  • Strong software engineering skills, applying SOLID principles to data systems
  • Experience building and maintaining CDC pipelines, ideally with AWS DMS
  • Hands-on with Spark and AWS Glue for scalable pipelines
  • Proficient in SQL within transactional RDS environments
  • Experience establishing governance, standards, and best practices across data platforms
  • Comfortable owning systems end to end and working cross-functionally

🎁 Benefits

  • Equity ownership in the company
  • Hybrid work: 3 days a week in the office
  • 25 days holiday per year plus 8 bank holidays
  • 2 paid volunteering days per year
  • One month paid sabbatical after 4 years
  • Free gym membership
