Related skills
aws, sql, python, spark, iceberg

Description
- Own and evolve the Data Lake infrastructure and ETL pipelines from day one
- Migrate key workloads (model calibration, analytics, reporting) into the Data Lake
- Partner with Data Science to enable scalable data usage
- Redesign data structures to improve query performance and freshness
- Optimize AWS infrastructure to reduce costs while maintaining reliability
- Build robust production-grade data systems using modern AWS tooling
Requirements
- Strong software engineering with SOLID principles for data systems
- Experience building and maintaining CDC pipelines, ideally with AWS DMS
- Hands-on with Spark and AWS Glue for scalable pipelines
- Proficient in SQL within transactional RDS environments
- Experience applying governance, standards, and best practices across data platforms
- Comfortable owning end-to-end systems and cross-functional work
Benefits
- Equity ownership in the company
- Hybrid work: 3 days a week in the office
- 25 days holiday per year plus 8 bank holidays
- 2 paid volunteering days per year
- One month paid sabbatical after 4 years
- Free gym membership