Related skills
Java, AWS, Python, Scala, Kafka
Description
- Design and build end-to-end data pipelines using AWS-native tools.
- Collaborate with clients to gather requirements and deliver production-grade systems.
- Apply AWS Well-Architected Framework principles for scalability, security, and resilience.
- Lead the development of robust, fault-tolerant data engineering solutions.
- Mentor junior engineers and share knowledge across the team.
Requirements
- Proficient in Python, Scala, or Java, with strong Spark/Hadoop experience.
- Experience building real-time streaming pipelines with Kafka, Spark Streaming, or Kinesis.
- Proficiency in AWS cloud environments.
- Experience with Data Lakehouse and Data Warehousing architectures.
- Understanding of CI/CD, DevOps tooling, and GDPR data governance.
Benefits
- Core Benefits: discretionary bonus, pension, health, life, and critical illness cover.
- Mental health support via CareFirst, Unmind, Aviva, and mental health first aiders.
- Family-friendly leave: maternity, adoption, and parental leave, plus sick leave.
- Holiday flexibility: 5 weeks of annual leave with a buy/sell option.
- Continuous learning: 40+ hours of training annually and a coach from day one.
- Healthcare access: online GP services.