Related skills
Java, AWS, Python, Scala, Hadoop

Description
- Design and build end-to-end data pipelines leveraging AWS-native tools.
- Collaborate with clients to gather requirements and deliver production-grade systems.
- Apply AWS Well-Architected Framework principles for scalability, security, and resilience.
- Lead the development of robust, fault-tolerant data engineering solutions.
- Mentor junior engineers and share knowledge across the team.
Requirements
- Proficient in Python, Scala, or Java, with strong Spark and Hadoop experience.
- Experience building real-time pipelines with Kafka, Spark Streaming, and Kinesis.
- Proficiency in AWS cloud environments.
- Experience with Data Lakehouse and Data Warehousing architectures.
- Experience with CI/CD, DevOps tooling, and data governance, including GDPR compliance.
- Experience mentoring junior engineers.
Benefits
- Core Benefits: discretionary bonus, pension, and health, life, and critical illness cover.
- Mental Health: CareFirst, Unmind, Aviva, and in-house first aiders.
- Family-Friendly: maternity, adoption, shared parental leave, paid leave.
- Family Care: 8 backup care sessions for emergency childcare.
- Holiday Flexibility: 5 weeks leave with buy/sell option.
- Continuous Learning: 40 hours of training annually and a coach from day one.