Related skills
aws, python, databricks, hadoop, kafka

Description
- Design and build end-to-end data pipelines using AWS tools.
- Collaborate with clients to gather requirements and deliver production-grade systems.
- Apply the AWS Well-Architected Framework for scalability, security, and resilience.
- Lead development of robust, tested, fault-tolerant data solutions.
- Mentor junior engineers and share knowledge across the team.
Requirements
- Proficient in Python, Scala, or Java, with hands-on Spark and Hadoop experience.
- Experience building real-time streaming pipelines with Kafka, Spark Streaming, or Kinesis.
- Proficiency in AWS cloud environments.
- Experience with data lakehouse and data warehousing architectures.
- Understanding of CI/CD, DevOps tooling, and GDPR data governance.
Benefits
- Discretionary bonus, pension, health and life insurance.
- Mental health support and in-house first aiders.
- Family-friendly leave, including maternity and parental leave.
- Backup care for emergency childcare or elder care.
- Holiday flexibility: 5 weeks' leave with a buy/sell option.
- Continuous learning: 40 hours of training annually, plus coaching.