Added
1 day ago
Location
Type
Full time
Salary
Salary not provided
Related skills
Java, AWS, Python, Scala, Kafka
Description
- Design and build end-to-end data pipelines on AWS.
- Work on greenfield and enterprise-scale data pipelines for ingestion and transformation.
- Lead the design of robust, fault-tolerant data engineering solutions.
- Mentor junior engineers and share knowledge across the team.
- Based in Scotland with potential to work in Edinburgh or Glasgow.
๐ฏ Requirements
- Proficiency in Python, Scala, or Java, with Spark/Hadoop experience.
- Experience building real-time streaming pipelines (Kafka, Spark Streaming, Kinesis).
- Proficiency in AWS cloud environments.
- Experience with Data Lakehouse and Data Warehousing architectures.
- Strong experience with CI/CD and DevOps tooling, plus GDPR-compliant data governance.
- Bonus: experience with data modelling, schema design, and regulated industries.
Benefits
- Core Benefits: discretionary bonus, pension, health, life, and critical illness cover.
- Mental Health: CareFirst, Unmind, Aviva, and in-house first aiders.
- Family-Friendly: Maternity, adoption, shared parental leave, and paid leave options.
- Family Care: 8 backup care sessions for emergency childcare/elder care.
- Holiday Flexibility: 5 weeks annual leave with buy/sell option.
- Continuous Learning: 40+ hours of training annually plus a business coach from Day One.