Related skills
Java, AWS, SQL, Python, Scala

Description
- Design and build scalable data pipelines (streaming and batch)
- Collaborate with cross-functional teams on data transformation initiatives
- Deliver production-grade data solutions across cloud and on-premise environments
Requirements
- Strong hands-on experience with Python, Java, or Scala
- Proficiency in AWS and big data tech (Spark, Hadoop, Airflow)
- SQL, ETL/ELT, and data modelling
- Experience building CI/CD pipelines with Jenkins or CircleCI
- Data security and distributed systems design
- Messaging systems experience (Kafka, Spark Streaming, Kinesis) is a bonus
Benefits
- Core benefits: discretionary bonus, pension, and health, life, and critical illness cover
- Mental health support via CareFirst and Unmind
- Family-friendly leave: maternity, adoption, parental leave
- Backup care for emergency childcare or elder care
- Holiday flexibility: 5 weeks of leave with a buy/sell option
- Continuous learning: 40 hours of training and coaching