Related skills
java, aws, sql, scala, kafka

Description
- Analyse and decompose complex database-centric business logic into scalable components.
- Design and build event-driven architectures and streaming workflows.
- Implement, deploy and operate production-grade streaming jobs with Kafka, Spark, Flink.
- Improve scalability, reliability and maintainability of data pipelines.
- Collaborate with domain experts to validate outputs and ensure a smooth migration.
- Ensure parity and data quality by validating streaming outputs against the legacy system.
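The parity-validation work described above could look roughly like the following minimal sketch, which compares a legacy RDBMS extract against streaming output keyed by record id. All names here (`ParityCheck`, `mismatches`) are illustrative, not part of any actual system at this company.

```java
import java.util.*;

// Hypothetical parity check: diff a legacy extract against streaming
// output, both modeled as id -> value maps for simplicity.
public class ParityCheck {
    // Returns the ids whose values differ, or that exist on only one side.
    static List<String> mismatches(Map<String, String> legacy,
                                   Map<String, String> streaming) {
        Set<String> ids = new TreeSet<>();
        ids.addAll(legacy.keySet());
        ids.addAll(streaming.keySet());
        List<String> out = new ArrayList<>();
        for (String id : ids) {
            // Objects.equals handles nulls for ids missing on one side.
            if (!Objects.equals(legacy.get(id), streaming.get(id))) {
                out.add(id);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> legacy = Map.of("a", "1", "b", "2", "c", "3");
        Map<String, String> streaming = Map.of("a", "1", "b", "9");
        System.out.println(mismatches(legacy, streaming)); // prints [b, c]
    }
}
```

In practice both sides would be sampled from the real stores (e.g. a SQL snapshot and a Kafka topic compaction), but the core check reduces to this keyed diff.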
Requirements
- 5+ years' experience in data engineering or distributed systems.
- Experience designing/operating large-scale data pipelines in production.
- Hands-on with Kafka, Spark and/or Flink.
- Proficient in Scala or Java, and SQL.
- Experience building data platforms in AWS (IAM, S3).
- Experience with Kubernetes and GitOps workflows.
Nice to have
- Experience migrating RDBMS logic to streaming systems.
- Experience applying TDD in streaming apps.
- Domain modelling for maritime/shipping data.