Related skills
jenkins java docker kubernetes kafka

Description
- Design, build and optimize real-time and batch data pipelines.
- Develop data processing components for transformation, normalization, and enrichment.
- Build dashboards and visualizations for internal stakeholders.
- Deploy containerized workloads in Kubernetes with CI/CD automation.
- Ensure reliability via monitoring, observability, and alerting.
- Troubleshoot distributed data systems and resolve bottlenecks.
Requirements
- Strong programming fundamentals in OO or functional languages.
- Experience with data lake analytics tech (Iceberg, Druid, Hive, Object Storage).
- Familiarity with streaming platforms like Apache Kafka.
- Experience implementing CI/CD pipelines (e.g., Jenkins).
- Strong analytical, troubleshooting and problem-solving skills.
- Ability to collaborate with cross-functional stakeholders.
Benefits
- Dynamic, flexible work environment with competitive benefits.