Added: 12 days ago
Type: Full-time
Salary: Not provided

Related skills

Jenkins, Java, Docker, Kubernetes, Kafka

πŸ“‹ Description

  • Design, build, and optimize real-time and batch data pipelines.
  • Develop data processing components for transformation, normalization, and enrichment.
  • Build dashboards and visualizations for internal stakeholders.
  • Deploy containerized workloads in Kubernetes with CI/CD automation.
  • Ensure reliability via monitoring, observability, and alerting.
  • Troubleshoot distributed data systems and resolve bottlenecks.

🎯 Requirements

  • Strong programming fundamentals in object-oriented or functional languages.
  • Experience with data lake analytics technologies (e.g., Apache Iceberg, Druid, Hive, object storage).
  • Familiarity with streaming platforms like Apache Kafka.
  • Experience implementing CI/CD pipelines (e.g., Jenkins).
  • Strong analytical, troubleshooting, and problem-solving skills.
  • Ability to collaborate with cross-functional stakeholders.

🎁 Benefits

  • Dynamic, flexible work environment with competitive benefits.