Related skills
data engineering, kubernetes, kafka, spark, data pipelines
Description
- Design and evolve batch and streaming data pipelines for refinery analytics.
- Own ingestion, transformation, validation, and delivery across sources.
- Improve pipeline reliability, scalability, and observability.
- Lead schema design, versioning, and evolution for stable data contracts.
- Build and maintain backend components and APIs for data delivery.
- Collaborate with data scientists and PMs to translate domain needs into technical requirements.
Requirements
- Deliver high-quality, well-tested, maintainable code.
- Own parts of the data platform end-to-end.
- Contribute to data processing, storage, and delivery architectures.
- Improve CI/CD pipelines, automation, and tooling.
- Instrument services with metrics, logs, and alerts.
- Participate in incident response and root-cause analysis.
Nice to have
- Exposure to Kafka, Spark, or streaming architectures.
- Experience with Kubernetes.
- Familiarity with event-driven or microservices architectures.
- Exposure to analytical datastores (Elasticsearch).
- Full-stack awareness for PR reviews (frontend/API).
- Data product experience in energy, commodities, or industrial domains.