Related skills
AWS, SQL, Python, Airflow, Kafka

Description
- Design, develop, and maintain scalable real-time data pipelines.
- Integrate data sources to enable seamless real-time data flow.
- Build fault-tolerant, highly available data ingestion processes.
- Monitor pipeline performance for low latency and high throughput.
- Collaborate with software engineers, data scientists, and stakeholders on data infrastructure goals.
Requirements
- Hands-on data engineering for real-time pipelines.
- Distributed streaming: Kafka, Spark Streaming, Flink.
- Cloud platforms: AWS, GCP, or Azure for real-time ingestion and storage.
- Programming in Python, Java, or Scala.
- SQL, NoSQL, time-series databases, and data modelling.
- Airflow and Kubernetes for data pipeline orchestration.
Benefits
- Competitive base salary
- Company bonus scheme
- Hybrid working (3 days in office)
- Private health care (Bupa)
- Travel insurance
- Annual company conference