Related skills
Java, Python, Airflow, Kafka, Spring Boot

Description
- Design streaming and batch data pipelines for metrics, logs, and AI workflows.
- Develop ETL and feature-extraction pipelines using Python and Java microservices.
- Integrate data ingestion and enrichment from multiple observability sources.
- Build data orchestration using Kafka, Airflow, and Redis Streams.
- Develop data indexing and semantic search for large-scale observability.
- Work with data lakes and warehouses (Delta Lake, Iceberg, ClickHouse).
๐ฏ Requirements
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- 4-5 years of experience in backend or data systems engineering.
- Experience building streaming data pipelines (Kafka / Spark or similar).
- Strong programming skills in Java and Python, including microservice development.
- Experience with ETL, data modeling, and distributed storage systems.
- Familiarity with LLM pipelines, embeddings, and vector retrieval.
Benefits
- Great Place To Work certification.
- Culture focused on performance, recognition, and collaboration.