Related skills: Java, Docker, SQL, Python, Kubernetes

Description
- Design and maintain scalable data pipelines for ingestion and storage
- Build ETL workflows with Apache Spark, NiFi, or similar tools
- Integrate data from multiple sources into centralized warehouses or data lakes
- Ensure data quality and reliability via validation and monitoring
- Optimize pipelines for performance and scalability
- Collaborate with data scientists and engineers to translate business needs into solutions
Requirements
- Bachelor's degree in CS, Engineering, or a related field (or equivalent)
- 3+ years of data or backend engineering experience
- Proficiency in Java, Python, or Scala
- Strong SQL skills with the ability to write complex queries
- Experience with Airflow or similar workflow tools
- Hands-on experience with Spark, Kafka, or Flink
Benefits
- Opportunity to make a difference and work on meaningful projects
- Challenging, collaborative, customer-focused culture
- Work with technology that serves people