Related skills
Java, AWS, Python, Databricks, Scala

Description
- Design and build mission-critical data pipelines with scalable architecture (streaming, batch).
- Improve reporting and analysis, enabling self-service for stakeholders.
- Create reusable frameworks to ingest, integrate, and provision data.
- Automate end-to-end data pipelines with metadata and data quality checks.
- Build and support a cloud-based big data platform.
- Define and automate jobs and their testing.
Requirements
- 3-5 years in data warehouse / data lake architecture.
- 3+ years Python/Scala/Java/C#.
- 3 years with Big Data tools: batch (Hadoop/Spark) or real-time (Kafka/Flink).
- 2+ years AWS or other cloud environments.
- Strong knowledge of Databricks SQL/Scala for data pipelines.
- Familiarity with batch processing and workflow tools: dbt, Airflow, NiFi.