Related skills: Azure, SQL, Python, GCP, Databricks

Description
- Design, build, and operate end-to-end data pipelines on Azure or GCP.
- Implement lakehouse patterns (Delta Lake, medallion) for data products; a brief sketch follows this list.
- Deliver batch and streaming pipelines (Kafka, Pub/Sub, Event Hubs).
- Write high-quality Python and SQL for data processing.
- Apply data modeling, quality, lineage, and governance practices.
- Set up CI/CD for data pipelines and infrastructure.
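For a flavor of what these pipelines look like, here is a minimal bronze-to-silver sketch on Delta Lake. It assumes PySpark with the delta-spark package installed; the paths, the orders dataset, and the order_id column are illustrative assumptions only, not part of this role's actual codebase:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from delta import configure_spark_with_delta_pip

# Hypothetical lake layout; adjust to your storage account / bucket.
RAW_PATH = "/lake/raw/orders"
BRONZE_PATH = "/lake/bronze/orders"
SILVER_PATH = "/lake/silver/orders"

# Configure a local Spark session with Delta Lake support.
builder = (
    SparkSession.builder.appName("medallion-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Bronze: land raw JSON as-is, adding only ingestion metadata.
bronze = (
    spark.read.format("json").load(RAW_PATH)
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save(BRONZE_PATH)

# Silver: de-duplicate and apply basic data-quality filters.
silver = (
    spark.read.format("delta").load(BRONZE_PATH)
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save(SILVER_PATH)
```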
Requirements
- 5+ years as a Data Engineer in cloud environments.
- Azure (ADF, Databricks) or GCP (Dataflow, BigQuery) expertise.
- Strong Python and SQL skills for data processing.
- Knowledge of Delta Lake and the medallion lakehouse architecture.
- Experience building batch and streaming pipelines (Kafka, Pub/Sub, Event Hubs).
- Experience setting up CI/CD for data pipelines and infrastructure.
Benefits
- A high-impact engineering role across cloud and lakehouse platforms.
- Build data products that power analytics and AI/ML initiatives.
- Hybrid collaboration with cross-functional teams.
- Strong engineering culture focused on quality and ownership.
- Access to Sytac's data community and development programs.