Related skills
Azure · ETL · SQL · Python · Databricks

📋 Description
- Design, implement and manage data pipelines.
- Collaborate with the Data Office on design and hands-on development.
- Foster architecture best practices, operational excellence, security and cost control.
- Focus on day-to-day Megaport data warehouse operations.
- Support the BI reporting and analytics functions.
🎯 Requirements
- Strong SQL, with ETL/ELT and data modelling experience.
- DataOps in a large data warehouse environment; pipeline automation.
- CI/CD for data pipelines; Python/PySpark scripting.
- Data quality checks, error handling and alerting (see the sketch after this list).
- Azure stack, Data Lake/EDW, data integration.
- Databricks Unity Catalog, dbt, Git, Fivetran, APIs.
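
To illustrate the data-quality requirement above, here is a minimal PySpark sketch of a quality gate with error handling and alerting. The `orders` table, `order_id` key and `send_alert` helper are hypothetical stand-ins, not part of the posting; a real pipeline would route alerts to a monitoring channel rather than stdout.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()

def send_alert(message: str) -> None:
    # Stub alert hook (assumption): a real pipeline would post to
    # Slack, PagerDuty, or a similar monitoring channel instead.
    print(f"ALERT: {message}")

def check_no_null_keys(df, key_col: str) -> bool:
    # Count rows where the business key is NULL; any hit fails the check.
    null_count = df.filter(F.col(key_col).isNull()).count()
    if null_count > 0:
        send_alert(f"{null_count} rows with NULL {key_col}")
        return False
    return True

# `orders` is a hypothetical source table, not named in the posting.
orders = spark.table("orders")
if not check_no_null_keys(orders, "order_id"):
    raise RuntimeError("Data quality check failed; halting the pipeline")
```

Failing fast and alerting on a broken invariant, rather than letting bad rows flow into the EDW, is the usual pattern behind this requirement.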