Related skills
SQL · PySpark · Azure Data Factory · Delta Lake · Azure Databricks

📋 Description
- Support production data platforms, ensuring high availability.
- Monitor pipelines and jobs; resolve failures and data issues.
- Perform root cause analysis for incidents and preventive actions.
- Apply DataOps best practices: automation, monitoring, dashboards.
- Collaborate with cross-functional teams on reporting needs.
- Maintain pipelines; keep runbooks and support documentation up to date.
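The responsibilities above center on monitoring jobs, resolving transient failures, and leaving a trail for root cause analysis. As an illustrative sketch (the function and task names are hypothetical, not part of any specific platform), a retry wrapper that logs each failed attempt before escalating might look like:

```python
import time
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_retry(task, retries=3, backoff_seconds=0.1):
    """Run a pipeline task, retrying on transient failure and logging each attempt."""
    for attempt in range(1, retries + 1):
        try:
            return task()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, retries, exc)
            if attempt == retries:
                raise  # hand off to incident management after the final attempt
            time.sleep(backoff_seconds * attempt)

# Hypothetical flaky task that succeeds on the second attempt
state = {"calls": 0}
def flaky_load():
    state["calls"] += 1
    if state["calls"] < 2:
        raise RuntimeError("transient source timeout")
    return "loaded"

result = run_with_retry(flaky_load)
print(result)  # → loaded
```

The logged warnings give the incident timeline that later root-cause analysis depends on; in practice the final `raise` would trigger an alert rather than just propagate.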
🎯 Requirements
- 3–5 years experience in Data Engineering / DataOps.
- Azure Databricks (PySpark, Spark SQL, Delta Lake).
- Azure Data Factory (ADF) – pipelines, monitoring.
- ADLS Gen2 (Azure Data Lake Storage).
- ETL/ELT frameworks, batch and incremental processing.
- Strong SQL skills for data analysis and troubleshooting.
- Production support, incident management, SLA-driven environments.
- Familiarity with Azure Monitor and Log Analytics.
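"Strong SQL skills for data analysis and troubleshooting" typically means checks like duplicate keys and unexpected NULLs after a failed or double-run load. A minimal sketch, assuming a hypothetical `orders` staging table (sqlite3 stands in here only to keep the example self-contained; on this stack the same queries would run as Spark SQL against a Delta table):

```python
import sqlite3

# In-memory stand-in for a staging table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, load_date TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 10.0, "2024-01-01"), (1, 10.0, "2024-01-01"), (2, None, "2024-01-02")],
)

# Duplicate-key check: a common first step when a run double-loads a batch
dupes = conn.execute(
    "SELECT order_id, COUNT(*) FROM orders GROUP BY order_id HAVING COUNT(*) > 1"
).fetchall()

# NULL check on a required column
nulls = conn.execute("SELECT COUNT(*) FROM orders WHERE amount IS NULL").fetchone()[0]

print(dupes)  # → [(1, 2)]
print(nulls)  # → 1
```

Queries like these are often wired into pipeline validation steps or monitoring dashboards rather than run ad hoc.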
🎁 Nice to Have
- Exposure to Microsoft Fabric (Lakehouse, Pipelines, Notebooks).
- Power BI basics and semantic models.
- Experience with 24/7 support or rotational shifts.
- Opportunity to work with Azure data services.