Data Engineer (Infra/DevOps Focus)

Added: 4 days ago
Location:
Type: Full time
Salary: Not Specified

We are looking for a highly skilled Azure Data Engineer with expert knowledge of cloud infrastructure and DevOps automation. In this critical hybrid role, you will design, build, optimize, and automate our entire end-to-end data platform within the Microsoft Azure ecosystem. The ideal candidate will ensure our data solutions are scalable, reliable, and deployed using modern Infrastructure as Code (IaC) and CI/CD practices.

Key Responsibilities

Data Platform Development & Engineering

Design & Implement ETL/ELT: Develop, optimize, and maintain scalable data pipelines using Python, SQL, and core Azure data services.

Azure Data Services Management: Architect and manage key Azure data components, including:

Data Lakes: Provisioning and structuring data within Azure Data Lake Storage (ADLS Gen2).

Data Processing: Implementing data transformation and analysis logic using Azure Data Factory (ADF), Azure Synapse Pipelines, and Azure Databricks (Spark/PySpark); see the illustrative sketch after this list.

Data Warehousing: Designing and optimizing the enterprise Data Warehouse in Azure Synapse Analytics (SQL Pool).

Data Modeling and Quality: Define and enforce data modeling standards and implement data quality checks within the pipelines.
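To make the pipeline and data-quality responsibilities above more concrete, the following is a minimal PySpark sketch of the kind of transformation this role covers. The storage account, container, and column names are hypothetical placeholders, not details taken from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical ADLS Gen2 locations -- account and container names are placeholders.
RAW_PATH = "abfss://raw@sourceacct.dfs.core.windows.net/sales/orders/"
CURATED_PATH = "abfss://curated@sourceacct.dfs.core.windows.net/sales/orders/"

spark = SparkSession.builder.appName("orders-curation").getOrCreate()

# Extract: read raw parquet files landed in the data lake.
orders = spark.read.parquet(RAW_PATH)

# Transform: basic cleansing and typing.
curated = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_date").isNotNull())
)

# Data quality check: fail the run if any amount is negative.
bad_rows = curated.filter(F.col("amount") < 0).count()
if bad_rows > 0:
    raise ValueError(f"Data quality check failed: {bad_rows} negative amounts")

# Load: write the curated zone, partitioned for downstream Synapse queries.
curated.write.mode("overwrite").partitionBy("order_date").parquet(CURATED_PATH)
```

On Databricks the `spark` session is already provided, so the builder call is only needed when the job runs elsewhere.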

Cloud Infrastructure & DevOps Automation

Infrastructure as Code (IaC): Design, manage, and provision all Azure data resources (ADLS, Synapse, ADF, Databricks clusters) using Terraform or Azure Resource Manager (ARM) Templates/Bicep; see the sketch at the end of this section.

CI/CD Implementation: Build and maintain automated Continuous Integration/Continuous Deployment (CI/CD) pipelines for all code (data, infrastructure, and application) using Azure DevOps or GitHub Actions.

Containerization & Compute: Utilize Docker and manage deployment environments using Azure Kubernetes Service (AKS) or Azure Container Instances (ACI) when required for data applications.

Monitoring, Logging, & Security: Configure comprehensive monitoring and alerting using Azure Monitor and Log Analytics. Implement network security and access controls (RBAC) across the data platform.
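The IaC responsibility above names Terraform and ARM/Bicep, which are their own languages; purely as a Python-based analogue, here is a rough sketch using Pulumi's pulumi_azure_native provider to declare an ADLS Gen2 account and a raw container. Pulumi is not mentioned in the posting and every resource name below is invented, so treat this as an illustration of declaring data-lake infrastructure in code rather than as this team's actual stack.

```python
import pulumi
from pulumi_azure_native import resources, storage

# Resource group for the data platform (name is a placeholder).
rg = resources.ResourceGroup("data-platform-rg")

# Storage account with hierarchical namespace enabled, i.e. ADLS Gen2.
datalake = storage.StorageAccount(
    "datalake",
    resource_group_name=rg.name,
    sku=storage.SkuArgs(name=storage.SkuName.STANDARD_LRS),
    kind=storage.Kind.STORAGE_V2,
    is_hns_enabled=True,
)

# Container for the raw zone of the lake.
raw = storage.BlobContainer(
    "raw",
    resource_group_name=rg.name,
    account_name=datalake.name,
)

pulumi.export("datalake_account", datalake.name)
```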

Required Skills & Qualifications

Azure Cloud: Strong hands-on experience designing and deploying end-to-end data solutions specifically within the Azure ecosystem.

Programming: High proficiency in Python (including PySpark) and expert knowledge of SQL.

DevOps & IaC: Proven, production-level experience with Terraform (preferred) or ARM/Bicep for automating Azure infrastructure deployment.

CI/CD: Experience setting up CI/CD workflows using Azure DevOps Pipelines or GitHub Actions.

Data Tools: Deep working knowledge of Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.

Orchestration: Experience with workflow orchestration tools like Azure Data Factory or Apache Airflow.
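As a small illustration of the orchestration experience listed above, here is a minimal Apache Airflow DAG that chains an ingest task and a transform task. The task bodies and schedule are placeholders; an equivalent ADF pipeline would express the same dependency through chained activities.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_orders():
    # Placeholder: pull raw files into the landing zone.
    print("ingesting orders")


def transform_orders():
    # Placeholder: trigger the Databricks/Synapse transformation.
    print("transforming orders")


with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)

    ingest >> transform  # transform runs only after ingest succeeds
```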

Preferred Qualifications

Azure certifications such as Azure Data Engineer Associate (DP-203) or Azure DevOps Engineer Expert (AZ-400).

Familiarity with Data Governance tools such as Azure Purview.

Experience with real-time data ingestion using Azure Event Hubs or Azure Stream Analytics.
