Related skills
SQL, Power BI, PySpark, Azure Data Factory, Microsoft Fabric

Description
- Translate objectives into scalable data architectures using Microsoft Fabric.
- Design star/snowflake and data vault models for batch and streaming ingestion.
- Govern and secure data designs on Azure and Fabric with cost optimization.
- Build ingestion/transformation pipelines with Fabric Data Pipelines, PySpark/SQL.
- Develop data quality frameworks and metadata-driven processing.
- Lead workshops and propose target-state roadmaps for clients.
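The metadata-driven processing mentioned above can be sketched in plain Python. This is an illustrative sketch only, not the employer's implementation: the config keys (`table`, `watermark_column`, `last_value`) are hypothetical, and in practice such metadata would typically live in a control table and drive a parameterized Fabric Data Pipeline or Spark notebook.

```python
def build_incremental_queries(table_configs):
    """Build one incremental-extract SQL statement per configured table.

    Each config row carries the source table, its watermark column, and
    the last value already loaded; only rows past the watermark are pulled.
    (Config schema is hypothetical, for illustration.)
    """
    queries = []
    for cfg in table_configs:
        queries.append(
            f"SELECT * FROM {cfg['table']} "
            f"WHERE {cfg['watermark_column']} > '{cfg['last_value']}'"
        )
    return queries


# Example: two source tables driven by one metadata list.
configs = [
    {"table": "sales.orders", "watermark_column": "modified_at", "last_value": "2024-05-01"},
    {"table": "crm.accounts", "watermark_column": "updated_at", "last_value": "2024-04-15"},
]
for q in build_incremental_queries(configs):
    print(q)
```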
Requirements
- 3–7+ years in data engineering with the Microsoft stack.
- Hands-on with Fabric: OneLake, Lakehouse, Data Pipelines, Dataflows Gen2, Spark Notebooks.
- Strong SQL (T-SQL), PySpark/Python, and data modelling for analytics.
- Ingestion & transformation patterns: CDC, SCD, incremental ETL/ELT.
- Azure Data Lake Storage, Synapse/SQL DW, Data Factory, Key Vault, Event Hubs.
- CI/CD and DevOps: Git, deployment pipelines, environment strategies.
- Excellent client-facing communication; management consulting experience preferred.
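The SCD (Slowly Changing Dimension) pattern listed in the requirements can be illustrated with a minimal Type 2 sketch in plain Python. Column names (`valid_from`, `valid_to`, `is_current`) and the function are hypothetical; on the Fabric stack this logic would normally be a `MERGE` against a Lakehouse table in a Spark notebook or SQL.

```python
def apply_scd2(dimension, incoming, key, tracked, load_date):
    """Apply Slowly Changing Dimension Type 2 logic (illustrative sketch).

    dimension: list of dicts with 'valid_from', 'valid_to', 'is_current'.
    incoming:  list of dicts carrying the key and tracked attributes.
    Changed rows close the current version and append a new one;
    unchanged rows are left alone.
    """
    out = [dict(row) for row in dimension]  # work on copies
    current = {row[key]: row for row in out if row["is_current"]}
    for rec in incoming:
        existing = current.get(rec[key])
        if existing and all(existing[col] == rec[col] for col in tracked):
            continue  # no attribute change: keep current version
        if existing:
            # Close out the superseded version.
            existing["valid_to"] = load_date
            existing["is_current"] = False
        # Open a new current version (also covers brand-new keys).
        out.append({**rec, "valid_from": load_date,
                    "valid_to": None, "is_current": True})
    return out
```

Typical usage: run once per load with the batch's load date, writing the result back to the dimension table so history is preserved as closed-out rows.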
Benefits
- Medical, dental, and vision insurance; 401(k) plan.
- Tuition reimbursement; growth opportunities.
- Collaborative, inclusive culture with global Capco clients.