Related skills
SQL, data modeling, Power BI, PySpark, Azure Data Factory

Description
- Translate business objectives into scalable data architectures using Microsoft Fabric and Azure
- Define data models (star/snowflake, data vault) and patterns for batch and streaming ingestion
- Establish governance, security, and cost-optimized design on Azure and Fabric
- Build ingestion/transformation pipelines with Fabric Data Pipelines, Dataflows Gen2, Spark Notebooks, and ADF/Synapse
- Develop reusable frameworks for data quality, schema management, and metadata-driven processing
- Implement CDC, SCD, and incremental processing in Lakehouse layers
Requirements
- 3-7+ years in data engineering with the Microsoft stack
- Hands-on with Microsoft Fabric components: OneLake, Lakehouse, Data Pipelines, Dataflows Gen2, Spark Notebooks, Shortcuts, Power BI integration
- Strong SQL (T-SQL), PySpark/Python, and data modeling for analytics
- Ingestion and transformation patterns: CDC, SCD, incremental ETL/ELT
- Performance tuning, partitioning strategies, and file formats
- Experience with Azure Data Lake Storage, Azure Synapse, Azure Data Factory, Azure Key Vault, Event Hubs/IoT Hub
- CI/CD and DevOps: Git, deployment pipelines, environment strategies
- Excellent communication skills with client-facing experience in consulting
Benefits
- Medical, dental, and vision insurance
- 401(k) plan with company match
- Tuition reimbursement
- Collaborative, inclusive culture with growth opportunities