Related skills
SQL, Python, Databricks, dbt, Airflow
📋 Description
- Design, develop, and optimize data pipelines for structured and unstructured data at scale.
- Build and maintain data models (fact and dimension tables, SCD Type 2, data marts).
- Create reports and dashboards that track business performance.
- Collaborate with product, data science, and analytics to enable AI features.
- Support Databricks data architecture transformation and migration.
- Own projects end-to-end, from planning through delivery.
🎯 Requirements
- 5+ years of software/data engineering experience.
- SQL expertise (window functions, CTEs, complex joins).
- Intermediate-to-advanced Python (OOP, testing, package management).
- Databricks experience (streaming, Unity Catalog, resource management).
- Spark experience (partitioning, skew handling, Delta Lake operations, Spark UI).
- Experience with dbt, Airflow, Kafka, and API/S3 ingestion.
- Comfort deploying/monitoring ML models in production.
- Ability to explain technical concepts to non-technical teammates.
- Ownership mindset: self-driven, proactive, accountable.
🎁 Benefits
- Ownership and Savings: Stock options + TFSA/RRSP with 4% company match
- Health and Wellness: Medical, dental, and vision for you and dependents
- Time Flexibility: Flex time off + company holidays + designated focus periods
- Family Support: EI top-up for maternity/parental leave (offered after 6 months of service)
- Work Flexibility: Work From Anywhere Month + meeting-free weeks each year
- Protection Plans: Life insurance + disability coverage