Added: 2 days ago
Type: Full time

Related skills

AWS, SQL, Python, Databricks, Apache Spark

📋 Description

  • Design, implement, and optimize big data pipelines in Databricks (an illustrative sketch follows this list).
  • Develop scalable ETL workflows to process large datasets.
  • Leverage Apache Spark for distributed data processing and real-time analytics.
  • Implement data governance, security policies, and compliance standards.
  • Optimize data lakehouse architectures for performance and cost-efficiency.
  • Collaborate with data scientists, analysts, and engineers to enable AI/ML workflows.
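
For illustration only, a minimal PySpark sketch of the kind of pipeline the responsibilities above describe: read raw data, cleanse it, and write it out as a Delta table. The bucket paths, column names, and schema are assumptions invented for this example, and writing with format("delta") assumes a Databricks runtime or an environment with the delta-spark package installed.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical paths for illustration; not part of the job posting.
RAW_PATH = "s3://example-bucket/raw/orders/"
DELTA_PATH = "s3://example-bucket/lakehouse/orders_clean/"

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Extract: read raw JSON events (assumed columns: order_id, order_ts, amount).
raw = spark.read.json(RAW_PATH)

# Transform: deduplicate, enforce types, and drop invalid rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: write a Delta table partitioned by date for cheaper scans.
(
    clean.withColumn("order_date", F.to_date("order_ts"))
         .write.format("delta")
         .mode("overwrite")
         .partitionBy("order_date")
         .save(DELTA_PATH)
)
```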

🎯 Requirements

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
  • 5+ years of hands-on experience with Databricks and Apache Spark.
  • Proficiency in SQL, Python, or Scala for data processing and analysis.
  • Experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
  • Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.
  • Experience with CI/CD tools and DevOps best practices.

🎁 Benefits

  • Hybrid flexibility: 3 days/wk in downtown Toronto; Calgary applicants welcome.
  • Fully covered health, dental, and vision insurance from day one.
  • Growth & learning: continuous learning opportunities and technical direction.
  • Inclusive culture and accommodation support.