Databricks / PySpark Data Engineer

Type: Full-time
Salary: Not provided

Related skills

AWS, SQL, Python, GCP, Databricks

📋 Description

  • Design, develop, and maintain scalable data pipelines using PySpark on Databricks (a minimal sketch follows this list)
  • Migrate legacy ETL workloads from Informatica and Teradata to Databricks
  • Develop Databricks-native dashboards and analytics applications
  • Build lightweight Python-based data applications (e.g., FastAPI) to expose data
  • Integrate Databricks pipelines with APIs and application services
  • Optimize Spark workloads for performance, scalability, and cost efficiency
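
The posting itself contains no code, but as a rough, hypothetical sketch of the kind of pipeline the first bullet describes, a minimal PySpark job might look like the following. All table and column names (raw_orders, analytics.orders_daily, order_ts, amount) are placeholder assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical example: aggregate raw orders into a daily summary table.
# On Databricks, getOrCreate() returns the cluster's existing session.
spark = SparkSession.builder.appName("orders_daily").getOrCreate()

raw = spark.read.table("raw_orders")  # assumed source table name

daily = (
    raw
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Overwrite the target Delta table; a production job would more likely
# MERGE incrementally to control cost on large tables.
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_daily")
```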

🎯 Requirements

  • 5+ years of hands-on experience building data pipelines using PySpark in production environments
  • Strong experience with the Databricks platform (workspaces, clusters, Jobs & Workflows, Unity Catalog)
  • Experience building analytics dashboards within Databricks (Databricks SQL)
  • Proven experience designing and building scalable ETL/ELT data pipelines
  • Strong Python development skills, including building REST APIs or data services (a minimal FastAPI sketch follows this list)
  • Experience building or supporting data-driven applications (not just traditional ETL pipelines)
  • Solid understanding of data modeling, including dimensional modeling and transformation patterns
  • Experience using AI-assisted development tools (e.g., Copilot, ChatGPT) in engineering workflows
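
For the REST API requirement above, a lightweight FastAPI service exposing the same hypothetical table might be sketched as follows. The endpoint path, environment variable names, and the use of the databricks-sql-connector package are assumptions for illustration, not requirements stated in the posting.

```python
import os

from databricks import sql  # databricks-sql-connector; an assumed dependency
from fastapi import FastAPI, HTTPException

app = FastAPI(title="orders-api")  # hypothetical service name


@app.get("/orders/daily/{order_date}")
def daily_orders(order_date: str):
    # Connection details come from the environment; variable names are
    # placeholders, not values from the posting.
    with sql.connect(
        server_hostname=os.environ["DATABRICKS_HOST"],
        http_path=os.environ["DATABRICKS_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    ) as conn, conn.cursor() as cursor:
        cursor.execute(
            "SELECT order_count, total_amount "
            "FROM analytics.orders_daily WHERE order_date = :d",
            {"d": order_date},
        )
        row = cursor.fetchone()
    if row is None:
        raise HTTPException(status_code=404, detail="No data for that date")
    return {
        "order_date": order_date,
        "order_count": row[0],
        "total_amount": row[1],
    }
```

A service like this would be run with uvicorn, e.g. `uvicorn app:app` if the file is saved as app.py.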