Added: 18 days ago
Type: Full time

Related skills

DynamoDB, PostgreSQL, SQL, Python, Databricks

πŸ“‹ Description

  • Build scalable data pipelines using Databricks, Python, and SQL.
  • Own the transformation layer with dbt for modular data models.
  • Develop end-to-end ELT/ETL with Databricks Workflows or Airflow.
  • Optimize Spark jobs and SQL for efficiency and lower latency.
  • Implement data quality checks to ensure source-of-truth accuracy.
  • Collaborate with product and engineering to define data needs.

🎯 Requirements

  • 3+ years in data ingestion, transformation, and pipeline orchestration (Databricks/Airflow).
  • Experience working on a larger data team that practices CI/CD with GitHub and Agile workflows.
  • Programming: Python, SQL, Spark (PySpark).
  • Declarative configuration: YAML, Terraform (HCL).
  • Data stores and catalogs: DynamoDB, PostgreSQL, SQL Server, and Databricks Unity Catalog.
  • BS in Computer Science or equivalent.

🎁 Benefits

  • Competitive medical, dental, and vision insurance.
  • Mental health resources.
  • Generous paid time off with holidays.
  • Paid parental leave for biological and adoptive parents.
  • Education stipend for continued learning.
  • Fitness and wellness reimbursement.