Related skills: DynamoDB, PostgreSQL, SQL, Python, Databricks

Description
- Build scalable data pipelines using Databricks, Python, and SQL.
- Own the transformation layer with dbt for modular data models.
- Develop end-to-end ELT/ETL with Databricks Workflows or Airflow.
- Optimize Spark jobs and SQL for efficiency and lower latency.
- Implement data quality checks to ensure source-of-truth accuracy.
- Collaborate with product and engineering to define data needs.
Requirements
- 3+ years in data ingestion, transformation, and pipeline orchestration (Databricks/Airflow).
- Experience working on a larger data team, with CI/CD via GitHub and Agile practices.
- Programming: Python, SQL, Spark (PySpark).
- Declarative languages: YAML, Terraform.
- Data platforms: DynamoDB, Unity Catalog, PostgreSQL, SQL Server.
- BS in Computer Science or equivalent.
Benefits
- Competitive medical, dental, and vision insurance.
- Mental health resources.
- Generous paid time off plus paid holidays.
- Paid parental leave for biological and adoptive parents.
- Education stipend for continued learning.
- Fitness and wellness reimbursement.