Related skills
snowflake, sql, python, databricks, scala

Description
- Design and maintain scalable data pipelines for ingestion, transformation, and delivery.
- Collaborate with Analytics Engineers and Product teams to curate datasets and data contracts.
- Develop data architectures (lakehouses/medallion) with Databricks, Delta Lake, or Iceberg.
- Optimize Snowflake usage and performance, ensuring data quality and cost efficiency.
- Support and scale orchestration platforms (Dagster) and metadata systems (Unity Catalog/Collibra); monitor with Datadog.
- Collaborate across data engineering, analytics engineering, and security teams to deliver stable infrastructure for diverse workflows.
- Build tools, alerting, and documentation that enable reliable operation and self-service across the data stack.
Requirements
- 8+ years in data engineering or infrastructure roles focused on building and scaling data tooling.
- Expertise in Python or Scala; strong SQL skills for data modeling and query optimization.
- Deep Snowflake experience: clustering, tuning, profiling, access control.
- Lakehouse technologies (Databricks/Delta Lake/Iceberg/Hudi) and query engines (Athena/Presto).
- Experience designing and implementing scalable ETL pipelines with dbt and Databricks.
- Infrastructure-as-code, orchestration (Dagster/Airflow), and CI/CD.
- Proactive mindset and strong problem-solving skills for complex infrastructure issues.
- Excellent collaboration and communication with cross-functional teams.
Benefits
- Flexible work environment
- Unlimited vacation
- 100% paid employee health benefits (medical, dental, vision)
- Commuter benefits
- 401(k) with employer match
- Sabbatical leave after 5+ years
- Parental leave and fertility/family planning reimbursement
- Cell phone reimbursement