Added: 7 minutes ago
Type: Full time

Related skills

Redshift, Snowflake, SQL, Python, Databricks

📋 Description

  • Build and maintain ETL/ELT data pipelines in Databricks and Spark for analytics and AI use cases.
  • Develop and evolve data models to support reporting, experimentation, and GenAI workflows.
  • Implement monitoring, alerts, and testing for data quality, timeliness, and lineage.
  • Orchestrate data workflows at scale with Databricks Jobs and DBT.
  • Contribute to data pipelines for retrieval-augmented generation (RAG) and embeddings.
  • Partner with AI engineers and data scientists to enable experimentation and deployments.

🎯 Requirements

  • 2–3 years of industry experience in data engineering, including significant work building large-scale data platforms.
  • Hands-on experience with Databricks, DBT, Redshift, RDS, Snowflake, or similar solutions.
  • Proficiency in Python and SQL, with experience in designing robust ETL/ELT pipelines.
  • Experience orchestrating data workflows at scale and enabling machine learning or AI use cases.
  • Strong understanding of data modeling, performance optimization, and cost-efficient infrastructure design.
  • Located in and authorized to work in the United States (this is a fully remote role).

๐ŸŽ Benefits

  • Flexible, employee-led remote model.
  • Professional development stipend.
  • Comprehensive health and parental leave plans.
  • Equity (for eligible roles).