Related skills
Redshift, Snowflake, SQL, Python, Databricks

Description
- Build and maintain ETL/ELT data pipelines in Databricks and Spark for analytics and AI use cases.
- Develop and evolve data models to support reporting, experimentation, and GenAI workflows.
- Implement monitoring, alerts, and testing for data quality, timeliness, and lineage.
- Orchestrate data workflows at scale with Databricks Jobs and dbt.
- Contribute to data pipelines for retrieval-augmented generation (RAG) and embeddings.
- Partner with AI engineers and data scientists to enable experimentation and deployments.
๐ฏ Requirements
- 2-3 years of industry experience in data engineering, including significant work building large-scale data platforms.
- Hands-on experience with Databricks, dbt, Redshift, RDS, Snowflake, or similar solutions.
- Proficiency in Python and SQL, with experience in designing robust ETL/ELT pipelines.
- Experience orchestrating data workflows at scale and enabling machine learning or AI use cases.
- Strong understanding of data modeling, performance optimization, and cost-efficient infrastructure design.
- Located in and authorized to work in the United States (this is a fully remote role).
Benefits
- Flexible, employee-led remote model.
- Professional development stipend.
- Comprehensive health and parental leave plans.
- Equity (for eligible roles).