Related skills
SQL, Python, GCP, Databricks, dbt
Description
- Own the end-to-end data ecosystem, from ingestion to trusted analytics datasets.
- Design, implement, and scale data pipelines and data warehouses.
- Build and maintain analytics data models for metrics and performance.
- Establish a semantic layer with source-of-truth dimensions and measures.
- Partner with analytics, engineering, and business teams to align goals with architecture.
- Ensure data quality, governance, monitoring, and alerting across the stack.
Requirements
- 6+ years of experience in data engineering or analytics engineering
- Strong hands-on Databricks experience in production (Spark, Delta Lake)
- Proven experience building and maintaining dbt models and transformations
- Advanced SQL skills for analytical modeling and performance optimization
- Solid Python experience for data processing and pipeline orchestration
- Experience working with data platforms built on Google Cloud Platform (GCP)
Benefits
- 100% Remote Work from anywhere
- Highly competitive USD pay
- Paid time off
- Work with autonomy
- Work with top U.S. companies