Related skills
sql python gcp databricks dbt

Description
- Own the end-to-end data ecosystem, from ingestion to modeled datasets.
- Design, implement, and scale data pipelines and data warehouse infrastructure.
- Build and maintain core analytics data models for metrics.
- Establish and evolve a semantic layer with source-of-truth dimensions and measures.
- Partner with analytics, engineering, and business teams to translate goals into data architecture.
- Build and extend internal data apps and tools for analytics use cases.
Requirements
- 6+ years of experience in data or analytics engineering.
- Strong hands-on Databricks production experience (Spark, Delta Lake).
- Proven experience building and maintaining dbt models and transformations.
- Advanced SQL for analytical modeling and performance optimization.
- Solid Python experience for data processing and pipeline orchestration.
- Experience with Google Cloud Platform (GCP)-based data platforms.
Benefits
- 100% Remote Work: work from anywhere with an internet connection
- Highly Competitive USD Pay
- Paid Time Off
- Work with Autonomy
- Work with Top American Companies