Related skills
SQL, Python, GCP, Databricks, dbt
Description
- Own the end-to-end data ecosystem, from ingestion to analytics-ready datasets.
- Design, implement, and scale data pipelines and data warehouses.
- Build core analytics data models for clear, performant metrics.
- Establish a semantic layer with source-of-truth dimensions and measures.
- Partner with stakeholders to translate goals into architecture decisions.
- Ensure data quality, governance, and monitoring across the stack.
Requirements
- 6+ years in data or analytics engineering.
- Hands-on Databricks production experience (Spark, Delta Lake).
- Proven experience building and maintaining dbt models.
- Advanced SQL for analytical modeling and optimization.
- Strong Python for data processing and orchestration.
- Experience with GCP-based data platforms.
Benefits
- 100% Remote Work: work from anywhere with a laptop and internet.
- Highly Competitive Pay: market-leading compensation in USD.
- Paid Time Off: balance and time to recharge.
- Work with Autonomy: manage your time to deliver results.
- Work with Top American Companies: high-impact projects.
- A Culture That Values You: well-being and balance.