Related skills
SQL, Python, GCP, Databricks, dbt

Description
- Own the end-to-end data ecosystem, from ingestion to trusted datasets.
- Design, implement, and scale data pipelines and data warehouse infrastructure.
- Build and maintain core analytics data models for metrics.
- Establish a semantic layer with source-of-truth dimensions and measures.
- Partner with analytics, engineering, and business teams to translate goals into architecture.
Requirements
- 6+ years of experience in data engineering or analytics engineering.
- Strong hands-on Databricks experience in production (Spark, Delta Lake).
- Proven experience building and maintaining dbt models and transformations.
- Advanced SQL skills for analytical modeling and performance optimization.
- Solid Python experience for data processing and pipeline orchestration.
- Experience working on GCP-based data platforms.
Benefits
- 100% Remote Work: work from anywhere with an internet connection.
- Competitive USD pay.
- Paid Time Off to recharge.
- Autonomy: manage your time and outcomes.
- Work with top U.S. companies on high-impact projects.