Related skills
aws sql python gcp databricks
Description
- Design, develop, and maintain robust ETL/ELT pipelines (SQL, Python) with large datasets
- Build scalable data workflows with Spark, PySpark, and cloud platforms
- Translate business requirements into well-structured analytics datasets
- Apply data modeling principles for analytics-ready data
- Optimize pipelines for performance, reliability, and cost
- Ensure data quality, validation, and monitoring across pipelines
Requirements
- 5+ years of experience in data engineering or ETL development
- Strong proficiency in SQL and Python, with experience building production-grade ETL pipelines
- Hands-on experience with Databricks, Spark, and PySpark
- Experience with cloud data platforms (AWS, Azure, GCP)
- Strong understanding of data warehousing concepts and dimensional modeling
- Experience building batch and incremental data pipelines
Benefits
- Generous time off policies
- Top-shelf benefits
- Education, wellness and lifestyle support