Related skills
Snowflake, ETL, SQL, Python, Databricks

Description
- Build tools to streamline customer integrations and onboarding.
- Create robust ETLs in PySpark and DBT to process billions of records.
- Investigate and adopt new data tech to solve pain points.
- Collaborate with product, engineering, and go-to-market teams to deliver data solutions.
- Optimize ETL runtime and data processing at scale.
- Tackle real-world data quality issues with messy data.
Requirements
- Significant experience designing and maintaining ETLs for large-scale datasets.
- Proficiency with Python, PySpark, and SQL; experience with Databricks, Snowflake, or DBT.
- Strong problem-solving with ambiguous requirements.
- Focus on practical outcomes; balance rigor with delivery.
- Experience with complex, unclean datasets.
- Ability to identify tooling or automation to simplify workflows.
- Excellent communication and mentoring in technical projects.
Benefits
- Join a mission-driven company reducing millions of pounds of food waste.
- Work on real-world problems with direct customer impact.
- Collaborative, supportive team where ideas are valued and acted on.
- Use cutting-edge tools and platforms to solve meaningful data challenges.