Related skills
snowflake, etl, python, databricks, dbt

Description
- Build and maintain data pipelines that ingest customer data using PySpark, Python, and dbt (a minimal sketch follows this list).
- Improve new-customer integrations by building tooling that makes onboarding faster and repeatable.
- Contribute to AI tooling adoption (LLM-assisted data cleaning, semantic validation).
- Collaborate with product, engineering, and GTM teams to deliver data solutions.
- Identify ETL bottlenecks and optimize them to improve runtime and scalability.
- Solve real-world data quality challenges, extracting usable signal from messy data.
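
To give a concrete flavor of the pipeline work described above, here is a minimal PySpark ingestion sketch. The input path, column names, and output table name are hypothetical illustrations, not details from the actual role.

```python
# Minimal sketch of a PySpark ingestion step like the ones described above.
# The input path, column names, and output table are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-ingest").getOrCreate()

# Read raw customer data (hypothetical source path).
raw = spark.read.json("s3://example-bucket/raw/customers/")

# Typical cleaning pass over messy data: normalize fields, drop unusable rows.
cleaned = (
    raw.withColumn("email", F.lower(F.trim(F.col("email"))))
       .withColumn("signup_date", F.to_date(F.col("signup_date"), "yyyy-MM-dd"))
       .dropna(subset=["customer_id"])
       .dropDuplicates(["customer_id"])
)

# Land the result in a staging table (hypothetical name) that downstream
# dbt models could build on.
cleaned.write.mode("overwrite").saveAsTable("staging.customers")
```

In a stack like this, dbt models would typically pick up the staging table and transform it into analytics-ready marts.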
Requirements
- 2+ years building ETLs or data workflows with Python, PySpark, SQL, or similar.
- Comfortable turning messy datasets into structured data.
- Ability to identify automation opportunities to simplify workflows.
- Experience or interest in Databricks, Snowflake, and dbt.
- Strong problem-solving skills when requirements are ambiguous.
- Detail-oriented, delivering robust, maintainable solutions.
- Collaborative, communicative, and eager to learn.
Benefits
- Remote-friendly with eligibility in listed US states.
- Equal employment opportunities (EEO) for all employees and applicants.
- Growth, learning, and collaboration opportunities.
- Inclusive, mission-driven culture focused on reducing food waste.