Related skills
snowflake, etl, sql, python, databricks

Description
- Build and maintain data pipelines with PySpark, Python, and dbt that process billions of records.
- Build tooling that improves customer integrations and speeds onboarding.
- Adopt AI tooling like LLM-assisted cleaning, semantic validation, and anomaly detection.
- Collaborate across product, engineering, and GTM to deliver data solutions.
- Optimize ETL runtimes and scalability to speed integrations.
- Resolve data quality issues by extracting usable signals from messy customer data.
Requirements
- 2+ years building ETLs/data workflows with Python, PySpark, or SQL.
- Experience turning messy data into structured, usable datasets.
- Experience identifying automation opportunities to simplify workflows.
- Experience or interest in Databricks, Snowflake, and dbt.
- Strong problem-solving skills; able to work through ambiguous requirements to deliver impact.
- Collaborative and communicative across teams.