Related skills
Snowflake, SQL, Python, DBT, Kafka
Description
- Design, build, and operate scalable ELT pipelines using Python and PySpark
- Own and improve batch and streaming data systems with Spark and Kafka
- Develop and optimize Snowflake data models and DBT transformations
- Partner with data scientists, analysts, and product teams to translate requirements into data solutions
- Improve observability, data quality, and engineering best practices
- Leverage AI tools to accelerate development and automate workflows
Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3-5 years of professional experience building and operating ETL/ELT pipelines
- Strong proficiency in SQL and data warehousing concepts
- Python experience for data engineering, writing clean, reusable code
- Experience with DBT for data modeling, testing, and documentation is preferred
- Experience with Spark and Kafka for batch or streaming data processing is preferred
Benefits
- Equity package as part of total compensation
- Competitive benefits package
- Opportunity to work with AWS, Snowflake, DBT, Kafka, Spark
- Relocation assistance to Boston office
- Collaborative and inclusive culture