Data Engineer II

Added
6 days ago
Type
Full time

Related skills

Snowflake, SQL, Python, dbt, Kafka

πŸ“‹ Description

  • Design, build, and operate scalable ELT pipelines using Python and PySpark
  • Own and improve batch and streaming data systems with Spark and Kafka
  • Develop and optimize Snowflake data models and dbt transformations
  • Partner with data scientists, analysts, and product teams to translate their requirements into data solutions
  • Improve observability, data quality, and engineering best practices
  • Leverage AI tools to accelerate development and automate workflows

🎯 Requirements

  • Bachelor’s degree in Computer Science, Engineering, or a related field
  • 3-5 years of professional experience building and operating ETL/ELT pipelines
  • Strong proficiency in SQL and data warehousing concepts
  • Python experience for data engineering, with clean, reusable code
  • Experience with dbt for data modeling, testing, and documentation is preferred
  • Experience with Spark and Kafka for batch or streaming data processing is preferred

🎁 Benefits

  • Equity package as part of total compensation
  • Competitive benefits package
  • Opportunity to work with AWS, Snowflake, dbt, Kafka, and Spark
  • Relocation assistance to Boston office
  • Collaborative and inclusive culture

🚚 Relocation support
