Added: 6 days ago
Type: Full-time

Related skills

AWS, SQL, Python, Kafka, PySpark

πŸ“‹ Description

  • Design and implement batch and streaming pipelines with PySpark, SQL, and AWS.
  • Translate high-level direction into scoped, high-quality technical solutions.
  • Collaborate with product, design, analysts, and engineers to ship data features.
  • Contribute to architectural discussions and system improvements.
  • Ensure pipeline reliability and data quality with tests and monitoring.
  • Investigate and resolve production issues for owned services.

🎯 Requirements

  • 6–7+ years of experience in data engineering with hands-on execution.
  • Proficiency with PySpark, Python, and SQL, including debugging and optimization.
  • Experience building large-scale pipelines (TB+), with Kafka exposure.
  • Strong data modeling, relational DBs, and NoSQL knowledge.
  • Experience with AWS services such as EMR, Glue, Athena, Lambda, and S3.
  • Experience with Iceberg or other lakehouse technologies (nice to have).

🎁 Benefits

  • Hybrid work with hub locations in Denver, San Francisco, Nashville, and Santiago.
  • In-office perks: lunch, commuter stipend, snacks.
  • Relocation stipend may be available.

