Added: less than a minute ago
Type: Full time
Salary: Not provided

Related skills

SQL, Python, Kubernetes, Scala, Hadoop

📋 Description

  • Design and implement large-scale distributed data processing systems (Hadoop, Spark, Flink)
  • Build robust data pipelines and infrastructure to transform data into insights
  • Architect data lakes, warehouses, and real-time streaming platforms
  • Implement security and optimize performance across the data stack
  • Leverage containerization (Docker, Kubernetes) and streaming technologies (Kafka, Confluent) to drive innovation
  • Collaborate with analysts and engineers; mentor juniors and raise the team's technical bar

🎯 Requirements

  • Deep knowledge of data architecture, ETL/ELT pipelines, and distributed systems
  • Proactive problem-solving and ownership from concept to production
  • Strong coding skills in Python, SQL, or Scala; champion engineering best practices (version control, testing, CI/CD)
  • Collaborate with analysts, data scientists, and engineers; mentor junior team members
  • Focus on data governance, data lineage, and security
  • Experience with Airflow, dbt, Spark, Flink, Kafka; knowledge of Confluent and AWS

๐ŸŽ Benefits

  • Remote First, Remote Always
  • PTO in accordance with local labor requirements
  • 2 corporate apartment accommodations for team use (San Diego & São Paulo)
  • Monthly Wellness Fridays
  • Fully Paid Parental Leave
  • Home office stipend based on country of residency