Location
Type
Full time
Salary
Not provided
Related skills
SQL, Python, Kubernetes, Scala, Hadoop
Description
- Design and implement large-scale distributed data processing systems (Hadoop, Spark, Flink)
- Build robust data pipelines and infrastructure to transform data into insights
- Architect data lakes, warehouses, and real-time streaming platforms
- Implement security and optimize performance across the data stack
- Leverage containerization (Docker, Kubernetes) and streaming tech (Kafka, Confluent) to drive innovation
- Collaborate with analysts and engineers; mentor juniors and raise the team's technical bar
Requirements
- Deep knowledge of data architecture, ETL/ELT pipelines, and distributed systems
- Proactive problem-solving and ownership from concept to production
- Strong coding skills in Python, SQL, or Scala; a champion of best practices (version control, testing, CI/CD)
- Ability to collaborate with analysts, data scientists, and engineers, and to mentor junior colleagues
- A focus on data governance, data lineage, and security
- Experience with Airflow, dbt, Spark, Flink, Kafka; knowledge of Confluent and AWS
Benefits
- Remote First, Remote Always
- PTO in accordance with local labor requirements
- 2 corporate apartment accommodations for team use (San Diego & São Paulo)
- Monthly Wellness Fridays
- Fully Paid Parental Leave
- Home office stipend based on country of residency