Added: 1 hour ago
Type: Full time
Salary: Not provided

Related skills

AWS, ETL, data warehouse, Python, Spark

πŸ“‹ Description

  • Deliver complex data platform components and migrations.
  • Troubleshoot performance, scalability, and reliability issues.
  • Contribute to design and engineering standards.
  • Communicate technical topics clearly to peers and stakeholders.
  • Produce high-quality docs and implementation reviews.

🎯 Requirements

  • Cloud-native data platforms and pipelines (Spark/PySpark)
  • Data lake/lakehouse practices and schema design
  • Python (OOP, testing) with solid engineering practices
  • Relational databases, schema design, and query optimization
  • Migrations, modernization, HA/DR, security, cost optimization
  • Infrastructure-as-code, CI/CD, and automation
  • English: B2 or higher

🎁 Benefits

  • 100% remote work
  • Generous holidays and flexible PTO
  • Competitive phantom equity
  • Paid exams and certifications
  • Equipment and office stipend
  • Worldwide team and corporate culture