Added: 4 hours ago
Type: Full time
Salary: Not provided

Related skills

AWS, ETL, Python, Spark, PySpark

πŸ“‹ Description

  • Deliver complex data platform components or migration solutions independently
  • Troubleshoot performance, scalability, and reliability issues
  • Contribute to solution design and engineering standards
  • Communicate technical topics clearly to peers and stakeholders
  • Produce high-quality documentation and implementation reviews

🎯 Requirements

  • 4+ years RDBMS or data engineering experience
  • 2+ years AWS-based implementations
  • Strong experience building scalable cloud-native data platforms
  • Advanced data lake/lakehouse practices
  • Hands-on Spark/PySpark; Python (OOP, testing)
  • English: Upper-Intermediate (B2)

🎁 Benefits

  • 100% remote work
  • Generous holidays and flexible PTO
  • Competitive phantom equity
  • Paid exams and certifications
  • Peer bonus awards
  • State-of-the-art laptop and tools

