Related skills
aws etl python spark pyspark
Description
- Deliver complex data platform components or migration solutions independently
- Troubleshoot performance, scalability, and reliability issues
- Contribute to solution design and engineering standards
- Communicate technical topics clearly to peers and stakeholders
- Produce high-quality documentation and implementation reviews
Requirements
- 4+ years RDBMS or data engineering experience
- 2+ years AWS-based implementations
- Strong experience building scalable cloud-native data platforms
- Advanced data lake/lakehouse practices
- Hands-on Spark/PySpark; Python (OOP, testing)
- English: Upper-Intermediate (B2)
Benefits
- 100% remote work
- Generous holidays and flexible PTO
- Competitive phantom equity
- Exams and certifications paid for
- Peer bonus awards
- State-of-the-art laptop and tools