Related skills
aws, etl, data warehouse, python, spark

Description
- Deliver complex data platform components and migrations.
- Troubleshoot performance, scalability, and reliability issues.
- Contribute to design and engineering standards.
- Communicate technical topics clearly to peers and stakeholders.
- Produce high-quality documentation and implementation reviews.
Requirements
- Cloud-native data platforms and pipelines (Spark/PySpark)
- Data lake/lakehouse practices and schema design
- Python (OOP, testing) with solid engineering
- Relational DBs, schema design, query optimization
- Migrations, modernization, HA/DR, security, cost optimization
- Infrastructure-as-code, CI/CD, automation; English: B2+
Benefits
- 100% remote work
- Generous holidays and flexible PTO
- Competitive phantom equity
- Paid for exams and certifications
- Equipment and office stipend
- Worldwide team and corporate culture