Related skills
SQL, Python, Apache Spark, PySpark, Iceberg

Description
- Design, build, and optimize scalable PySpark data pipelines.
- Develop and maintain data models for lake and warehouse integration.
- Implement data transformations; optimize for scale and reliability.
- Create analytics solutions using SQL transforms, Athena views, Iceberg tables, and QuickSight dashboards.
- Collaborate with cross-functional teams to ensure data quality.
Requirements
- Hands-on Apache Spark, PySpark, and AWS Glue for distributed pipelines.
- Designing and maintaining AWS data architectures (S3, Athena, warehouses).
- Python and SQL proficiency for large-scale data processing.
- ETL/ELT, data lakes, and event-driven data workflows.
- Strong collaboration and problem-solving skills in Agile teams; BI experience with QuickSight.
Benefits
- 100% remote work: laptop and internet connection provided.
- Competitive USD pay.
- Paid time off.
- Autonomy to manage your time and outcomes.
- Culture that values you and supports work-life balance.
- Diverse, global network across 25+ countries.