Related skills
SQL, Python, Apache Spark, PySpark, Iceberg

Description
- Design, build, and optimize scalable PySpark data pipelines (a sketch of this kind of work follows this list).
- Develop and maintain data models for a centralized data lake and warehouse.
- Implement efficient transformations for performance and scalability.
- Create analytics data solutions: SQL transforms, Athena views, Iceberg tables, and QuickSight dashboards.
- Collaborate with cross-functional teams and ensure data quality.
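To make the pipeline work concrete, here is a minimal sketch of the kind of PySpark job the role describes: read raw data from S3, apply an efficient transformation, and write the result to an Iceberg table. The bucket path, catalog name (`glue_catalog`), table, and column names are all hypothetical, and the sketch assumes a Spark session already configured with an Iceberg catalog (e.g., via AWS Glue).

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical session; in practice the Iceberg catalog is configured
# via spark.sql.catalog.* settings or the AWS Glue Data Catalog.
spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Read raw events from the data lake (path is illustrative).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Efficient transformation: filter and project before aggregating
# so less data is shuffled across the cluster.
daily_revenue = (
    orders
    .where(F.col("status") == "completed")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Write to an Iceberg table (catalog and table names are hypothetical).
daily_revenue.writeTo("glue_catalog.analytics.daily_revenue").createOrReplace()
```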
Requirements
- Hands-on experience with Apache Spark, PySpark, and AWS Glue.
- Experience designing AWS-based data architectures (S3, Athena, data warehouses); see the Athena sketch after this list.
- Advanced Python and SQL skills for working with large datasets.
- Experience building ETL/ELT pipelines, data lakes, and event-driven architectures.
- Agile collaboration experience and BI/reporting skills with QuickSight.
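As an illustration of the Athena side of the stack, the sketch below creates an Athena view over the Iceberg table from the previous example using boto3. The database, view, workgroup defaults, and S3 output location are assumptions, not details from the posting.

```python
import boto3

# Hypothetical region, database, and output bucket.
athena = boto3.client("athena", region_name="us-east-1")

# Define a view over the Iceberg table produced by the pipeline sketch.
create_view = """
CREATE OR REPLACE VIEW analytics.daily_revenue_v AS
SELECT order_date, revenue
FROM analytics.daily_revenue
WHERE order_date >= DATE '2024-01-01'
"""

# Athena DDL runs as a regular query execution.
athena.start_query_execution(
    QueryString=create_view,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
```

A view like this is what a QuickSight dataset would typically point at, keeping the BI layer decoupled from the physical Iceberg table.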
Benefits
- 100% Remote Work: work from anywhere.
- Highly Competitive USD Pay.
- Paid Time Off.
- Autonomy: manage your time and deliver results.
- Work with Top American Companies.
- Diverse, Global Network.