Related skills
SQL, Python, Apache Spark, PySpark, Iceberg

Description
- Design, build, and optimize scalable data pipelines with PySpark (a minimal sketch follows this list)
- Develop data models for a centralized data lake and data warehouse
- Implement efficient data transformations and optimize them for performance
- Build analytics solutions with SQL, AWS Athena views, Iceberg tables, and QuickSight dashboards
- Collaborate with cross-functional teams; monitor data workflows and data quality
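As a rough illustration of the kind of work described above, here is a minimal PySpark sketch: it reads raw data from S3, applies a simple aggregation, and writes the result to an Iceberg table registered in the AWS Glue Data Catalog so it can be queried from Athena or visualized in QuickSight. All names are hypothetical (the S3 paths, the `glue_catalog` configuration, and the `daily_order_totals` table are assumptions for illustration, not details from this posting).

```python
# Illustrative sketch only; paths, catalog settings, and table names are assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-daily-aggregation")  # hypothetical job name
    # Register an Iceberg catalog backed by the AWS Glue Data Catalog (assumed setup)
    .config("spark.sql.catalog.glue_catalog", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue_catalog.warehouse", "s3://example-bucket/warehouse/")  # hypothetical path
    .getOrCreate()
)

# Read raw order events from the data lake (hypothetical location and schema)
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: keep completed orders and aggregate to a daily, per-customer grain
daily_totals = (
    orders
    .filter(F.col("status") == "completed")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "customer_id")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Write to an Iceberg table in the Glue catalog, queryable from Athena/QuickSight
(
    daily_totals.writeTo("glue_catalog.analytics.daily_order_totals")
    .partitionedBy(F.col("order_date"))
    .createOrReplace()
)
```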
Requirements
- Strong hands-on experience with Apache Spark, PySpark, and AWS Glue
- Experience designing AWS-based data architectures (S3, Athena, data warehouses)
- Expertise in Python and SQL for working with large datasets
- Experience with ETL/ELT pipelines, data lakes, and event-driven architectures
- Experience working in Agile environments; BI/reporting with QuickSight
Benefits
- 100% remote work
- Competitive pay in USD
- Paid time off
- Work with autonomy
- Work with top American companies