Related skills
SQL, Python, Apache Spark, PySpark, Iceberg
Description
- Design and build scalable data pipelines with PySpark.
- Develop data models and consolidate sources into the data lake and warehouse.
- Optimize transformations for performance and scalability.
- Create analytics solutions using SQL, Athena, Iceberg, and QuickSight.
- Collaborate across teams to monitor data quality and improve workflows.
Requirements
- Hands-on with Apache Spark, PySpark, and AWS Glue.
- Experience designing AWS-based data architectures (S3, Athena, data warehouses).
- Proficient in Python and SQL for large-scale data processing.
- Experience with ETL/ELT pipelines, data lakes, and event-driven architectures.
- Collaborative and analytical, comfortable working in Agile teams with BI tools such as QuickSight.
Benefits
- 100% Remote Work: work from anywhere.
- Competitive, market-leading compensation in USD.
- Paid time off to recharge.
- Autonomy to manage your time and deliver results.
- Work with top American companies on high-impact projects.