Related skills
SQL, Python, Apache Spark, PySpark, Iceberg
Description
- Design, build, and optimize scalable data pipelines with PySpark and Spark.
- Develop and maintain data models; integrate sources into data lake and data warehouse.
- Transform data efficiently; optimize pipelines for performance and reliability.
- Create analytics solutions: SQL transforms, AWS Athena views, Iceberg, QuickSight.
- Collaborate with cross-functional teams to ensure data quality.
Requirements
- Hands-on Apache Spark, PySpark, and AWS Glue; build scalable data pipelines.
- Experience designing and maintaining AWS data architectures (S3, Athena, data warehouses).
- Proficient in Python and SQL for processing large datasets.
- Experience with ETL/ELT, data lakes, and event-driven architectures.
- Strong collaboration, analytical, and problem-solving skills; experience with Agile practices and BI tools (QuickSight).
Benefits
- 100% remote work; work from anywhere with a laptop and an internet connection.
- Competitive USD pay.
- Paid time off to recharge.
- Autonomy to manage time; focus on results.
- Opportunities with leading U.S. companies.