Related skills
sql · python · graphql · ci/cd · airflow

📋 Description
- Design data mesh architecture exposing domain data via GraphQL APIs.
- Build and maintain an AWS data lake on S3, Glue, Lake Formation, and Redshift.
- Develop and optimize ETL/ELT pipelines with AWS Glue and PySpark.
- Implement AWS DMS pipelines to replicate data into Aurora PostgreSQL for real-time analytics.
- Support data governance, quality, observability, and API design best practices.
- Collaborate with product, engineering, and analytics teams to deliver robust data solutions.
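To illustrate the ETL/ELT work described above, here is a minimal sketch of the extract/transform/load pattern in plain Python. In this role the equivalent logic would run on AWS Glue with PySpark against S3 and Redshift; the record fields and function names here are hypothetical stand-ins.

```python
# Minimal ETL sketch (plain Python). Field names and the list-based
# "sink" are hypothetical; a Glue/PySpark job would read from and
# write to S3/Redshift instead.

def extract(raw_rows):
    """Extract: parse raw CSV-like rows into dicts."""
    keys = ("order_id", "amount", "currency")
    return [dict(zip(keys, row.split(","))) for row in raw_rows]

def transform(records):
    """Transform: drop malformed rows and normalise amounts to floats."""
    cleaned = []
    for rec in records:
        try:
            rec["amount"] = float(rec["amount"])
        except (ValueError, KeyError):
            continue  # skip rows with missing or unparseable amounts
        cleaned.append(rec)
    return cleaned

def load(records, sink):
    """Load: append cleaned records to a sink; returns rows loaded."""
    sink.extend(records)
    return len(records)

raw = ["1,19.99,EUR", "2,oops,EUR", "3,5.00,USD"]
sink = []
loaded = load(transform(extract(raw)), sink)
print(loaded)  # 2 — the malformed middle row is dropped
```

The same three-stage shape applies whether the pipeline is batch (Glue jobs orchestrated by Airflow) or continuous (DMS replication into Aurora PostgreSQL); only the extract and load endpoints change.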
🎯 Requirements
- Bachelor’s degree in Computer Science, Mathematics, or related field.
- At least 4 years of data engineering experience with ownership of complex data systems.
- Strong experience with AWS data lake technologies (S3, Glue, Lake Formation, Athena, Redshift).
- Understanding of data mesh principles and decentralized data architecture.
- Proficiency in Python and SQL.
- Experience with data modeling, orchestration tools (Airflow), and CI/CD pipelines.