Related skills
Redshift, S3, GraphQL, Airflow, PySpark

Description
- Design and implement a data mesh using GraphQL to expose domain data products.
- Build and maintain an AWS data lake with S3, Glue, Lake Formation, Athena, Redshift.
- Develop and optimize ETL/ELT pipelines with AWS Glue and PySpark for batch/streaming workloads.
- Implement AWS DMS pipelines to replicate data into Aurora PostgreSQL for near real-time analytics.
- Support data governance, quality, observability, and API design.
- Collaborate with product/engineering/analytics to deliver reusable data solutions.
- Contribute to automation and CI/CD for data infrastructure and pipelines.
- Stay current with technology trends to evolve the platform.
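To make the ETL/ELT responsibility concrete, here is a minimal, hedged sketch of the kind of row-level cleansing step a Glue/PySpark batch job might apply before writing to the lake. It is plain Python rather than a full Glue job, and every field name (`user_id`, `event_type`, `ts`) is hypothetical, not taken from the posting:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, Iterator, Optional

@dataclass(frozen=True)
class Event:
    user_id: str
    event_type: str
    ts: datetime

def parse_event(raw: dict) -> Optional[Event]:
    """Validate and normalize one raw record; return None if malformed."""
    try:
        return Event(
            user_id=str(raw["user_id"]).strip(),
            event_type=str(raw["event_type"]).strip().lower(),
            ts=datetime.fromisoformat(raw["ts"]),
        )
    except (KeyError, ValueError, TypeError):
        return None

def clean_batch(rows: Iterable[dict]) -> Iterator[Event]:
    """Drop malformed rows and deduplicate on (user_id, event_type, ts)."""
    seen = set()
    for raw in rows:
        ev = parse_event(raw)
        if ev is None:
            continue  # skip records missing fields or with unparsable timestamps
        key = (ev.user_id, ev.event_type, ev.ts)
        if key in seen:
            continue  # skip exact duplicates after normalization
        seen.add(key)
        yield ev
```

In an actual Glue PySpark job this logic would typically be expressed with DataFrame operations (e.g. `dropDuplicates` plus column-level casts) so Spark can parallelize it across partitions; the function above only illustrates the validation/dedup contract.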