Added: 1 day ago
Type: Full time
Salary: Not provided

Related skills

Redshift, AWS, PostgreSQL, GraphQL, PySpark

📋 Description

  • Assist in implementing a data mesh using GraphQL APIs to expose domain data products.
  • Build and maintain an AWS data lake using S3, Glue, Lake Formation, Athena, and Redshift.
  • Develop and maintain ETL/ELT pipelines using AWS Glue and PySpark for batch and streaming workloads (see the sketch after this list).
  • Support AWS DMS pipelines to replicate data into Aurora PostgreSQL for near real-time analytics.
  • Follow best practices for data governance, quality, observability, and API design.
  • Collaborate with product, engineering, and analytics teams to deliver reliable data solutions.
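
To give a feel for the Glue/PySpark work described above, here is a minimal sketch of a batch Glue job that reads a Data Catalog table, cleans it with PySpark, and writes partitioned Parquet back to an S3 data lake. The database, table, column, and bucket names (sales_db, raw_orders, order_id, order_ts, amount, s3://example-data-lake/...) are hypothetical placeholders, not details from this posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap: resolve the job name and wire up contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog (hypothetical names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",
    table_name="raw_orders",
)

# Transform with plain PySpark: dedupe, derive a date column, drop bad rows.
orders = (
    raw.toDF()
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

# Write curated, partitioned Parquet back to the data lake bucket.
(
    orders.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-data-lake/curated/orders/")
)

job.commit()
```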

🎯 Requirements

  • Bachelor’s degree in a technical field such as Computer Science or Mathematics.
  • Strong Python and SQL fundamentals; comfortable writing queries and scripts.
  • Academic or project experience with data engineering concepts: pipelines, data modeling, ETL/ELT.
  • Exposure to cloud platforms, particularly AWS.
  • Familiarity with data structures, distributed systems, or database design.
  • Collaborative mindset and strong communication; able to explain your thinking clearly.
