Added
9 days ago
Type
Full time
Salary
Salary not provided

Related skills

ETL, SQL, data modeling, PySpark, Amazon Redshift

πŸ“‹ Description

  • Design and implement a modern data warehouse using Amazon Redshift.
  • Develop and maintain ETL pipelines with AWS Glue (Python/PySpark).
  • Structure the S3 data lake for efficient storage and integration with Redshift.
  • Define data models (star schema, dimensional modeling) for reporting.
  • Establish data governance with documentation and quality checks.
  • Collaborate with BI teams to meet reporting needs (CRM, dashboards).
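The data-modeling duty above centers on a star schema: a central fact table joined to denormalized dimension tables for fast reporting. A minimal sketch of the idea, using stdlib `sqlite3` in place of Redshift and with hypothetical table and column names (`fact_sales`, `dim_customer`, `dim_date` are illustrative, not from the posting):

```python
import sqlite3

# Hypothetical minimal star schema for CRM reporting: one fact table
# (fact_sales) keyed to two dimensions (dim_customer, dim_date).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,
    customer_name TEXT,
    region        TEXT
);
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,  -- smart key, e.g. 20240115
    full_date TEXT,
    year      INTEGER,
    month     INTEGER
);
CREATE TABLE fact_sales (
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    amount       REAL
);
""")
cur.execute("INSERT INTO dim_customer VALUES (1, 'Acme', 'US-East')")
cur.execute("INSERT INTO dim_date VALUES (20240115, '2024-01-15', 2024, 1)")
cur.execute("INSERT INTO fact_sales VALUES (1, 20240115, 250.0)")

# A typical dashboard query: revenue by region and month.
row = cur.execute("""
    SELECT c.region, d.year, d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c ON f.customer_key = c.customer_key
    JOIN dim_date     d ON f.date_key     = d.date_key
    GROUP BY c.region, d.year, d.month
""").fetchone()
print(row)  # ('US-East', 2024, 1, 250.0)
```

The same DDL and query shape carry over to Redshift, which additionally rewards choosing distribution and sort keys on the join columns.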

🎯 Requirements

  • BS in Computer Science or related field, or equivalent experience.
  • 5+ years of data engineering/architecture experience.
  • Strong SQL and Python skills (PySpark a plus).
  • Experience building ETL pipelines (AWS Glue, Airflow, or similar).
  • Knowledge of data modeling (star schema, slowly changing dimensions).
  • Ability to lead technical decisions in a small team.
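The slowly-changing-dimensions requirement usually means SCD Type 2: when a tracked attribute changes, the current dimension row is end-dated and a new row is appended, preserving history. A sketch of that logic in plain Python; the row shape and field names (`key`, `attrs`, `start_date`, `end_date`, `current`) are assumptions for illustration:

```python
from datetime import date

def scd2_upsert(dim_rows, business_key, new_attrs, as_of):
    """SCD Type 2 update: if the tracked attributes changed, close the
    current row for `business_key` and append a fresh current row."""
    for row in dim_rows:
        if row["key"] == business_key and row["current"]:
            if row["attrs"] == new_attrs:
                return dim_rows  # no change: keep the current row
            row["current"] = False   # end-date the old version
            row["end_date"] = as_of
    dim_rows.append({
        "key": business_key,
        "attrs": new_attrs,
        "start_date": as_of,
        "end_date": None,
        "current": True,
    })
    return dim_rows

# A customer moves region: history is kept, not overwritten.
rows = [{"key": 42, "attrs": {"region": "US-East"},
         "start_date": date(2023, 1, 1), "end_date": None, "current": True}]
rows = scd2_upsert(rows, 42, {"region": "US-West"}, date(2024, 1, 15))
print(len(rows))                   # 2: old and new versions coexist
print(rows[0]["current"])          # False
print(rows[1]["attrs"]["region"])  # US-West
```

In a warehouse this is typically expressed as a `MERGE`/update-then-insert over the dimension table, but the row-versioning rule is the same.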

🎁 Benefits

  • Salary in USD
  • Long-term
  • Flexible schedule (within US time zones)
  • 100% Remote

