Related skills
ETL, SQL, data modeling, PySpark, Amazon Redshift
Description
- Design and implement a modern data warehouse using Amazon Redshift.
- Develop and maintain ETL pipelines with AWS Glue (Python/PySpark); see the sketch after this list.
- Structure the S3 data lake for efficient storage and integration with Redshift.
- Define data models (star schema, dimensional modeling) for reporting.
- Establish data governance with documentation and quality checks.
- Collaborate with BI teams to meet reporting needs (CRM, dashboards).
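To ground the Glue/PySpark bullet above, here is a minimal sketch of such a job, assuming a raw table already registered in the Glue Data Catalog: it reads from the S3 data lake, remaps columns, and loads the result into a Redshift fact table. The catalog database (`lake_raw`), table (`orders`), connection (`redshift-dwh`), and target table names are hypothetical placeholders, not details from this posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap; TempDir is the S3 staging path Glue uses
# when writing to Redshift.
args = getResolvedOptions(sys.argv, ["JOB_NAME", "TempDir"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source: a raw table in the S3 data lake, registered in the Glue Data
# Catalog (database and table names are placeholders).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="lake_raw",
    table_name="orders",
)

# Rename and cast columns to match the warehouse fact table.
mapped = ApplyMapping.apply(
    frame=orders,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("customer_id", "string", "customer_key", "string"),
        ("amount", "double", "order_amount", "double"),
        ("order_ts", "string", "order_ts", "timestamp"),
    ],
)

# Sink: Redshift through a catalog JDBC connection ("redshift-dwh" is a
# placeholder); Glue stages the data in TempDir before loading it.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=mapped,
    catalog_connection="redshift-dwh",
    connection_options={"dbtable": "analytics.fact_orders", "database": "dwh"},
    redshift_tmp_dir=args["TempDir"],
)

job.commit()
```

Glue stages the write through the job's TempDir in S3 and issues a Redshift COPY under the hood, which keeps the lake-to-warehouse load path consistent with the S3 structuring responsibility above.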
Requirements
- BS in Computer Science or related field, or equivalent experience.
- 5+ years of data engineering/architecture experience.
- Strong SQL and Python skills (PySpark a plus).
- Experience building ETL pipelines (AWS Glue, Airflow, or similar).
- Knowledge of data modeling (star schema, slowly changing dimensions); a Type 2 SCD sketch follows this list.
- Ability to lead technical decisions in a small team.
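As a small illustration of the slowly-changing-dimension requirement above, the following self-contained PySpark sketch applies a Type 2 update: when a tracked attribute changes, the current dimension row is closed out and a new version is appended. All table contents and column names are invented for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

# Hypothetical current dimension: one active version per customer.
dim = spark.createDataFrame(
    [(1, "Acme", "SMB", "2023-01-01", None, True)],
    "customer_id INT, name STRING, segment STRING, "
    "valid_from STRING, valid_to STRING, is_current BOOLEAN",
)

# Hypothetical staged rows arriving from the data lake.
stg = spark.createDataFrame(
    [(1, "Acme", "Enterprise", "2024-06-01")],
    "customer_id INT, name STRING, segment STRING, load_date STRING",
)

# 1. Active rows whose tracked attribute (here: segment) has changed.
cur = dim.filter("is_current")
changed = (
    cur.join(stg, "customer_id")
    .filter(cur["segment"] != stg["segment"])
    .select(
        "customer_id",
        stg["name"].alias("name"),
        stg["segment"].alias("new_segment"),
        "load_date",
    )
)

# 2. Close out the superseded versions: stamp valid_to and clear the flag.
flags = changed.select("customer_id", "load_date")
closed = (
    dim.join(flags, "customer_id", "left")
    .withColumn(
        "valid_to",
        F.when(
            F.col("is_current") & F.col("load_date").isNotNull(),
            F.col("load_date"),
        ).otherwise(F.col("valid_to")),
    )
    .withColumn(
        "is_current",
        F.when(F.col("load_date").isNotNull(), F.lit(False))
        .otherwise(F.col("is_current")),
    )
    .drop("load_date")
)

# 3. Append the new current versions and rebuild the dimension.
new_rows = changed.select(
    "customer_id",
    "name",
    F.col("new_segment").alias("segment"),
    F.col("load_date").alias("valid_from"),
    F.lit(None).cast("string").alias("valid_to"),
    F.lit(True).alias("is_current"),
)
updated_dim = closed.unionByName(new_rows)
updated_dim.show()
```

In a Redshift warehouse this logic would usually land through a staging table and a MERGE rather than a DataFrame union, but the versioning rules are the same.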
Benefits
- Salary in USD
- Long-term position
- Flexible schedule (within US time zones)
- 100% Remote