Related skills
redshift, aws, sql, python, dbt

Description
- Build scalable, maintainable, high-performance data pipelines and workflows.
- Ingest, transform, and deliver high-volume data from MySQL and Kafka streams.
- Design and maintain performant Redshift data models; optimise SQL for analytics.
- Contribute to cloud migration and evolve the data architecture with new ideas.
- Collaborate with cross-functional, global teams to translate requirements into solutions.
- Embed data quality, reliability, observability, and security as core platform principles.
Requirements
- 4-6 years of professional experience in Data Engineering.
- Strong SQL expertise, including query optimisation and data modelling.
- Solid Python skills for data engineering and pipeline development.
- Hands-on experience with AWS services (Redshift, Lambda, Glue, S3) and Airflow.
- Familiarity with dbt for transformation and modelling.
- A collaborative mindset, with strong problem-solving, critical thinking, ownership, and initiative.
Benefits
- Competitive salaries, bonuses, equity, and recognition programs.
- Medical coverage with 24/7 assistance; generous vacation; hybrid work (3 days in office).
- Role-specific training, internal workshops, and a learning stipend.
- Company-wide events, team bonding, happy hours, and offsites.
- An equal opportunity employer committed to diversity.