Related skills
bigquery, redshift, aws, snowflake, sql

📋 Description
- Design, build, and maintain scalable data pipelines, ETL processes, and analytics infrastructure.
- Collaborate with cross-functional teams to understand data needs and deliver solutions for ML teams (training, deployment, feature stores).
- Optimize data storage, retrieval, processing, and queries for performance and cost-efficiency.
- Define and enforce data governance, metadata management, and data lineage standards.
- Ensure data integrity, security, and compliance with industry standards.
🎯 Requirements
- Master’s degree in Computer Science, Engineering, Statistics, or related field
- 3+ years of experience in data engineering, analytics engineering, or related role
- Proficiency in Python and SQL
- Experience with dbt
- Experience with cloud platforms (AWS, GCP, Azure) and data warehousing solutions (Snowflake, BigQuery, Redshift)
- Ability to communicate complex data concepts to both technical and non-technical stakeholders
🎁 Benefits
- Competitive salary and equity package
- Health insurance
- Transportation allowance
- Sport allowance
- Meal vouchers
- Private pension plan