Related skills
bigquery, redshift, aws, snowflake, sql

📋 Description
- Design, build, and maintain scalable data platforms, pipelines, and warehouses using SQL, Python, and Spark.
- Build efficient data solutions, optimize performance, and ensure seamless data integration.
- Design data pipelines and ETL processes for large-scale ingestion and transformation.
- Develop data platforms and warehousing to support analytics and BI.
- Collaborate with data scientists, analysts, and stakeholders to translate requirements into data solutions.
- Ensure data quality, governance, and compliance across systems.
🎯 Requirements
- 6+ years' experience as a Senior Data Engineer.
- Practical experience working in agile environments.
- Hands-on with SQL, Python, and Spark for large-scale data processing.
- Experience with Databricks.
- Strong data warehousing expertise (Snowflake, Redshift, BigQuery).
- Experience with cloud data solutions (AWS, Azure, GCP) and data lake architectures.
- CI/CD for data pipelines and Infrastructure as Code (IaC).
- Git and code review; collaborative software practices.
🎁 Benefits
- Annual company bonus with profit sharing.
- Remote work and a flexible schedule.
- Paid time off: 25 days plus holidays.
- Remote working allowance of up to €1,250 every 2 years.
- Training and development allowance of up to €1,000.
- Healthcare, pension, and insurance benefits.