Related skills
redshift, aws, postgresql, python, databricks

Description
- Design, develop, and maintain data pipelines ingesting data from multiple sources.
- Create and maintain data models for reporting and analytics.
- Implement ETL processes to transform raw data into insights.
- Monitor data quality and enforce validation and governance practices.
- Manage and optimize data storage, processing, and distribution.
- Collaborate with data scientists, analysts, and cross-functional teams.
Requirements
- Bachelor's or Master's degree in Computer Science or Data Science.
- 3+ years of experience in data engineering.
- Proficiency in Python for building data pipelines.
- Experience with Apache Airflow for workflow orchestration.
- Experience with Spark and Databricks for big-data processing.
- Familiarity with AWS services (S3, EC2, EMR).
- Strong skills in SQL and NoSQL databases (PostgreSQL, Redshift).
- Experience with Kafka for real-time data streaming.
- Experience with Tableau, Power BI, or Looker for visualization.
Benefits
- Work-life balance
- Annual bonus based on personal performance
- Health insurance, pension, and Multisport card
- Full annual performance assessment
- Modern equipment
- Employee referral program
- Additional paid days off
- A collaborative, supportive team