Related skills
BigQuery, Redshift, Python, Airflow, PySpark

Description
- Design, build, and scale real-time and batch data pipelines for retail media.
- Use Python, Spark, and streaming frameworks to process data.
- Contribute to MLOps: scalable infra, model deployment, monitoring.
- Collaborate with data scientists, analysts, lead engineers, and PMs.
- Focus on reliability, scalability, and performance of data/ML infra.
- Mentor junior engineers; advocate data engineering and MLOps best practices.
Requirements
- 7–9 years of experience in data engineering.
- SQL, PySpark, Hive.
- Argo Workflows and Airflow.
- Redshift or BigQuery; GCP or Azure.
- Python, FastAPI; Git; unit testing.
- Reliability: logging, monitoring, alerts; mentoring.
Benefits
- Flexible working hours.
- Birthday off.
- Investment in cutting-edge tech.
- Inclusive, diverse culture with active networks.