Related skills
linux, sql, python, hadoop, airflow

Description
- Build data lakes, warehouses, ETL/ELT pipelines, APIs, and analytics.
- Create scalable pipelines with Kafka, Flink, or Spark Streaming.
- Automate pipelines end-to-end using Apache Airflow.
- Develop real-time data processing pipelines.
- Implement data governance, metadata management, and data quality standards.
- Ensure compliance with security and regulatory requirements.
Requirements
- 2–6 years of experience in Data Engineering.
- Strong experience with Python, PySpark, and SQL.
- Proficiency with Linux environments and shell scripting.
- Hands-on with Spark, Hive, Hadoop, Airflow, Oozie, HBase, or MapReduce.
- Data pipeline development and data flow management.
- Familiarity with Git and process automation.
Benefits
- Flexible working hours.
- Birthday off and personal time.
- Comprehensive rewards package.
- Exposure to cutting-edge technology and platforms.
- Small-business feel with autonomy and growth.
- Diversity and inclusion networks at dunnhumby.