Added: 2 days ago
Type: Full time
Salary: Not provided

Related skills

Linux, SQL, Python, Hadoop, Airflow

πŸ“‹ Description

  • Build data lakes, warehouses, ETL/ELT pipelines, APIs, and analytics.
  • Create scalable pipelines with Kafka, Flink, or Spark Streaming.
  • Automate pipelines end-to-end using Apache Airflow (see the DAG sketch after this list).
  • Develop real-time data processing pipelines (a streaming sketch also follows this list).
  • Implement data governance, metadata, and data quality standards.
  • Ensure security, compliance, and regulatory requirements.
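
To make the Airflow responsibility concrete, here is a minimal sketch of an end-to-end batch ETL DAG. The DAG id, schedule, and the extract/transform/load helpers are hypothetical placeholders, not anything specified in this posting, and it assumes Airflow 2.4+ (which accepts the `schedule` argument).

```python
# Minimal sketch of end-to-end pipeline automation in Apache Airflow.
# All names below (dag_id, task ids, helper functions) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Hypothetical: pull one day of source records (ds is the logical date).
    print("extracting for", context["ds"])


def transform(**context):
    # Hypothetical: clean and reshape the extracted records.
    print("transforming for", context["ds"])


def load(**context):
    # Hypothetical: write the transformed records to the warehouse.
    print("loading for", context["ds"])


with DAG(
    dag_id="daily_etl",              # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load.
    t_extract >> t_transform >> t_load
```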
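
Likewise, a minimal real-time pipeline sketch with Spark Structured Streaming reading from Kafka, one of the stacks the role names. The broker address, topic, sink, and checkpoint path are hypothetical placeholders, and running it requires the spark-sql-kafka connector package on the Spark classpath.

```python
# Minimal sketch of a streaming pipeline: Kafka source -> cast payload -> sink.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
    .option("subscribe", "events")                        # hypothetical topic
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string
# before any downstream parsing or aggregation.
parsed = events.select(col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("console")  # console sink, just for the sketch
    .option("checkpointLocation", "/tmp/events-ckpt")     # hypothetical path
    .outputMode("append")
    .start()
)
query.awaitTermination()
```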

🎯 Requirements

  • 2–6 years of experience in Data Engineering.
  • Strong experience with Python, PySpark, and SQL (a short combined example follows this list).
  • Comfort working in Linux environments and with shell scripting.
  • Hands-on experience with Spark, Hive, Hadoop, Airflow, Oozie, HBase, or MapReduce.
  • Experience with data pipeline development and data flow management.
  • Familiarity with Git and process automation.
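
As a rough illustration of the Python/PySpark/SQL combination the requirements call for: load a file into a DataFrame, register it as a view, and query it with Spark SQL. The file path and column names are hypothetical.

```python
# Minimal sketch: DataFrame ingest plus a Spark SQL aggregation.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("batch-etl").getOrCreate()

# Hypothetical input file and schema.
orders = spark.read.csv("/data/orders.csv", header=True, inferSchema=True)
orders.createOrReplaceTempView("orders")

# The same aggregation could be written with the DataFrame API;
# Spark plans and executes both identically.
daily_totals = spark.sql(
    """
    SELECT order_date, SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date
    ORDER BY order_date
    """
)
daily_totals.show()
```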

🎁 Benefits

  • Flexible working hours.
  • Birthday off and personal time.
  • Comprehensive rewards package.
  • Exposure to cutting-edge technology and platforms.
  • Small-business feel with autonomy and growth.
  • Diversity and inclusion networks at dunnhumby.