This job is no longer available

The job listing you are looking for has expired.

Senior Data Engineer (Big Data Platform)

Added: 19 hours ago
Type: Full time
Salary: Not specified


Related skills

Java, Kubernetes, Hadoop, Spark, Spring Boot

At Lalamove, we believe in the power of community. Millions of drivers and customers use our technology every day to connect with one another and move things that matter. Delivery is what we do best, and we ensure it is always fast and simple. Since 2013, we have tackled the logistics industry head-on to find the most innovative solutions for the world’s delivery needs. We are full steam ahead to make Lalamove synonymous with delivery, on a mission to impact as many local communities as we can. We have massively scaled our efforts across Asia and now have our sights on taking our best-in-class technology to the rest of the world. And we are looking for talented professionals to join us on this journey!

As a Data Platform Engineer at Lalamove, you will join the growing Data Platform team to develop and enhance our data platform. The data platform is the centralised portal that enables engineers and analysts across the company to ingest, process, orchestrate, analyse, and visualise the company’s data. As part of the Data Engineering function, the team designs, develops, maintains, and scales the data platform, including the data infrastructure and data services, which provide the foundation for downstream use cases such as data warehousing, reporting, machine learning, analytics, and data products.

What you’ll do:

  • Lead the architecture design, technology selection, deployment, development, management, monitoring, and performance tuning of large-scale data platforms based on the Hadoop ecosystem. Ensure efficient and stable cluster operations and deliver robust architectures for data storage, query engines, real-time computing, and metadata management.
  • Develop and maintain core system components, mentor engineers, and continuously optimize system performance and stability.
  • Collaborate across teams and departments to analyze and resolve operational and data-related issues within the big data platform.
  • Build industry-leading data systems that support fast-growing data businesses and large-scale production workloads.

What you’ll need:

  • Bachelor’s degree or above with 3–8 years of experience in big data platform development or distributed system engineering.
  • Solid Java foundation; familiar with SOA architecture and mainstream frameworks such as Spring Boot and Spring Cloud.
  • Proficient with mainstream big data computing engines, including:
      • Flink: Flink on Kubernetes, state management, Checkpoint/Savepoint, exactly-once semantics, unified batch/stream processing, Flink SQL/Table API, CEP, runtime internals, and Flink CDC for real-time incremental change data capture.
      • Spark: Spark on Kubernetes, Spark SQL/Streaming, Adaptive Query Execution (AQE), shuffle optimization, resource isolation, and dynamic allocation.
      • Hive on Tez: Tez DAG optimization, container reuse, auto-parallelism tuning, and Hive MR-to-Tez migration performance optimization.

  • Experience with change data capture (CDC) pipelines, including Alibaba Canal and Flink CDC, covering MySQL/PostgreSQL binlog parsing, schema evolution handling, watermarking, consistency guarantees, and integration with data lakes or downstream streaming systems.
  • Strong familiarity with OLAP engines and the big data ecosystem, including Doris, ClickHouse, Druid, Kylin, Hive, HBase, Kafka, Elasticsearch, and MapReduce.
  • Experience with lakehouse table formats such as Apache Iceberg and Apache Paimon, including metadata management, table optimization, partition pruning, compaction, and file layout tuning.
  • Strong Mandarin communication skills for daily business communication and technical documentation.
Additional Information

To all candidates: Lalamove respects your privacy and is committed to protecting your personal data.

This Notice explains how we will use your personal data, your privacy rights, and the protection you have under the law when you apply to join us. Please take the time to read and understand it. Candidate Privacy Notice: https://www.lalamove.com/en-hk/candidate-privacy-notice
