Related skills
hadoop, airflow, spark, clickhouse, flink

Description
- Platform Core: Design and operate large-scale data systems
- Own the big data compute and storage infrastructure (MaxCompute/ODPS, Hologres, Spark)
- Build multi-site task orchestration that selects execution engines and enforces policy
- Drive reliability and performance improvements across batch and real-time pipelines
- AI Integration: Develop MCP tool interfaces to interact with platform APIs
- Build scheduling and cost-optimization agents to auto-tune resources
Requirements
- 5+ years of experience building large-scale data platforms (Hadoop/Spark/Flink or equivalent)
- Deep expertise in distributed storage and compute systems (MaxCompute, Hologres, ClickHouse, Hive)
- Strong software engineering skills in Java, Scala, or Python; API-first design
- Hands-on experience with task scheduling systems (Airflow, DolphinScheduler, or in-house equivalents)
- Solid understanding of multi-cloud architectures and cost governance
- Familiarity with LLM integration patterns: tool calling, RAG pipelines, context management
Benefits
- Competitive total compensation package
- Learning and development programs and education subsidies
- Team-building programs and company events
- Wellness and meal allowances
- Comprehensive healthcare schemes for employees and dependants