Type: Full time
Related skills: AWS, Kubernetes, EKS, Kafka, Spark

Description
- Design and implement scalable Lakehouse storage with support for schema evolution
- Build real-time and batch ingestion pipelines with Kafka and Spark
- Deploy and scale data workloads with Kubernetes (EKS)
- Develop ETL workflows and optimize for memory and shuffle
- Manage end-to-end data lifecycles using orchestration tools
- Write high-performance queries with optimization, compaction, and data skipping
Requirements
- 5+ years in Data Engineering; 2+ years in Lakehouse or modern Data Lake environments
- Hands-on experience with the AWS ecosystem and async/queue-based processing in a production Lakehouse
- Expert in workflow management, data transformation, storage and exchange at scale
- Proven experience with event-driven processing, large-scale data ingestion, and performance tuning
- Experience with Kubernetes and infrastructure as code
Benefits
- Medical, dental, vision, basic life insurance
- PTO and company-paid holidays
- Retirement programs
- 1% charitable giving program