Related skills
AWS, Python, Scala, Apache Spark, Delta Lake

Description
- Own migration of large data workloads from GCP to AWS.
- Design and build scalable batch and streaming pipelines with Spark and Delta Lake.
- Set standards for data quality, observability, and pipeline patterns.
- Own reliability and cost-efficiency of production ETL jobs on AWS (EMR, Glue).
- Shape the technical direction for data infrastructure and migration strategy.
- Collaborate with teams across the organization to build scalable data foundations.
Requirements
- Strong hands-on experience with Scala and Python; able to switch between them as needed.
- Deep Apache Spark experience for streaming and batch processing.
- Proven track record running production ETL on AWS (EMR, Glue).
- Experience designing data architectures with Delta Lake and the medallion pattern.
- 8+ years of data engineering experience owning critical infrastructure.
- Nice-to-have: familiarity with GCP data services and migrating workloads to AWS.
Benefits
- Competitive compensation and benefits package.
- Professional development opportunities.
- Flexible work arrangements and remote-friendly policies.
- Collaborative team culture and growth opportunities.
- Chance to shape data infrastructure at scale.