Related skills
azure, aws, python, gcp, scala

Description
- Work with clients on engagements to tackle big data challenges using Databricks.
- Deliver data engineering, data science, and cloud projects, integrating with client teams.
- Deliver billable data engineering and analytics projects with strong customer service.
- Design and build reference architectures and productionize customer use cases.
- Guide strategic customers through end-to-end big data and AI deployments.
- Provide escalated support and coordinate with Databricks teams for successful delivery.
๐ฏ Requirements
- 9+ years of experience in data engineering, data platforms, and analytics
- Comfortable writing code in Python or Scala
- Experience with two of the cloud ecosystems (AWS, Azure, GCP), with deep expertise in one
- Deep experience with Apache Spark and knowledge of Spark runtime internals
- Familiarity with CI/CD for production deployments
- Travel to customers up to 20% of the time
Benefits
- Pay range transparency with region-based ranges (see us-pay mapping)
- Comprehensive benefits portal with region-specific details (mybenefitsnow)
- Global company with offices worldwide and focus on data and AI
- Commitment to diversity and inclusion and equal opportunity
- Learning and growth opportunities in a customer-facing role
- Collaborative, cross-functional team culture