Related skills
Azure, AWS, Python, GCP, Scala
Description
- Work on client engagements using the Databricks platform.
- Deliver data engineering, data science and cloud projects.
- Integrate with client systems and train teams.
- Design reference architectures and productionize use cases.
- Guide customers through end-to-end big data deployments.
- Provide escalated support for operational issues.
Requirements
- 6+ years of data engineering and analytics experience.
- Strong grasp of data warehousing concepts, architecture, and migrations.
- Coding proficiency in Python, PySpark, or Scala.
- Cloud experience across AWS, Azure, or GCP (at least one).
- Deep knowledge of Apache Spark and its runtime internals.
- Experience with CI/CD for production deployments.
- Familiarity with MLOps.
- Willingness to travel 10-20%.
Benefits
- Region-specific benefits described on the linked page.
- Commitment to diversity and inclusion.
- Compliance information provided.