Related skills: terraform, sql, python, databricks, kotlin

Description
- Design and operate ML infrastructure, including workspaces, clusters, jobs, and workflows
- Productionize ML workloads using Spark, Delta Lake, MLflow, and Databricks Workflows
- Teach data scientists to use our ML platform, taking our most critical models from notebook to production
- Implement Unity Catalog for data governance, lineage, access control, and secure multi-tenant usage
- Build CI/CD pipelines for ML using Terraform and Git-based workflows (e.g., GitHub Actions)
- Optimize performance, reliability, and cost across training and inference workloads
Requirements
- 8+ years of experience building production ML or data platforms
- A degree (preferably graduate level) in Computer Science, Engineering, Statistics, or a related technical field
- Strong hands-on expertise with Databricks, Spark, Delta Lake, and MLflow
- Proficiency in Python, SQL, and distributed systems concepts
- Experience with cloud platforms and infrastructure-as-code
- Solid understanding of MLOps best practices: CI/CD, monitoring, reproducibility, and security
Benefits
- Equity eligibility
- Hybrid work: 3 days in SF office + remote flexibility
- Competitive benefits package
- Opportunity to work on a global ML platform with mission-driven impact