Related skills
grpc, golang, python, kubernetes, distributed systems

📋 Description
- Architect distributed AI infra services; orchestrate LLM inference.
- Design high-scale, multi-tenant inference cloud solutions.
- Define SLOs and observability for high-scale services.
- Collaborate with Product, TPMs, and Eng Mgmt on roadmaps.
- Advance architecture for fleet optimization and AI-native infra.
🎯 Requirements
- Distributed systems mastery: cloud services, messaging, databases, IaC, observability, security.
- Cloud networking: VPCs, load balancers; Kubernetes; block/object/NFS storage.
- Gen AI/LLM hosting and inference workflows.
- Operational track record: high-availability services across regions.
- Open source experience and ownership.
- Proficiency in Go or Python; gRPC for service-to-service communication.
🎁 Benefits
- Innovate with purpose and own outcomes.
- Career development: conferences, training, LinkedIn Learning.
- Well-being: EAP, meetups, flexible time off.
- Competitive salary, bonus, equity, ESPP.
- Equal opportunity employer; inclusive culture.