Related skills
AI, APIs, Observability, SDKs, Routing

Description
- Design and operate scalable AI infra for LLM inference, prompts, and eval pipelines
- Build self-service tools, SDKs, and APIs to accelerate production adoption by ~30%
- Instrument AI/ML workloads with standardized logging, tracing, and metrics
- Implement intelligent routing, caching, and provider optimisation via the LLM gateway
- Drive adoption of shared platform services across new AI features
- Champion developer experience with documentation and responsive support
Requirements
- Built and deployed production AI infra with reliability and observability
- Delivered self-service tools/APIs enabling teams to accelerate AI/ML development
- Implemented evaluation frameworks, A/B testing, or monitoring for model performance
- Led initiatives to reduce AI compute costs via routing or caching
- Migrated teams to shared platform services, driving adoption
- Prioritised and improved developer experience through docs, support, and workflow enhancements
Benefits
- £5,000 training and conference budget
- 33 days holiday (25 days + 8 bank holidays)
- Pension scheme via Penfold
- Mental health support via Spectrum.life
- Private healthcare via AXA
- Cycle to Work Scheme