This job is no longer available

The job listing you are looking for has expired.
Please browse our latest remote jobs.


Senior SDET - Generative AI QA

Added: 14 days ago
Location:
Type: Full time
Salary: Not specified

About the Company:

Netomi is the leading agentic AI platform for enterprise customer experience. We work with the largest global brands like Delta Airlines, MetLife, MGM, United, and others to enable agentic automation at scale across the entire customer journey. Our no-code platform delivers the fastest time to market, lowest total cost of ownership, and simple, scalable management of AI agents for any CX use case. Backed by WndrCo, Y Combinator, and Index Ventures, we help enterprises drive efficiency, lower costs, and deliver higher quality customer experiences.

Want to be part of the AI revolution and transform how the world’s largest global brands do business? Join us!

We’re seeking a Senior SDET with expertise in Generative AI testing to lead the development of cutting-edge automation frameworks for AI/ML-powered applications. You’ll ensure the reliability, safety, and scalability of LLM-driven products while advancing traditional test automation for cloud-native systems.

Responsibilities

  • AI-Aware Test Automation - Design and maintain Python/Java-based automation frameworks (Selenium, Playwright, TestNG/JUnit) for web, API, and backend services.
  • Extend frameworks to test LLM integrations (OpenAI, HuggingFace, RAG pipelines) with prompt validation, hallucination checks, and response consistency tests; a sketch of one such check follows this list.
  • Implement model benchmarking (latency, accuracy, bias/drift detection) for generative AI features.
  • Quality Infrastructure - Integrate tests into CI/CD pipelines (Jenkins, GitHub Actions) with cloud workflows (AWS/GCP).
  • Optimize performance testing (JMeter/Locust) for AI endpoints handling high-throughput inference.
  • Debug flaky tests in non-deterministic AI systems.
  • Leadership & Innovation - Mentor junior engineers on AI testing best practices.
  • Research tools like LangChain, synthetic data generators, or adversarial testing techniques.
  • Advocate for ML-specific quality metrics beyond traditional pass/fail.
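
For illustration, a response-consistency check of the kind described above might look like the minimal pytest sketch below. The endpoint URL, prompts, and expected phrases are hypothetical stand-ins for the real service under test, not details taken from this role.

    import re

    import pytest
    import requests

    RUNS = 3  # how many times each prompt is replayed; an arbitrary choice for the sketch


    def ask_agent(prompt: str) -> str:
        # Hypothetical endpoint for the LLM-backed service under test;
        # replace with the real client exposed by the automation framework.
        response = requests.post(
            "http://localhost:8080/agent/reply",
            json={"message": prompt},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["reply"]


    def normalize(text: str) -> str:
        # Collapse whitespace and case so trivial formatting drift does not fail the check.
        return re.sub(r"\s+", " ", text).strip().lower()


    @pytest.mark.parametrize("prompt, required_phrases", [
        ("What is your refund window?", ["30 days"]),
        ("How do I reset my password?", ["reset link"]),
    ])
    def test_response_consistency(prompt, required_phrases):
        # Replay the same prompt several times; exact-match assertions are too
        # brittle for non-deterministic output, so every run must instead
        # contain the grounded phrases expected for that prompt.
        answers = [normalize(ask_agent(prompt)) for _ in range(RUNS)]
        for answer in answers:
            for phrase in required_phrases:
                assert phrase in answer, f"missing grounded fact {phrase!r} in {answer!r}"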
Requirements

  • 7–9 years in QA automation with strong Python/Java proficiency.
  • Hands-on experience with Selenium, Playwright, REST Assured, and CI/CD tools (Jenkins, Docker).
  • Solid understanding of SQL/NoSQL databases and cloud platforms (AWS/GCP).
  • Exposure to performance testing (JMeter, K6) and scalable test frameworks.
  • Experience with LLM testing (prompt engineering, output validation, rubric-based grading); a rubric-grading sketch follows this list.
  • Familiarity with OpenAI APIs, HuggingFace, or LangChain.
  • Knowledge of synthetic test data generation for edge-case scenarios.
  • Autonomy - Thrive in fast-paced, AI-driven environments with minimal supervision.
  • Analytical Mindset - Debug complex failures in probabilistic AI systems.
  • Communication - Explain technical trade-offs to non-technical stakeholders.
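
For illustration, rubric-based grading is often implemented as an LLM-as-judge that scores each answer against a fixed rubric and returns structured JSON. The minimal sketch below assumes the OpenAI Python SDK, a placeholder judge model, and a made-up rubric and passing bar; none of these are prescribed by this role.

    import json

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    RUBRIC = (
        "Score the candidate answer from 1 to 5 on each criterion and return JSON "
        'like {"faithfulness": 5, "completeness": 5, "tone": 5}.\n'
        "- faithfulness: no claims beyond the provided context\n"
        "- completeness: addresses every part of the question\n"
        "- tone: polite, on-brand customer-service tone"
    )


    def grade(question: str, context: str, answer: str) -> dict:
        """Ask a judge model to score an answer against the rubric and parse the JSON."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder judge model
            temperature=0,        # keep grading as repeatable as possible
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": RUBRIC},
                {
                    "role": "user",
                    "content": f"Question: {question}\nContext: {context}\nAnswer: {answer}",
                },
            ],
        )
        return json.loads(response.choices[0].message.content)


    def test_refund_answer_meets_rubric():
        scores = grade(
            question="What is the refund window?",
            context="Refunds are accepted within 30 days of purchase.",
            answer="You can request a refund within 30 days of purchase.",
        )
        # An arbitrary passing bar for the sketch: every criterion scores 4 or higher.
        assert all(score >= 4 for score in scores.values()), scores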
Additional Information

Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, or other protected characteristics.
