This job is no longer available

The job listing you are looking has expired.
Please browse our latest remote jobs.

See open jobs →
← Back to all jobs

AI QA Trainer – LLM Evaluation

Added
21 hours ago
Location
Type
Full time
Salary
Not Specified

Use AI to Automatically Apply!

Let your AI Job Copilot auto-fill application questions
Auto-apply to relevant jobs from 300,000 companies

Auto-apply with JobCopilot Apply manually instead
Save job

Are you an AI QA expert eager to shape the future of AI? Large-scale language models are evolving from clever chatbots into enterprise-grade platforms. With rigorous evaluation data, tomorrow’s AI can democratize world-class education, keep pace with cutting-edge research, and streamline workflows for teams everywhere. That quality begins with you—we need your expertise to harden model reasoning and reliability.

We’re looking for AI QA trainers who live and breathe model evaluation, LLM safety, prompt robustness, data quality assurance, multilingual and domain-specific testing, grounding verification, and compliance/readiness checks. You’ll challenge advanced language models on tasks like hallucination detection, factual consistency, prompt-injection and jailbreak resistance, bias/fairness audits, chain-of-reasoning reliability, tool-use correctness, retrieval-augmentation fidelity, and end-to-end workflow validation—documenting every failure mode so we can raise the bar.

On a typical day, you will converse with the model on real-world scenarios and evaluation prompts, verify factual accuracy and logical soundness, design and run test plans and regression suites, build clear rubrics and pass/fail criteria, capture reproducible error traces with root-cause hypotheses, and suggest improvements to prompt engineering, guardrails, and evaluation metrics (e.g., precision/recall, faithfulness, toxicity, and latency SLOs). You’ll also partner on adversarial red-teaming, automation (Python/SQL), and dashboarding to track quality deltas over time.

A bachelor’s, master’s, or PhD in computer science, data science, computational linguistics, statistics, or a related field is ideal; shipped QA for ML/AI systems, safety/red-team experience, test automation frameworks (e.g., PyTest), and hands-on work with LLM eval tooling (e.g., OpenAI Evals, RAG evaluators, W&B) signal fit. Skills that stand out include: evaluation rubric design, adversarial testing/red-teaming, regression testing at scale, bias/fairness auditing, grounding verification, prompt and system-prompt engineering, test automation (Python/SQL), and high-signal bug reporting. Clear, metacognitive communication—“showing your work”—is essential.

Ready to turn your QA expertise into the quality backbone for tomorrow’s AI? Apply today and start teaching the model that will teach the world.

We offer a pay range of $6-to- $65 per hour, with the exact rate determined after evaluating your experience, expertise, and geographic location. Final offer amounts may vary from the pay range listed above. As a contractor you’ll supply a secure computer and high-speed internet; company-sponsored benefits such as health insurance and PTO do not apply.Employment type: ContractWorkplace type: RemoteSeniority level: Mid-Senior Level

Use AI to Automatically Apply!

Let your AI Job Copilot auto-fill application questions
Auto-apply to relevant jobs from 300,000 companies

Auto-apply with JobCopilot Apply manually instead
Share job

Meet JobCopilot: Your Personal AI Job Hunter

Automatically Apply to Remote All Other Jobs. Just set your preferences and Job Copilot will do the rest—finding, filtering, and applying while you focus on what matters.

Related All Other Jobs

See more All Other jobs →