Evaluator-Optimizer

Recipe Overview

Producing a high-quality answer in a single pass is hard. The evaluator-optimizer pattern addresses this with a feedback loop: one LLM generates a response while a second LLM evaluates it and suggests improvements. Anthropic describes this as a loop of generation and critique. For example, a translation agent might produce a draft that a second agent then checks for mistakes; the generator revises based on that feedback, and each cycle refines the output further. The pattern is especially valuable for high-stakes tasks where quality matters more than speed, such as legal document review or medical diagnosis assistance.
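The loop described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the `generate` and `evaluate` callables are hypothetical stand-ins for two separate LLM calls (a generator prompt and a critic prompt), here replaced with simple stubs so the control flow is visible.

```python
def evaluator_optimizer(generate, evaluate, task, max_rounds=3):
    """Alternate generation and critique until the evaluator accepts,
    or max_rounds is exhausted (best-effort fallback)."""
    feedback = None
    draft = None
    for _ in range(max_rounds):
        draft = generate(task, feedback)            # generator LLM call
        accepted, feedback = evaluate(task, draft)  # evaluator LLM call
        if accepted:
            return draft
    return draft  # best effort after max_rounds

# Stub "LLMs" for illustration: the evaluator rejects drafts
# that do not open with a greeting, and the generator only adds
# one once it receives that critique as feedback.
def generate(task, feedback):
    return "Hello, world" if feedback else "world"

def evaluate(task, draft):
    if draft.startswith("Hello"):
        return True, None
    return False, "Start with a greeting."

result = evaluator_optimizer(generate, evaluate, "translate greeting")
print(result)  # the accepted second draft
```

In a real system, `evaluate` would prompt a model to return a structured verdict (e.g., a pass/fail flag plus critique text), and `max_rounds` caps cost and latency when the evaluator never fully approves.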

Why This Recipe Works

The generate-critique loop improves output quality through iterative feedback: the evaluator catches mistakes and omissions the generator missed, and each refinement cycle folds that critique back into the next draft.

Implementation Tips

Best For:

Quality Assurance, Content Creators

Key Success Factor:

The iterative feedback loop between generator and evaluator, which refines output quality with each cycle.

More AI Agent Recipes

Discover other proven implementation patterns

Prompt Chaining (Developers, Data Scientists)

Breaking a complex multi-step task into sequential prompts simplifies the problem for the model.

Routing (AI Engineers, Product Managers)

Tasks often vary by type; routing classifies each input and directs it to a specialized handler.

Parallelization (Software Engineers, Operations Teams)

When different parts of a task can run simultaneously, parallelization speeds up processing.

Orchestrator-Workers (Engineering Managers, System Architects)

Complex tasks with unpredictable subtasks require dynamic breakdown.

Autonomous Agent (Researchers, System Administrators)

Some tasks have no fixed steps and require continuous control.

Reflection Pattern (Analysts, Researchers)

LLMs may make logical mistakes without self-review.