Evaluator-Optimizer
Recipe Overview
Ensuring answer quality in a single pass can be hard. The evaluator-optimizer pattern addresses this by adding a feedback loop: one LLM generates a response while a second evaluates it and suggests improvements. Anthropic describes this loop of generation and critique in its guidance on agent design. For example, a translation agent might produce a draft that a second agent then checks for mistakes. Each cycle catches errors and refines the output. The pattern is especially valuable for high-stakes tasks where quality matters more than speed, such as legal document review or medical diagnosis assistance.
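The generate-evaluate loop described above can be sketched as follows. This is a minimal illustration rather than a real implementation: `generate` and `evaluate` are hypothetical stubs standing in for two separate LLM calls, and the acceptance rule is a toy placeholder.

```python
def generate(task: str, feedback: str = "") -> str:
    # Placeholder generator: a real implementation would call an LLM here,
    # folding any evaluator feedback into the prompt for the next draft.
    draft = f"Draft answer for: {task}"
    if feedback:
        draft += f" (revised per: {feedback})"
    return draft


def evaluate(task: str, draft: str) -> tuple[bool, str]:
    # Placeholder evaluator: a real implementation would ask a second LLM
    # to critique the draft and return (accepted, feedback).
    accepted = "revised" in draft  # toy acceptance rule for this sketch
    feedback = "" if accepted else "Clarify the key claim and fix any errors."
    return accepted, feedback


def evaluator_optimizer(task: str, max_rounds: int = 3) -> str:
    # Generate, evaluate, and loop until the evaluator accepts the draft
    # or the round budget is exhausted.
    feedback = ""
    draft = ""
    for _ in range(max_rounds):
        draft = generate(task, feedback)
        accepted, feedback = evaluate(task, draft)
        if accepted:
            break
    return draft
```

Note the `max_rounds` cap: because an evaluator may never fully accept a draft, bounding the number of refinement cycles keeps the loop from running indefinitely.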
Why This Recipe Works
Improves output quality through iterative feedback and refinement cycles
Implementation Tips
Best For:
Quality Assurance, Content Creators
Key Success Factor:
Improves output quality through iterative feedback and refinement cycles
More AI Agent Recipes
Discover other proven implementation patterns
Prompt Chaining
When faced with a complex multi-step task, breaking it into sequential prompts can simplify the problem for the model.
Parallelization
When different parts of a task can be done simultaneously, parallelization speeds up processing.
Orchestrator-Workers
Complex tasks with unpredictable subtasks require dynamic breakdown.
Autonomous Agent
Some tasks have no fixed steps and require continuous control.
Reflection Pattern
LLMs may make logical mistakes without self-review.