Reflection Pattern
Recipe Overview
Without self-review, LLMs can make logical mistakes that go unchecked in their chain-of-thought. A reflection agent addresses this by periodically asking the model to examine its own reasoning: the agent instructs the LLM to articulate its thought process and then verify or correct it. For example, after generating a solution, the agent might ask 'Is this reasoning sound?' or 'What could go wrong with this approach?' This metacognitive step catches errors and improves answer reliability, especially on complex reasoning tasks where the model's initial intuition may be flawed. A minimal sketch of the loop follows.
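The sketch below is one way to wire up this generate-critique-revise loop; it is illustrative, not a prescribed implementation. The `llm()` helper is a hypothetical placeholder for whatever chat-completion client you use, and the 'LOOKS GOOD' stopping token is an assumed convention, not part of the pattern itself.

```python
def llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., your chat-completion client)."""
    raise NotImplementedError

def solve_with_reflection(task: str, max_rounds: int = 2) -> str:
    # Step 1: ask the model to solve the task and show its reasoning.
    answer = llm(
        f"Solve the following task. Show your reasoning step by step.\n\nTask: {task}"
    )

    for _ in range(max_rounds):
        # Step 2: ask the model to critique its own reasoning.
        critique = llm(
            "Review the reasoning below. Is this reasoning sound? "
            "What could go wrong with this approach? "
            "Reply 'LOOKS GOOD' if you find no problems.\n\n"
            f"Task: {task}\n\nReasoning:\n{answer}"
        )
        if "LOOKS GOOD" in critique:
            break  # The model found no flaws; stop reflecting.

        # Step 3: ask the model to revise its answer using the critique.
        answer = llm(
            "Revise your answer to the task using this critique.\n\n"
            f"Task: {task}\n\nPrevious answer:\n{answer}\n\nCritique:\n{critique}"
        )
    return answer
```

Capping the loop with `max_rounds` keeps cost bounded and avoids the model endlessly second-guessing itself; one or two reflection rounds typically capture most of the benefit.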
Why This Recipe Works
Reduces reasoning errors through self-examination and verification
Best For:
Analysts, Researchers
Key Success Factor:
Reduces reasoning errors through self-examination and verification
More AI Agent Recipes
Discover other proven implementation patterns
Prompt Chaining
When faced with a complex multi-step task, breaking it into sequential prompts can simplify the problem for the model.
Parallelization
When different parts of a task can be done simultaneously, parallelization speeds up processing.
Orchestrator-Workers
Complex tasks with unpredictable subtasks require dynamic breakdown.
Evaluator-Optimizer
Ensuring answer quality can be hard in one pass.
Autonomous Agent
Some tasks have no fixed steps and require continuous control.