Enable agents to evaluate their own responses and iteratively improve quality through self-reflection.
Quick Start
Simple Enable
Enable reflection with defaults:

```python
from praisonaiagents import Agent

agent = Agent(
    name="Reflective Agent",
    instructions="You self-reflect on responses",
    reflection=True
)
```
With Configuration
Configure reflection behavior:

```python
from praisonaiagents import Agent
from praisonaiagents.config import ReflectionConfig

agent = Agent(
    name="Reflective Agent",
    instructions="You self-reflect on responses",
    reflection=ReflectionConfig(
        min_iterations=1,
        max_iterations=3,
        llm="gpt-4o",
        prompt="Evaluate accuracy and completeness..."
    )
)
```
Configuration Options
```python
from praisonaiagents.config import ReflectionConfig

config = ReflectionConfig(
    # Iteration limits
    min_iterations=1,
    max_iterations=3,
    # Reflection LLM (if different from main)
    llm=None,
    # Custom reflection prompt
    prompt=None
)
```
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `min_iterations` | `int` | `1` | Minimum reflection iterations |
| `max_iterations` | `int` | `3` | Maximum reflection iterations |
| `llm` | `str \| None` | `None` | Model for reflection (defaults to agent's model) |
| `prompt` | `str \| None` | `None` | Custom prompt for evaluation |
Common Patterns
Pattern 1: High-Quality Responses
```python
from praisonaiagents import Agent
from praisonaiagents.config import ReflectionConfig

agent = Agent(
    name="Quality Agent",
    instructions="Produce high-quality analysis",
    reflection=ReflectionConfig(
        min_iterations=2,
        max_iterations=5,
        prompt="Check for: accuracy, completeness, clarity, actionability"
    )
)
```
Pattern 2: Different Model for Reflection
```python
from praisonaiagents import Agent
from praisonaiagents.config import ReflectionConfig

agent = Agent(
    name="Dual Model Agent",
    instructions="Use GPT-4o to review responses",
    llm="gpt-4o-mini",  # Main model
    reflection=ReflectionConfig(
        llm="gpt-4o",  # Better model for evaluation
        max_iterations=2
    )
)
```
Best Practices
Set Reasonable Iteration Limits
Balance quality against response time; 2-3 iterations is usually sufficient.
Use Custom Prompts for Specific Domains
Customize the reflection prompt to evaluate domain-specific criteria.
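For example, a code-review agent could direct reflection at domain criteria. The agent name, instructions, and prompt wording below are illustrative, not prescribed by the library:

```python
from praisonaiagents import Agent
from praisonaiagents.config import ReflectionConfig

# Illustrative domain-specific reflection prompt for a code-review agent
agent = Agent(
    name="Code Review Agent",
    instructions="Review pull requests for correctness and style",
    reflection=ReflectionConfig(
        max_iterations=3,
        prompt="Check for: correct diagnosis, security impact, concrete fix suggestions"
    )
)
```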
More iterations and stronger models improve quality but increase cost and latency.
See also:
- Self Reflection: Learn about self-reflection features
- PlanningConfig: Configure planning mode