Enable agents to evaluate their own responses and iteratively improve quality through self-reflection.

Quick Start

1. Simple Enable

Enable reflection with defaults:
from praisonaiagents import Agent

agent = Agent(
    name="Reflective Agent",
    instructions="You self-reflect on responses",
    reflection=True
)
2. With Configuration

Configure reflection behavior:
from praisonaiagents import Agent
from praisonaiagents.config import ReflectionConfig

agent = Agent(
    name="Reflective Agent",
    instructions="You self-reflect on responses",
    reflection=ReflectionConfig(
        min_iterations=1,
        max_iterations=3,
        llm="gpt-4o",
        prompt="Evaluate accuracy and completeness..."
    )
)

Configuration Options

from praisonaiagents.config import ReflectionConfig

config = ReflectionConfig(
    # Iteration limits
    min_iterations=1,
    max_iterations=3,
    
    # Reflection LLM (if different from main)
    llm=None,
    
    # Custom reflection prompt
    prompt=None
)
Parameter        Type         Default   Description
min_iterations   int          1         Minimum reflection iterations
max_iterations   int          3         Maximum reflection iterations
llm              str | None   None      Model for reflection (defaults to the agent's model)
prompt           str | None   None      Custom prompt for evaluation

Common Patterns

Pattern 1: High-Quality Responses

from praisonaiagents import Agent
from praisonaiagents.config import ReflectionConfig

agent = Agent(
    name="Quality Agent",
    instructions="Produce high-quality analysis",
    reflection=ReflectionConfig(
        min_iterations=2,
        max_iterations=5,
        prompt="Check for: accuracy, completeness, clarity, actionability"
    )
)

Pattern 2: Different Model for Reflection

from praisonaiagents import Agent
from praisonaiagents.config import ReflectionConfig

agent = Agent(
    name="Dual Model Agent",
    instructions="Use GPT-4o to review responses",
    llm="gpt-4o-mini",  # Main model
    reflection=ReflectionConfig(
        llm="gpt-4o",   # Better model for evaluation
        max_iterations=2
    )
)

Best Practices

Balance quality with response time. 2-3 iterations is usually sufficient.
Customize the reflection prompt to evaluate domain-specific criteria.
More iterations and stronger models improve quality but increase cost and latency.
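The cost point above can be made concrete: each reflection round adds at least one evaluation call and, when revision is needed, one regeneration call, so the worst-case number of LLM calls grows linearly with max_iterations. A rough back-of-the-envelope sketch (assuming one call per step, which a real pipeline may batch differently):

```python
def worst_case_calls(max_iterations):
    # 1 initial generation, plus per iteration: 1 evaluation + 1 revision.
    return 1 + 2 * max_iterations
```

With max_iterations=3, a single request can cost up to seven LLM calls, which is why the 2-3 iteration range is a sensible default.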