Reflection lets agents review their output and self-improve.

Quick Start

1. Create Self-Reviewing Agent

use praisonai::Agent;

// Build reflection loop into instructions
let agent = Agent::new()
    .name("Writer")
    .instructions("Create high-quality content by:
    1. Writing an initial draft
    2. Reviewing your draft critically
    3. Making improvements based on your review
    4. Providing the polished final version")
    .build()?;

let response = agent.chat("Write about AI").await?;
// Agent writes, self-reviews, and outputs improved version

2. Two-Agent Reflection Loop

use praisonai::Agent;

// Writer agent
let writer = Agent::new()
    .name("Writer")
    .instructions("Write clear, engaging content")
    .build()?;

// Reviewer agent for reflection
let reviewer = Agent::new()
    .name("Reviewer")
    .instructions("Review the content. List 3 specific improvements needed.")
    .build()?;

// Reflection loop
let mut content = writer.chat("Write about AI trends").await?;

for _ in 0..2 {  // Max 2 improvement rounds
    let feedback = reviewer.chat(&format!("Review:\n{}", content)).await?;
    content = writer.chat(&format!("Improve based on feedback:\n{}\n\nOriginal:\n{}", feedback, content)).await?;
}

How It Works

The agent produces a draft, a reviewer (either the agent itself, via its instructions, or a second agent) critiques it, and the draft is revised using that feedback. The cycle repeats until the maximum number of iterations is reached.

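Stripped of the agent framework, reflection is a generate-critique-revise loop. The sketch below models it with plain closures standing in for the LLM calls (the stub closures and the `reflect` helper are illustrative, not part of any PraisonAI API):

```rust
// Generic reflection loop: draft, critique, revise, repeat.
// `generate` and `review` stand in for LLM calls.
fn reflect<G, R>(prompt: &str, mut generate: G, mut review: R, max_iterations: usize) -> String
where
    G: FnMut(&str) -> String,
    R: FnMut(&str) -> String,
{
    // Initial draft.
    let mut draft = generate(prompt);
    for _ in 0..max_iterations {
        // Critique the current draft, then revise it with that feedback.
        let feedback = review(&draft);
        draft = generate(&format!(
            "Improve based on feedback:\n{feedback}\n\nOriginal:\n{draft}"
        ));
    }
    draft
}

fn main() {
    // Stub "LLM" closures that just tag their input, to show the data flow.
    let result = reflect(
        "Write about AI",
        |p: &str| format!("draft of: {p}"),
        |d: &str| format!("critique of: {d}"),
        2,
    );
    println!("{result}");
}
```

Each pass feeds the previous draft and its critique back into the generator, which is exactly what the two-agent loop above does with `writer` and `reviewer`.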
Configuration

Option           Type     Default    Description
enabled          bool     false      Enable reflection
max_iterations   usize    2          Max improvement rounds
llm              String   Main LLM   Model used for the reflection step
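Assuming these options map onto the same builder pattern used in the examples above, enabling reflection might look like the following sketch. The `.reflection`, `.max_iterations`, and `.reflection_llm` method names (and the "gpt-4o" model string) are assumptions for illustration, not confirmed API:

```rust
use praisonai::Agent;

// Hypothetical: option names follow the table above.
let agent = Agent::new()
    .name("Writer")
    .instructions("Write clear, engaging content")
    .reflection(true)          // enabled: turn reflection on
    .max_iterations(3)         // max_iterations: cap improvement rounds
    .reflection_llm("gpt-4o")  // llm: separate model for the review step
    .build()?;
```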

Best Practices

Reflection is most valuable for writing, analysis, and complex reasoning tasks.
Two to three iterations are usually sufficient; beyond that, each extra round adds latency for diminishing returns.
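One way to avoid wasted rounds is to let the reviewer signal approval and stop early instead of always running the maximum. A minimal self-contained sketch, where the "APPROVED" sentinel and the stub closures are assumptions for illustration (in practice you would instruct the reviewer agent to emit such a marker):

```rust
// Early-exit reflection: stop as soon as the reviewer approves,
// instead of always running the maximum number of rounds.
fn reflect_until_approved<G, R>(
    prompt: &str,
    mut generate: G,
    mut review: R,
    max_rounds: usize,
) -> (String, usize)
where
    G: FnMut(&str) -> String,
    R: FnMut(&str) -> String,
{
    let mut draft = generate(prompt);
    for round in 0..max_rounds {
        let feedback = review(&draft);
        if feedback.contains("APPROVED") {
            return (draft, round); // reviewer is satisfied; skip remaining rounds
        }
        draft = generate(&format!(
            "Improve based on feedback:\n{feedback}\n\nOriginal:\n{draft}"
        ));
    }
    (draft, max_rounds)
}

fn main() {
    // Stub reviewer that approves on its second look.
    let mut reviews = 0;
    let (text, rounds) = reflect_until_approved(
        "Write about AI trends",
        |p: &str| format!("draft: {p}"),
        |_d: &str| {
            reviews += 1;
            if reviews >= 2 { "APPROVED".to_string() } else { "Tighten the intro.".to_string() }
        },
        3,
    );
    println!("{text} (stopped after {rounds} improvement rounds)");
}
```

This keeps the `max_iterations` cap as a safety limit while letting good drafts exit the loop sooner.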