Evaluation measures how well your agents perform, helping you improve over time.

Quick Start

1. Evaluate Response

use praisonai::{Agent, AccuracyEvaluator};

let agent = Agent::new().name("Assistant").build()?;
let evaluator = AccuracyEvaluator::new();

let response = agent.chat("What is 2+2?").await?;
let score = evaluator.evaluate(&response, "4");

println!("Accuracy: {:.1}%", score.value * 100.0);

2. Multiple Criteria

use praisonai::CriteriaEvaluator;

let evaluator = CriteriaEvaluator::new()
    .criterion("clarity", "Is the response clear?")
    .criterion("accuracy", "Is the response accurate?")
    .build();

let scores = evaluator.evaluate(&response);
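
The structure of the returned `scores` isn't documented here; assuming it can be iterated as criterion/score pairs (an assumption, not a confirmed API), inspecting the results might look like:

// Hypothetical: walk the per-criterion results from CriteriaEvaluator.
for (criterion, score) in &scores {
    println!("{criterion}: {:.1}%", score.value * 100.0);
}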

Evaluator Types

AccuracyEvaluator: Correctness vs expected
CriteriaEvaluator: Multiple custom criteria
PerformanceEvaluator: Speed and efficiency
Judge: LLM-as-judge scoring
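
The Quick Start covers the first two. As a rough sketch of the other two, assuming they follow the same constructor-and-evaluate pattern as the evaluators above (the builder methods and the async call shown here are illustrative, not a confirmed API):

use praisonai::{PerformanceEvaluator, Judge};

// Hypothetical: score the speed and efficiency of a single exchange.
let perf = PerformanceEvaluator::new();
let perf_score = perf.evaluate(&response);
println!("Performance: {:.1}%", perf_score.value * 100.0);

// Hypothetical: have an LLM judge the response against a rubric.
let judge = Judge::new()
    .criterion("helpfulness", "Does the response fully answer the question?")
    .build();
let judge_score = judge.evaluate(&response).await?;
println!("Judge: {:.1}%", judge_score.value * 100.0);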

Best Practices

Evaluate against a varied set of test cases so scores reflect more than a single prompt; a batch-evaluation sketch follows below.
Treat low scores as a signal for which prompts or tools need improvement.
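
A minimal sketch of that first point, reusing the Agent and AccuracyEvaluator calls from the Quick Start and averaging accuracy over several hypothetical question/answer pairs:

use praisonai::{Agent, AccuracyEvaluator};

let agent = Agent::new().name("Assistant").build()?;
let evaluator = AccuracyEvaluator::new();

// Hypothetical test set: vary topic and difficulty so one easy prompt doesn't dominate the score.
let cases = [
    ("What is 2+2?", "4"),
    ("What is the capital of France?", "Paris"),
    ("Which planet in the solar system is largest?", "Jupiter"),
];

let mut total = 0.0;
for (question, expected) in cases {
    let response = agent.chat(question).await?;
    let score = evaluator.evaluate(&response, expected);
    total += score.value;
}

println!("Mean accuracy: {:.1}%", total / cases.len() as f64 * 100.0);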