# Documentation Index

Fetch the complete documentation index at: https://docs.praison.ai/llms.txt. Use this file to discover all available pages before exploring further.
## eval

Evaluation framework for PraisonAI Agents. Provides comprehensive evaluation capabilities for AI agents, with zero performance impact when not in use thanks to lazy loading.

Evaluator types:

- `AccuracyEvaluator`: Compare output against expected output using an LLM as judge
- `PerformanceEvaluator`: Measure runtime and memory usage
- `ReliabilityEvaluator`: Verify that expected tool calls are made
- `CriteriaEvaluator`: Evaluate against custom criteria
## Import
## Constants

| Name | Value |
|---|---|
| `_LAZY_IMPORTS` | `{'BaseEvaluator': ('base', 'BaseEvaluator'), 'AccuracyEvaluator': ('accuracy', 'AccuracyEvaluator'), 'PerformanceEvaluator': ('performance', 'PerformanceEvaluator'), 'ReliabilityEvaluator': ('reliabil...` |
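The shape of `_LAZY_IMPORTS` suggests the standard lazy-import pattern: each public name maps to the submodule that defines it, and that submodule is imported only on first access, so unused evaluators cost nothing at import time. Below is a minimal, self-contained sketch of that pattern. It uses stdlib modules as stand-ins for the evaluator submodules, and the helper `lazy_get` plus the mapping contents are illustrative, not the package's actual API (the real package most likely wires this through a module-level `__getattr__` per PEP 562, but that is an assumption).

```python
import importlib

# Illustrative mapping in the style of _LAZY_IMPORTS:
# public name -> (module to import, attribute inside that module).
# Stdlib modules stand in for the real evaluator submodules here.
_LAZY = {
    "OrderedDict": ("collections", "OrderedDict"),
    "Counter": ("collections", "Counter"),
}
_cache = {}


def lazy_get(name):
    """Resolve `name` from _LAZY, importing its module only on first access."""
    if name not in _cache:
        if name not in _LAZY:
            raise AttributeError(f"no lazy attribute {name!r}")
        module_name, attr = _LAZY[name]
        # The import happens here, not at module load time.
        _cache[name] = getattr(importlib.import_module(module_name), attr)
    return _cache[name]
```

In a package `__init__.py`, the same body would typically live in a module-level `__getattr__(name)` function, so `from package import AccuracyEvaluator` triggers the submodule import transparently.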

