Quick Start
1. Install Package
First, install the PraisonAI Agents package:
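For example, with pip (the package is published on PyPI as praisonaiagents):

```bash
pip install praisonaiagents
```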
2. Enable Quality Checking
Create a task with quality checking enabled:
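A minimal sketch, assuming the Agent/Task/PraisonAIAgents interface from the package quick start; quality_check is the documented flag, while the other parameter names follow that interface and may differ across versions:

```python
from praisonaiagents import Agent, Task, PraisonAIAgents

researcher = Agent(
    name="Researcher",
    role="Senior researcher",
    goal="Produce accurate, well-organised research notes",
)

research_task = Task(
    description="Summarise the current state of small language models.",
    expected_output="A structured summary with key findings and sources.",
    agent=researcher,
    quality_check=True,   # enables automatic quality assessment of the output
)

agents = PraisonAIAgents(
    agents=[researcher],
    tasks=[research_task],
    memory=True,          # lets high-scoring outputs be stored in long-term memory
)
agents.start()
```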
How Quality Checking Works
When quality_check=True is set on a task (sketched in pseudocode after this list):
- Output Generation: Agent completes the task normally
- Quality Assessment: System automatically evaluates the output
- Score Calculation: Assigns a quality score (0.0 to 1.0)
- Memory Storage: High-quality outputs (score > 0.7) are stored in long-term memory
- Metadata Tracking: Quality metrics are saved with the result
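The flow can be pictured with the following illustrative pseudocode; this is not PraisonAI's internal implementation, only the decision logic described above:

```python
# Illustrative pseudocode only -- not PraisonAI's internal implementation.
QUALITY_THRESHOLD = 0.7   # documented default for long-term memory storage

def process_task_output(output: str, assess_quality) -> dict:
    score = min(max(assess_quality(output), 0.0), 1.0)   # score in 0.0-1.0
    stored = score > QUALITY_THRESHOLD                    # memory storage decision
    return {
        "output": output,                                 # the agent's normal result
        "metadata": {"quality_score": score,              # metadata tracking
                     "stored_in_memory": stored},
    }

# A trivial length-based assessor stands in for the real evaluator.
print(process_task_output("A short draft answer.", lambda text: len(text.split()) / 10))
```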
Quality Metrics
The quality assessment evaluates multiple factors:
Completeness
- Does output match expected format?
- Are all requirements addressed?
- Is the response comprehensive?
Coherence
- Is the output well-structured?
- Does it flow logically?
- Are ideas connected properly?
Relevance
- Does output address the task?
- Is content on-topic?
- Are examples appropriate?
Accuracy
- Are facts correct?
- Is reasoning sound?
- Are calculations accurate?
Complete Examples
Example 1: Blog Post with Quality Tracking
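A sketch of a blog-writing task with quality tracking enabled, under the same interface assumptions as the Quick Start example:

```python
from praisonaiagents import Agent, Task, PraisonAIAgents

writer = Agent(
    name="Writer",
    role="Content writer",
    goal="Write clear, engaging blog posts",
)

blog_task = Task(
    description="Write a 500-word blog post introducing vector databases.",
    expected_output=(
        "A blog post with a title, introduction, three body sections, "
        "and a conclusion, around 500 words."
    ),
    agent=writer,
    quality_check=True,   # track a quality score alongside the post
)

PraisonAIAgents(agents=[writer], tasks=[blog_task], memory=True).start()
```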
Example 2: Code Generation with Quality Metrics
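A similar sketch for code generation; the quality score then reflects how completely the generated code matches the expected output:

```python
from praisonaiagents import Agent, Task, PraisonAIAgents

coder = Agent(
    name="Coder",
    role="Python developer",
    goal="Write correct, readable Python code with tests",
)

code_task = Task(
    description="Write a Python function that validates ISO 8601 dates, plus unit tests.",
    expected_output="A code block containing the function, a docstring, and pytest tests.",
    agent=coder,
    quality_check=True,   # score the generated code for completeness and accuracy
)

PraisonAIAgents(agents=[coder], tasks=[code_task]).start()
```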
Example 3: Research with Quality Filtering
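A sketch of a research task with memory enabled, so only outputs scoring above the 0.7 threshold reach long-term memory:

```python
from praisonaiagents import Agent, Task, PraisonAIAgents

researcher = Agent(
    name="Researcher",
    role="Market researcher",
    goal="Collect reliable, relevant findings",
)

research_task = Task(
    description="Research recent trends in open-weight LLM releases.",
    expected_output="A bullet list of findings, each with a one-line source note.",
    agent=researcher,
    quality_check=True,   # low-scoring outputs are not promoted to memory
)

# memory=True enables long-term storage; the quality check acts as the filter.
PraisonAIAgents(agents=[researcher], tasks=[research_task], memory=True).start()
```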
Example 4: Iterative Quality Improvement
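A sketch of an iterative loop. Because reading the built-in score back from a result is version-dependent, this example uses a small local scorer to decide whether to retry with a refined description; treat it as a pattern, not the library's API:

```python
from praisonaiagents import Agent, Task, PraisonAIAgents

def score_output(text: str) -> float:
    """Toy local scorer for illustration; replace with a real rubric or LLM judge."""
    score = 0.0
    if len(text.split()) >= 150:                           # long enough
        score += 0.5
    if "summary" in text.lower():                          # has the required section
        score += 0.3
    if "pros" in text.lower() and "cons" in text.lower():  # covers both sides
        score += 0.2
    return score

analyst = Agent(name="Analyst", role="Research analyst",
                goal="Produce well-structured analyses")

description = "Analyse the pros and cons of on-device LLM inference."
for attempt in range(3):
    task = Task(
        description=description,
        expected_output="An analysis with a clearly labelled summary section.",
        agent=analyst,
        quality_check=True,
    )
    result = PraisonAIAgents(agents=[analyst], tasks=[task]).start()
    score = score_output(str(result))
    print(f"Attempt {attempt + 1}: local score {score:.2f}")
    if score >= 0.8:
        break
    # Feed the shortfall back into the next attempt's description.
    description += " Include a labelled summary and cover both pros and cons in detail."
```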
Configuration Options
Task-Level Configuration
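Quality checking is enabled per task via the quality_check flag; a minimal sketch:

```python
from praisonaiagents import Agent, Task

analyst = Agent(name="Analyst", role="Report analyst",
                goal="Summarise documents accurately")

report_task = Task(
    description="Summarise the quarterly report in 300 words.",
    expected_output="A 300-word summary with three key takeaways.",
    agent=analyst,
    quality_check=True,   # the documented flag that turns on quality assessment
)
```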
Memory Configuration for Quality
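For high-quality outputs to be stored, memory must be enabled on the agent runner. The memory_config keys below are illustrative assumptions, not a guaranteed schema; check the memory documentation for your installed version. This continues from the task-level snippet above:

```python
from praisonaiagents import PraisonAIAgents

agents = PraisonAIAgents(
    agents=[analyst],            # from the task-level snippet above
    tasks=[report_task],
    memory=True,                 # required for high-quality outputs to persist
    # Illustrative config only; the exact schema depends on your installed version.
    memory_config={
        "provider": "rag",
        "config": {"collection_name": "quality_outputs"},
    },
)
agents.start()
```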
Quality Score Calculation
The default quality scoring considers the following weighted factors; a weighted-sum sketch follows this list.
Task Completion (40%)
- Does output match expected format?
- Are requirements met?
Content Quality (30%)
- Grammar and coherence
- Logical flow
- Appropriate length
Relevance (20%)
- On-topic content
- Addresses the prompt
Creativity/Insight (10%)
- Novel approaches
- Valuable insights
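Combining these weights is a straightforward weighted sum; the sketch below shows the arithmetic, with the individual component scores assumed to come from an LLM or heuristic evaluator:

```python
# Weighted sum using the documented weights; component scores (each in 0.0-1.0)
# are assumed to come from an LLM or heuristic evaluator.
WEIGHTS = {
    "task_completion": 0.40,
    "content_quality": 0.30,
    "relevance": 0.20,
    "creativity": 0.10,
}

def combine_scores(components: dict) -> float:
    total = sum(weight * components.get(name, 0.0) for name, weight in WEIGHTS.items())
    return round(min(max(total, 0.0), 1.0), 2)

print(combine_scores({"task_completion": 0.9, "content_quality": 0.8,
                      "relevance": 1.0, "creativity": 0.5}))   # -> 0.85
```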
Custom Quality Functions
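You can supply your own scoring function. The documented contract (see Troubleshooting) is that it receives the task output and returns a float between 0 and 1; the quality_checker parameter name used below to register it is an assumption and may differ in your version:

```python
from praisonaiagents import Agent, Task, PraisonAIAgents

def keyword_quality_checker(task_output) -> float:
    """Receives the task output and returns a score between 0.0 and 1.0."""
    text = str(task_output).lower()
    required = ["introduction", "example", "conclusion"]
    return sum(1 for word in required if word in text) / len(required)

writer = Agent(name="Writer", role="Technical writer",
               goal="Write complete, well-structured articles")

article_task = Task(
    description="Write a short article about retrieval-augmented generation.",
    expected_output="An article with an introduction, an example, and a conclusion.",
    agent=writer,
    quality_check=True,
    quality_checker=keyword_quality_checker,   # hypothetical parameter name
)

PraisonAIAgents(agents=[writer], tasks=[article_task]).start()
```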
Best Practices
Define Clear Expectations
- Specify detailed expected outputs
- Include format requirements
- List must-have elements
- Provide examples when possible
Use Appropriate Models
- GPT-4 for complex quality needs
- GPT-3.5 for basic quality checks
- Specialized models for domains
- Consider cost vs quality tradeoff
Quality Metrics Dashboard
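A simple dashboard needs no library-specific APIs: the sketch below aggregates whatever per-task scores your application records and applies the documented 0.7 storage threshold:

```python
from statistics import mean

def print_quality_dashboard(records):
    """records: list of {'task': str, 'score': float} collected by your application."""
    print(f"{'Task':<25} {'Score':>6}  Stored?")
    for record in records:
        stored = "yes" if record["score"] > 0.7 else "no"   # documented 0.7 threshold
        print(f"{record['task']:<25} {record['score']:>6.2f}  {stored}")
    print(f"\nAverage quality: {mean(r['score'] for r in records):.2f}")

print_quality_dashboard([
    {"task": "Blog post", "score": 0.85},
    {"task": "Code generation", "score": 0.65},
    {"task": "Research summary", "score": 0.92},
])
```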
Troubleshooting
Quality scores always low
- Review expected_output clarity
- Check if model is appropriate
- Verify quality checker logic
- Ensure task description is detailed
Nothing stored in memory
- Verify memory is configured
- Check quality threshold (default 0.7)
- Ensure quality_check=True
- Look at actual quality scores
Custom quality checker not working
- Ensure function returns float 0-1
- Check function receives task_output
- Verify no exceptions thrown
- Test function independently

