Guardrails
Output validation and quality assurance for tasks
Guardrails System
Guardrails provide output validation and quality assurance for agent tasks, ensuring results meet specified criteria before being accepted.
Overview
Guardrails ensure task outputs meet quality and safety criteria through:
- Function-based validation for structured checks
- LLM-based validation for natural language criteria
- Automatic retry mechanisms for failed validations
- Custom validation logic for specific requirements
Quick Start
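A minimal sketch of the pattern to get started. Assumptions: the validator here receives the raw output as a plain string (the library passes its own task-output object), and the `Task(...)` wiring in the comment is illustrative, not a verified signature.

```python
# A minimal function guardrail: reject outputs that are too short.
# Sketch only -- the validator inspects a plain string for illustration.

def validate_summary(output: str):
    """Return (success, result): the tuple format described on this page."""
    if len(output) < 50:
        return False, "Summary too short: expected at least 50 characters"
    return True, output

# Wiring it into a task would look roughly like (assumed shape, not runnable here):
#   task = Task(description="Summarise the report",
#               agent=writer, guardrail=validate_summary, max_retries=3)

ok, result = validate_summary("Too short.")
print(ok, result)   # False, with an explanatory message
```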
Guardrail Types
Function-Based Guardrails
Function guardrails provide programmatic validation:
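For example, a structural check that the output parses as JSON and carries the fields a downstream step needs (field names are illustrative; the validator again takes a plain string for the sketch):

```python
import json

def validate_report_json(output: str):
    """Programmatic validation: valid JSON with required fields."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError as exc:
        return False, f"Output is not valid JSON: {exc}"
    missing = [k for k in ("title", "findings") if k not in data]
    if missing:
        return False, f"Missing required fields: {', '.join(missing)}"
    return True, data  # pass the parsed structure on as the final result
```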
LLM-Based Guardrails
LLM guardrails use natural language for validation:
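The criterion is written as plain prose and a model judges the output against it. To show the control flow without an API call, the judge below is a stand-in keyword check, not a real model call:

```python
# With an LLM guardrail the criterion might be passed as a string, e.g.
#   guardrail="The response must contain no placeholder text"
CRITERION = "The response must contain no placeholder text"

def llm_judge(criterion: str, output: str) -> bool:
    # Stand-in for a model call returning pass/fail for the criterion.
    return "TODO" not in output and "lorem ipsum" not in output.lower()

def llm_guardrail(output: str):
    if llm_judge(CRITERION, output):
        return True, output
    return False, f"Failed criterion: {CRITERION}"
```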
Advanced Features
Retry Configuration
Configure retry behaviour for failed validations:
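How `max_retries` plays out can be sketched as a loop: each failure feeds the error message back so the next attempt knows what to fix. `execute` below is a stand-in for the agent call:

```python
def run_with_guardrail(execute, guardrail, max_retries=3):
    """Run `execute` until `guardrail` passes, feeding failures back."""
    feedback = None
    for _attempt in range(max_retries + 1):   # first try + max_retries retries
        output = execute(feedback)
        ok, result = guardrail(output)
        if ok:
            return result
        feedback = result                     # the failure message guides the retry
    raise RuntimeError(f"Validation still failing after {max_retries} retries: {feedback}")

# A fake "agent" that only answers correctly once it receives feedback:
calls = []
def fake_agent(feedback):
    calls.append(feedback)
    return "ok" if feedback else "bad"

final = run_with_guardrail(
    fake_agent,
    lambda o: (o == "ok", o if o == "ok" else "please answer 'ok'"),
)
```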
Composite Guardrails
Combine multiple validation criteria:
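One way to compose validators (a sketch, not a library API): all must pass, and the output, possibly transformed along the way, flows through each in turn:

```python
def compose_guardrails(*guardrails):
    """Chain validators: stop at the first failure, thread the output through."""
    def combined(output):
        for guardrail in guardrails:
            ok, output = guardrail(output)
            if not ok:
                return False, output   # output now holds the failure message
        return True, output
    return combined

def not_empty(o):
    stripped = o.strip()
    return (bool(stripped), stripped or "Output is empty")

def max_len(o):
    return (len(o) <= 280, o if len(o) <= 280 else "Output exceeds 280 characters")

check = compose_guardrails(not_empty, max_len)
```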
Guardrail Results
Access detailed validation results:
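A richer result object might carry the flag, the final (possibly repaired) output, and the failure reason. The class below is a hypothetical sketch; the library's own result type may differ:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class GuardrailResult:
    success: bool
    result: Any
    error: str = ""

def validate(output: str) -> GuardrailResult:
    if not output.strip():
        return GuardrailResult(False, None, "Output is empty")
    return GuardrailResult(True, output.strip())

r = validate("  report ready  ")
print(r.success, r.result)   # True report ready
```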
Use Cases
Content Safety
Ensure generated content is safe and appropriate
Data Validation
Validate analysis results and reports
Code Quality
Ensure generated code is safe and functional
Compliance
Meet regulatory and policy requirements
Integration Patterns
With Agents
With Dynamic Guardrails
Best Practices
Clear Criteria
- Define specific, measurable criteria
- Document validation requirements
- Provide helpful error messages
- Include examples of valid output
Balanced Approach
- Use function guardrails for simple checks
- Reserve LLM guardrails for complex validation
- Implement caching for repeated validations
Retry Strategy
- Set appropriate retry limits
- Provide detailed failure reasons
- Suggest corrections when possible
Testing
- Log validation failures
- Handle edge cases gracefully
- Test guardrails independently
- Verify both pass and fail cases
- Check retry behaviour
- Monitor validation performance
Complete Example
Next Steps
Validation Process
How It Works
1. Agent executes the task and produces output
2. Guardrail validates the output
3. If validation fails:
   - Task retries (up to `max_retries` times)
   - Agent receives feedback about what to fix
4. If validation passes:
   - Output is returned as the final result
Validation Response Format
Function guardrails return a tuple:
Where:
- `success`: whether the validation passed
- `result`: the modified output on success, or an error message on failure
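Both branches of the contract in one small sketch: on success the second element is the (possibly modified) output; on failure it is the error message:

```python
def guardrail(output: str):
    if "forbidden" in output:
        return False, "Output contains forbidden content"
    return True, output.strip()     # success may also transform the output

print(guardrail("  fine  "))        # (True, 'fine')
print(guardrail("forbidden word"))  # (False, 'Output contains forbidden content')
```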
Advanced Examples
Content Moderation
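A simple safety filter blocking outputs that mention restricted terms. Real moderation would use a classifier or an LLM judge; the keyword list here is only a sketch:

```python
BANNED = {"password", "ssn", "credit card"}

def moderate(output: str):
    """Fail if the output mentions any restricted term."""
    lowered = output.lower()
    hits = sorted(term for term in BANNED if term in lowered)
    if hits:
        return False, f"Output mentions restricted terms: {', '.join(hits)}"
    return True, output
```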
Data Format Validation
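For structured analysis results, a guardrail can check both presence and consistency of fields. The field names and the percentages-sum rule below are illustrative:

```python
def validate_analysis(data: dict):
    """Require the expected fields and segment percentages summing to ~100."""
    for field in ("segments", "total"):
        if field not in data:
            return False, f"Missing field: {field}"
    share = sum(s["pct"] for s in data["segments"])
    if abs(share - 100.0) > 0.1:
        return False, f"Segment percentages sum to {share}, expected 100"
    return True, data
```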
Combining Multiple Validations
Output Transformation
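Guardrails can repair as well as reject: here whitespace is normalised and a trailing newline enforced instead of failing outright (a sketch of the transform-on-success branch of the tuple contract):

```python
def tidy_output(output: str):
    """Normalise whitespace; fail only if nothing is left after cleanup."""
    cleaned = " ".join(output.split())
    if not cleaned:
        return False, "Output is empty after cleanup"
    return True, cleaned + "\n"
```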
Complex LLM Validation
Multi-Criteria Validation
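Rather than requiring every check to pass, multi-criteria validation can score the output and accept it above a threshold. The criteria, weights, and threshold below are all illustrative:

```python
CRITERIA = [
    ("has_title",   0.3, lambda o: o.lstrip().startswith("#")),
    ("long_enough", 0.4, lambda o: len(o) >= 100),
    ("no_todos",    0.3, lambda o: "TODO" not in o),
]

def multi_criteria(output: str, threshold: float = 0.7):
    """Weighted pass/fail: accept if the aggregate score clears the threshold."""
    score = sum(weight for _, weight, check in CRITERIA if check(output))
    if score < threshold:
        failed = [name for name, _, check in CRITERIA if not check(output)]
        return False, f"Score {score:.1f} < {threshold}; failed: {', '.join(failed)}"
    return True, output
```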
Domain-Specific Validation
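For a code-generation task, a domain-specific check might require that the snippet is at least syntactically valid before acceptance. This sketch uses Python's built-in `compile()`:

```python
def validate_python(code: str):
    """Reject generated code that is not syntactically valid Python."""
    try:
        compile(code, "<generated>", "exec")
    except SyntaxError as exc:
        return False, f"Generated code has a syntax error: {exc.msg} (line {exc.lineno})"
    return True, code
```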
Integration Patterns
With PraisonAIAgents
Conditional Guardrails
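A guardrail can be applied only when some condition on the task holds, e.g. strict checks for customer-facing output but not for internal drafts. The task object here is a plain dict for illustration:

```python
def conditional(guardrail, predicate):
    """Run `guardrail` only for tasks matching `predicate`; pass through otherwise."""
    def wrapped(task, output):
        if predicate(task):
            return guardrail(output)
        return True, output
    return wrapped

def no_draft_markers(output):
    if "draft" in output.lower():
        return False, "Remove draft markers before sending"
    return True, output

check = conditional(no_draft_markers, lambda task: task.get("audience") == "customer")
```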
Async Guardrails
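For validators that call out to external services, the guardrail itself can be a coroutine. The external call is stubbed with `asyncio.sleep` in this sketch:

```python
import asyncio

async def async_guardrail(output: str):
    await asyncio.sleep(0)   # stand-in for awaiting a real external checker
    if len(output) < 5:
        return False, "Output too short"
    return True, output

ok, result = asyncio.run(async_guardrail("long enough"))
```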
Error Handling
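A crashing validator should register as a failed validation rather than take down the run. One defensive pattern (a sketch, not a library feature) wraps the guardrail in a try/except:

```python
import json

def safe(guardrail):
    """Convert exceptions raised inside a guardrail into validation failures."""
    def wrapped(output):
        try:
            return guardrail(output)
        except Exception as exc:   # report the crash, don't propagate it
            return False, f"Guardrail raised {type(exc).__name__}: {exc}"
    return wrapped

parse = safe(lambda o: (True, json.loads(o)))
```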
Best Practices
- Clear Criteria - Make validation criteria specific and measurable
- Helpful Feedback - Provide clear error messages for failed validations
- Appropriate Retries - Set `max_retries` based on task complexity
- Performance - Consider validation overhead, especially for LLM-based guardrails
- Combine Approaches - Use function guardrails for simple checks, LLM for complex validation
- Test Thoroughly - Test guardrails with various outputs, including edge cases
- Fail Gracefully - Have fallback behaviour when validation consistently fails