Task validation in PraisonAI Agents ensures output quality through guardrails and decision-based feedback systems. When validation fails, tasks automatically retry with contextual feedback about what went wrong.

Quick Start

1. Install Package

First, install the PraisonAI Agents package:
pip install praisonaiagents
2. Create Validation with Guardrails

The simplest way to add validation is using guardrails:
from praisonaiagents import Agent, Task, PraisonAIAgents
from typing import Tuple, Any

# Define validation function
def validate_word_count(task_output) -> Tuple[bool, Any]:
    word_count = len(task_output.raw.split())
    if word_count == 500:
        return True, task_output
    else:
        return False, f"Article has {word_count} words, expected exactly 500"

# Create writer agent
writer = Agent(
    name="Writer",
    role="Content creator",
    goal="Write articles with exact word counts"
)

# Task with guardrail validation
write_task = Task(
    name="Write article",
    description="Write a 500-word article about AI",
    expected_output="500-word article",
    agent=writer,
    guardrail=validate_word_count,  # Validation function
    max_retries=3  # Will retry up to 3 times if validation fails
)

# Run the task
agents = PraisonAIAgents(
    agents=[writer],
    tasks=[write_task],
    process="sequential"
)

result = agents.start()
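
If validate_word_count returns False, the feedback string (including the actual word count) is passed back to the Writer agent as context and the task retries, up to max_retries times.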

Validation Methods

PraisonAI offers two primary validation approaches: inline guardrails and decision-based validation workflows.

1. Guardrails

Guardrails provide inline validation with automatic retry and feedback mechanisms.

Function-Based Guardrails

from typing import Tuple, Any

def validate_output(task_output) -> Tuple[bool, Any]:
    """
    Validate task output
    Returns: (is_valid, feedback_or_output)
    """
    content = task_output.raw
    
    # Your validation logic
    if len(content) > 100:
        return True, task_output
    else:
        return False, "Content too short, needs at least 100 characters"

task = Task(
    description="Generate detailed analysis",
    agent=agent,
    guardrail=validate_output,
    max_retries=3
)

LLM-Based Guardrails

For complex validation that requires understanding:
writer = Agent(
    name="Writer",
    role="Content creator",
    goal="Write high-quality content",
    llm="gpt-4"  # Required for LLM guardrails
)

task = Task(
    description="Write a technical blog post about quantum computing",
    expected_output="Technical blog post",
    agent=writer,
    guardrail="Validate that the blog post: 1) Is technically accurate, 2) Contains at least 3 code examples, 3) Has proper introduction and conclusion sections",
    max_retries=2
)
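
Because the criteria are evaluated by the agent's LLM, keep them explicit and enumerable as above: each numbered point gives the model a concrete check to perform.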

2. Decision-Based Validation Workflows

For complex validation flows with multiple validators:
from praisonaiagents import Agent, Task, PraisonAIAgents

# Create agents
writer = Agent(
    name="Writer",
    role="Content creator",
    goal="Write content"
)

validator = Agent(
    name="Quality Checker",
    role="Content validator",
    goal="Ensure content meets requirements"
)

# Writing task
write_task = Task(
    name="write_article",
    description="Write a 500-word article about AI ethics",
    expected_output="500-word article",
    agent=writer,
    is_start=True,
    next_tasks=["validate_article"]
)

# Validation task (decision type)
validate_task = Task(
    name="validate_article",
    description="Check if article is exactly 500 words and covers key ethical points. Respond with 'valid' if correct, 'retry' with specific feedback if not.",
    expected_output="Validation decision with feedback",
    agent=validator,
    task_type="decision",
    condition={
        "valid": [],  # End workflow if valid
        "retry": ["write_article"]  # Retry writing if invalid
    }
)

# Create workflow
workflow = PraisonAIAgents(
    agents=[writer, validator],
    tasks={"write": write_task, "validate": validate_task},
    process="workflow"
)

result = workflow.start()
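
Here the validator's response drives the routing: a 'valid' decision ends the workflow, while 'retry' loops back to write_article together with the validator's feedback, as described below.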

How Validation Feedback Works

When validation fails, the system automatically:
  1. Captures the validation feedback, including:
    • The validation decision (e.g., “retry”, “invalid”)
    • Detailed feedback about what went wrong
    • The original output that failed
    • Which validator made the decision
  2. Passes the feedback to the retry task via context:
    # The retry task receives:
    task.validation_feedback = {
        "validation_response": "retry",
        "validation_details": "Article has 450 words, needs exactly 500",
        "rejected_output": "The original article text...",
        "validator_task": "validate_article",
        "validated_task": "write_article"
    }
    
  3. Includes feedback in task context for the next attempt

Complete Examples

Example 1: Data Validation Pipeline

from praisonaiagents import Agent, Task, PraisonAIAgents
from typing import Tuple, Any
import json

# Validation function for JSON data
def validate_json_schema(task_output) -> Tuple[bool, Any]:
    try:
        data = json.loads(task_output.raw)
        
        # Check required fields
        required_fields = ["name", "email", "age"]
        missing_fields = [f for f in required_fields if f not in data]
        
        if missing_fields:
            return False, f"Missing required fields: {', '.join(missing_fields)}"
        
        # Validate data types
        if not isinstance(data["age"], int) or data["age"] < 0:
            return False, "Age must be a positive integer"
        
        if "@" not in data["email"]:
            return False, "Invalid email format"
        
        return True, task_output
        
    except json.JSONDecodeError:
        return False, "Output is not valid JSON"

# Create data processor agent
processor = Agent(
    name="Data Processor",
    role="JSON data generator",
    goal="Generate valid user data in JSON format"
)

# Task with validation
generate_task = Task(
    description="Generate user data for John Doe, age 30, email [email protected]",
    expected_output="Valid JSON with name, email, and age fields",
    agent=processor,
    guardrail=validate_json_schema,
    max_retries=3
)

# Run pipeline
pipeline = PraisonAIAgents(
    agents=[processor],
    tasks=[generate_task]
)

result = pipeline.start()

Example 2: Multi-Stage Validation

from praisonaiagents import Agent, Task, PraisonAIAgents

# Create specialized agents
writer = Agent(
    name="Technical Writer",
    role="Documentation writer",
    goal="Write comprehensive technical documentation"
)

tech_reviewer = Agent(
    name="Technical Reviewer",
    role="Technical accuracy validator",
    goal="Ensure technical accuracy"
)

style_checker = Agent(
    name="Style Checker",
    role="Writing style validator",
    goal="Ensure documentation follows style guide"
)

# Writing task
write_docs = Task(
    name="write_documentation",
    description="Write API documentation for the new authentication endpoint",
    expected_output="Complete API documentation",
    agent=writer,
    is_start=True,
    next_tasks=["technical_review"]
)

# Technical validation
tech_review = Task(
    name="technical_review",
    description="Validate technical accuracy. Check: correct HTTP methods, proper authentication flow, accurate error codes. Respond 'pass' or 'rewrite' with specific issues.",
    expected_output="Technical validation result",
    agent=tech_reviewer,
    task_type="decision",
    condition={
        "pass": ["style_check"],
        "rewrite": ["write_documentation"]
    }
)

# Style validation
style_check = Task(
    name="style_check",
    description="Check documentation style. Verify: consistent formatting, clear examples, proper headings. Respond 'approved' or 'revise' with style issues.",
    expected_output="Style validation result",
    agent=style_checker,
    task_type="decision",
    condition={
        "approved": [],  # Complete
        "revise": ["write_documentation"]
    }
)

# Create validation pipeline
validation_pipeline = PraisonAIAgents(
    agents=[writer, tech_reviewer, style_checker],
    tasks={
        "write": write_docs,
        "tech": tech_review,
        "style": style_check
    },
    process="workflow"
)

result = validation_pipeline.start()

Example 3: Complex Validation with Context

from praisonaiagents import Agent, Task, PraisonAIAgents
from typing import Tuple, Any

# Validation that checks against requirements
class RequirementsValidator:
    def __init__(self, requirements):
        self.requirements = requirements
    
    def validate(self, task_output) -> Tuple[bool, Any]:
        content = task_output.raw
        missing_requirements = []
        
        for req in self.requirements:
            if req.lower() not in content.lower():
                missing_requirements.append(req)
        
        if missing_requirements:
            feedback = f"Missing requirements: {', '.join(missing_requirements)}"
            return False, feedback
        
        return True, task_output

# Create agent
analyst = Agent(
    name="Business Analyst",
    role="Requirements analyst",
    goal="Create comprehensive requirement documents"
)

# Define requirements
requirements = [
    "user authentication",
    "data encryption",
    "audit logging",
    "error handling",
    "performance metrics"
]

# Create validator instance
validator = RequirementsValidator(requirements)

# Task with custom validator
requirements_task = Task(
    description="Write security requirements for the new banking application",
    expected_output="Comprehensive security requirements document",
    agent=analyst,
    guardrail=validator.validate,
    max_retries=3
)

# Run task
agents = PraisonAIAgents(
    agents=[analyst],
    tasks=[requirements_task]
)

result = agents.start()

Validation Feedback in Action

When validation fails, agents receive detailed feedback:
# Example of validation feedback structure
{
    "validation_response": "retry",
    "validation_details": {
        "reason": "Article word count is 450, expected 500",
        "suggestions": "Add 50 more words to meet requirement",
        "failed_criteria": ["word_count"]
    },
    "rejected_output": "The original article content...",
    "validator_task": "validate_article",
    "validated_task": "write_article",
    "retry_count": 1
}

Best Practices

Clear Validation Criteria

  • Define specific, measurable criteria
  • Provide actionable feedback
  • Include examples of valid output (see the sketch below)
  • Set reasonable retry limits
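
A minimal sketch that applies these points; the JSON criterion and function name are illustrative, not part of the PraisonAI API:

import json
from typing import Tuple, Any

def validate_summary_json(task_output) -> Tuple[bool, Any]:
    # Specific, measurable criterion: output must be a JSON object with a "summary" key
    try:
        data = json.loads(task_output.raw)
        if isinstance(data, dict) and "summary" in data:
            return True, task_output
    except json.JSONDecodeError:
        pass
    # Actionable feedback that includes an example of valid output
    return False, 'Output must be a JSON object with a "summary" key, e.g. {"summary": "..."}'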

Efficient Validation

  • Use function guardrails for simple checks
  • Reserve LLM validation for complex logic
  • Validate early to fail fast
  • Cache validation results when possible (see the caching sketch below)
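
For example, a minimal caching sketch keyed on a hash of the raw output (the cache and wrapper names are illustrative); it reuses the validate_output function defined earlier:

import hashlib
from typing import Any, Dict, Tuple

_validation_cache: Dict[str, Tuple[bool, Any]] = {}

def cached_guardrail(task_output) -> Tuple[bool, Any]:
    # Identical outputs hash to the same key, so repeated validations are free
    key = hashlib.sha256(task_output.raw.encode("utf-8")).hexdigest()
    if key not in _validation_cache:
        _validation_cache[key] = validate_output(task_output)
    return _validation_cache[key]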

Advanced Configuration

Retry Strategies

# Configure retry behavior
task = Task(
    description="Generate report",
    agent=agent,
    guardrail=validate_report,
    max_retries=3,
    retry_strategy="exponential",  # or "linear"
    retry_delay=2  # seconds between retries
)

Custom Feedback Formatting

import json
from typing import Tuple, Any

def validate_with_detailed_feedback(task_output) -> Tuple[bool, Any]:
    # Perform validation
    issues = []
    
    if len(task_output.raw) < 100:
        issues.append("Content too short")
    
    if "conclusion" not in task_output.raw.lower():
        issues.append("Missing conclusion section")
    
    if issues:
        feedback = {
            "status": "failed",
            "issues": issues,
            "suggestions": "Please address the above issues",
            "example": "See template for reference"
        }
        return False, json.dumps(feedback)
    
    return True, task_output
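
Returning structured JSON as the feedback string lets the retrying agent address each listed issue individually instead of parsing free-form prose.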

Common Validation Patterns

Word/Character Count

def validate_length(min_words=100, max_words=500):
    def validator(task_output):
        word_count = len(task_output.raw.split())
        if min_words <= word_count <= max_words:
            return True, task_output
        else:
            return False, f"Word count {word_count} not in range {min_words}-{max_words}"
    return validator

task = Task(
    description="Write article",
    guardrail=validate_length(400, 600)
)

Content Requirements

def validate_content_includes(required_sections):
    def validator(task_output):
        content = task_output.raw.lower()
        missing = [s for s in required_sections if s.lower() not in content]
        
        if missing:
            return False, f"Missing sections: {', '.join(missing)}"
        return True, task_output
    return validator

task = Task(
    description="Write report",
    guardrail=validate_content_includes([
        "Executive Summary",
        "Methodology",
        "Results",
        "Conclusion"
    ])
)

Troubleshooting

Validation keeps failing:
  • Check that the validation criteria are achievable
  • Verify the feedback is clear and actionable
  • Test the validation function separately
  • Increase max_retries if needed
  • Ensure the guardrail returns the proper format: (bool, feedback_or_output)

Workflow does not progress:
  • Check workflow connections (next_tasks)
  • Verify decision task conditions
  • Enable verbose mode for debugging

Retries loop indefinitely:
  • Set an appropriate max_retries
  • Implement retry counters (see the sketch below)
  • Add fallback conditions
  • Log validation attempts
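
A minimal sketch of a retry counter with a fallback, wrapping any guardrail function (the wrapper and its threshold are illustrative, not part of the PraisonAI API):

import logging
from typing import Any, Tuple

def with_retry_guard(inner_guardrail, max_attempts=3):
    # Tracks attempts across retries and logs each validation result
    state = {"attempts": 0}

    def guarded(task_output) -> Tuple[bool, Any]:
        state["attempts"] += 1
        is_valid, feedback = inner_guardrail(task_output)
        logging.info("Validation attempt %d: %s",
                     state["attempts"], "pass" if is_valid else feedback)
        if not is_valid and state["attempts"] >= max_attempts:
            # Fallback: stop retrying and accept the last output with a warning
            logging.warning("Max validation attempts reached; accepting last output")
            return True, task_output
        return is_valid, feedback

    return guarded

# Usage: guardrail=with_retry_guard(validate_word_count)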
