Human Approval System

The approval module provides human-in-the-loop approval for dangerous tool operations, giving you safety and control over high-risk agent actions.

Overview

The approval system allows you to:
  • Require human approval before executing dangerous operations
  • Categorize tools by risk level (critical, high, medium, low)
  • Implement custom approval logic and policies
  • Modify tool arguments before execution
  • Track approved operations to avoid duplicate prompts

Quick Start

from praisonaiagents import Agent
from praisonaiagents.tools import ShellTool

# Tools automatically require approval based on risk
agent = Agent(
    name="System Admin",
    tools=[ShellTool()]  # Shell commands are high-risk
)

# When agent tries to execute a command
agent.chat("Delete all temporary files in /tmp")
# User will be prompted for approval before execution

Risk Levels

Critical Risk

Operations that could cause severe damage:
  • execute_command - Arbitrary command execution
  • execute_code - Code execution
  • kill_process - Process termination

High Risk

Operations that modify system state:
  • write_file - File creation/modification
  • delete_file - File deletion
  • move_file - File relocation
  • copy_file - File duplication
  • execute_query - Database modifications

Medium Risk

Operations with moderate impact:
  • evaluate - Expression evaluation
  • crawl - Web crawling
  • scrape_page - Web scraping

Low Risk

Operations with minimal impact; these are good candidates for auto-approval in custom policies (see Automated Policies below).
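
If you are unsure which category a given tool falls into, you can query it programmatically. Below is a minimal sketch using the is_approval_required and get_risk_level helpers covered under Checking Requirements; the tool names are taken from the lists above:

from praisonaiagents.agents.approval import is_approval_required, get_risk_level

# Check a few tools from the lists above
for tool in ["execute_command", "write_file", "scrape_page", "read_file"]:
    if is_approval_required(tool):
        print(f"{tool}: approval required ({get_risk_level(tool)} risk)")
    else:
        print(f"{tool}: no approval required")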

Using the Approval Decorator

Basic Usage

from praisonaiagents.agents.approval import require_approval

@require_approval("high")
def delete_user_data(user_id: str):
    """Delete all data for a user - requires approval"""
    # Dangerous operation
    pass

Async Functions

@require_approval("critical")
async def shutdown_server():
    """Shutdown the server - requires approval"""
    await perform_shutdown()

Custom Approval Callbacks

Console Approval (Default)

The default callback shows a formatted prompt in the console:
from praisonaiagents.agents.approval import console_approval_callback, ApprovalDecision

# console_approval_callback is used by default, but you can supply your own:
def my_console_approval(tool_name, risk_level, args):
    # Custom console formatting
    response = input(f"Allow {tool_name}? (y/n): ")
    return ApprovalDecision(
        approved=response.lower() == 'y',
        reason="User input"
    )

Automated Policies

from praisonaiagents.agents.approval import ApprovalDecision, console_approval_callback

def policy_based_approval(tool_name, risk_level, args):
    # Auto-approve low risk operations
    if risk_level == "low":
        return ApprovalDecision(approved=True, reason="Auto-approved: low risk")
    
    # Block certain patterns
    if "rm -rf /" in str(args):
        return ApprovalDecision(approved=False, reason="Dangerous command pattern")
    
    # Modify arguments
    if tool_name == "write_file" and args.get("path", "").startswith("/etc"):
        return ApprovalDecision(
            approved=True,
            modified_args={**args, "path": f"/tmp{args['path']}"},
            reason="Redirected to safe location"
        )
    
    # Delegate to human for others
    return console_approval_callback(tool_name, risk_level, args)

GUI Integration

import tkinter as tk
from tkinter import messagebox
from praisonaiagents.agents.approval import ApprovalDecision

def gui_approval_callback(tool_name, risk_level, args):
    root = tk.Tk()
    root.withdraw()
    
    message = f"Approve {tool_name} ({risk_level} risk)?\nArgs: {args}"
    approved = messagebox.askyesno("Approval Required", message)
    
    root.destroy()
    
    return ApprovalDecision(
        approved=approved,
        reason="GUI approval"
    )

Managing Approval Requirements

Adding Requirements

from praisonaiagents.agents.approval import add_approval_requirement

# Add custom tool to approval list
add_approval_requirement("custom_dangerous_tool", "high")

# Add with specific module
add_approval_requirement("api_delete", "critical", module_name="my_api_tools")

Removing Requirements

from praisonaiagents.agents.approval import remove_approval_requirement

# Remove approval requirement
remove_approval_requirement("read_file")  # Reading files no longer requires approval

Checking Requirements

from praisonaiagents.agents.approval import is_approval_required, get_risk_level

if is_approval_required("delete_file"):
    risk = get_risk_level("delete_file")
    print(f"delete_file requires approval (risk: {risk})")

Integration with Agents

Setting Custom Callbacks

from praisonaiagents import Agent, register_approval_callback
from praisonaiagents.tools import ShellTool, FileTools

# Register globally
register_approval_callback(policy_based_approval)

# Now all agents will use this callback
agent = Agent(
    name="Admin Bot",
    tools=[ShellTool(), FileTools()]
)

Context Preservation

The approval system tracks approved operations within the same context:
# First call - requires approval
agent.chat("Run ls -la")  # Prompts for approval

# Within same context - already approved
agent.chat("Run ls -la again")  # No prompt, uses previous approval

Complete Example

from praisonaiagents import Agent, register_approval_callback
from praisonaiagents.tools import ShellTool, FileTools
from praisonaiagents.agents.approval import ApprovalDecision
import logging

# Set up logging
logging.basicConfig(level=logging.INFO)

# Custom approval callback with logging
def logged_approval(tool_name, risk_level, args):
    logging.info(f"Approval requested: {tool_name} ({risk_level})")
    
    # Auto-approve read operations
    if tool_name.startswith("read_"):
        return ApprovalDecision(approved=True, reason="Read operations allowed")
    
    # Require human approval for others
    print(f"\n🛡️ Approval Required")
    print(f"Tool: {tool_name}")
    print(f"Risk: {risk_level}")
    print(f"Arguments: {args}")
    
    response = input("Approve? (y/n/m to modify): ").lower()
    
    if response == 'y':
        return ApprovalDecision(approved=True, reason="User approved")
    elif response == 'm':
        # Allow argument modification
        new_args = {}
        for key, value in args.items():
            new_val = input(f"{key} [{value}]: ") or value
            new_args[key] = new_val
        return ApprovalDecision(
            approved=True,
            modified_args=new_args,
            reason="User approved with modifications"
        )
    else:
        return ApprovalDecision(approved=False, reason="User denied")

# Register the callback
register_approval_callback(logged_approval)

# Create agent with dangerous tools
agent = Agent(
    name="System Administrator",
    role="Server management",
    goal="Maintain system health",
    tools=[ShellTool(), FileTools()]
)

# Operations will require approval
response = agent.chat("Check disk usage and clean up if needed")

Best Practices

  1. Risk Assessment - Carefully categorize tools by actual risk level
  2. User Experience - Provide clear information about what will be executed
  3. Logging - Always log approval decisions for audit trails
  4. Timeout Handling - Consider adding timeouts for approval prompts (see the sketch after this list)
  5. Batch Operations - Group related approvals to reduce prompt fatigue
  6. Emergency Override - Implement emergency approval bypass for critical situations
  7. Testing - Test approval flows thoroughly before production deployment
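
For timeout handling (item 4), one option is to run the interactive prompt in a worker thread and deny by default if no answer arrives in time. A sketch, with an illustrative 60-second timeout:

import threading
from praisonaiagents.agents.approval import ApprovalDecision

def timed_approval(tool_name, risk_level, args, timeout=60):
    answer = {}

    def ask():
        answer["response"] = input(f"Approve {tool_name} ({risk_level} risk)? (y/n): ")

    # Daemon thread so an unanswered prompt does not block interpreter exit
    worker = threading.Thread(target=ask, daemon=True)
    worker.start()
    worker.join(timeout)

    if "response" not in answer:
        # No answer within the timeout: deny by default
        return ApprovalDecision(approved=False, reason=f"Approval timed out after {timeout}s")
    return ApprovalDecision(
        approved=answer["response"].strip().lower() == "y",
        reason="User input"
    )

Register it with register_approval_callback just like any other callback. Note that an unanswered prompt leaves a lingering input() thread behind, which is usually acceptable for console use but worth keeping in mind.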