The PraisonAI CLI includes a safety system that requires human approval before executing potentially dangerous tools. This page explains how to control this behavior.
## Overview
Certain tools are marked with risk levels and require approval before execution:
| Risk Level | Tools | Default Behavior |
|------------|-------|------------------|
| **CRITICAL** | `execute_command`, `kill_process`, `execute_code` | Always prompts |
| **HIGH** | `write_file`, `delete_file`, `move_file`, `copy_file`, `execute_query` | Always prompts |
| **MEDIUM** | `evaluate`, `crawl`, `scrape_page` | Always prompts |
| **LOW** | (none by default) | Always prompts |
## Trust Mode (`--trust`)

Use the `--trust` flag to auto-approve all tool executions without prompting:

```bash
praisonai "hello world" --trust
```
### Usage

```bash
# Auto-approve all tools (use with caution!)
praisonai "run ls -la command" --trust
```

Output:

```
⚠️ Trust mode enabled - all tool executions will be auto-approved
Tools used: execute_command
[directory listing...]
```
The `--trust` flag bypasses all safety prompts. Only use it when you trust the AI’s actions completely, such as in controlled environments or for testing.
## Level-Based Approval (`--approve-level`)

Use `--approve-level` to auto-approve tools up to a specific risk level:

```bash
# Auto-approve low, medium, and high risk tools
# Still prompt for critical tools
praisonai "write to file and run command" --approve-level high
```
Available levels:

- `low` - Only auto-approve low risk tools
- `medium` - Auto-approve low and medium risk tools
- `high` - Auto-approve low, medium, and high risk tools
- `critical` - Auto-approve all tools (same as `--trust`)
### Examples

```bash
# Auto-approve only low risk tools
praisonai "task" --approve-level low

# Auto-approve up to medium risk
praisonai "task" --approve-level medium

# Auto-approve up to high risk (prompt for critical)
praisonai "task" --approve-level high

# Auto-approve everything (same as --trust)
praisonai "task" --approve-level critical
```
## Default Behavior (No Flags)

Without any flags, the CLI prompts for approval before executing any dangerous tool:

```bash
praisonai "run a shell command to list files"
```

Output:

```
╭─ 🔒 Tool Approval Required ──────────────────────────────────────────────────╮
│ Function: execute_command                                                     │
│ Risk Level: CRITICAL                                                          │
│ Arguments:                                                                    │
│   command: ls -la                                                             │
╰──────────────────────────────────────────────────────────────────────────────╯
Do you want to execute this critical risk tool? [y/n] (n):
```

Answering `y` executes the tool; pressing Enter accepts the default `n` and denies execution.
## Programmatic Control

You can also control approval behavior programmatically:

```python
from praisonaiagents.approval import (
    set_approval_callback,
    ApprovalDecision,
    remove_approval_requirement,
    add_approval_requirement
)

# Option 1: Auto-approve all tools
def auto_approve(function_name, arguments, risk_level):
    return ApprovalDecision(approved=True, reason="Auto-approved")

set_approval_callback(auto_approve)

# Option 2: Level-based approval
def level_approve(function_name, arguments, risk_level):
    levels = {"low": 1, "medium": 2, "high": 3, "critical": 4}
    # Unknown levels fall back to 4 (critical) so they are never auto-approved
    if levels.get(risk_level, 4) <= levels["high"]:
        return ApprovalDecision(approved=True)
    return ApprovalDecision(approved=False, reason="Too risky")

set_approval_callback(level_approve)

# Option 3: Remove the approval requirement for a specific tool
remove_approval_requirement("execute_command")

# Option 4: Add an approval requirement for a custom tool
add_approval_requirement("my_dangerous_tool", risk_level="high")
```
## Risk Level Reference

**Critical:**

- `execute_command` - Run shell commands
- `kill_process` - Terminate processes
- `execute_code` - Execute arbitrary code

**High:**

- `write_file` - Write to files
- `delete_file` - Delete files
- `move_file` - Move/rename files
- `copy_file` - Copy files
- `execute_query` - Run database queries

**Medium:**

- `evaluate` - Evaluate expressions
- `crawl` - Web crawling
- `scrape_page` - Web scraping
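Custom tools plug into the same registry via `add_approval_requirement`. The sketch below is hypothetical: `deploy_service` is not a PraisonAI tool, and it assumes approval requirements are matched against the tool's function name.

```python
# Hypothetical sketch: register a custom function tool as high risk.
from praisonaiagents import Agent
from praisonaiagents.approval import add_approval_requirement

def deploy_service(service: str) -> str:
    """Deploy the named service (illustrative stub, performs no real work)."""
    return f"Deployed {service}"

# With no flags or callbacks set, the approval prompt should appear
# before this tool runs (assumes matching by function name).
add_approval_requirement("deploy_service", risk_level="high")

agent = Agent(instructions="You manage deployments.", tools=[deploy_service])
agent.start("Deploy the billing service")
```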
## Best Practices

- **For development/testing:** Use `--trust` for faster iteration when you’re actively monitoring the AI’s actions.
- **For production scripts:** Use `--approve-level high` to allow file operations but still require approval for shell commands.
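One way to keep both modes in a single codebase is to pick the approval threshold from the environment at startup. This is a sketch under assumptions: the `APP_ENV` variable name and the chosen thresholds are illustrative, not PraisonAI conventions.

```python
# Sketch: choose an approval threshold from an environment variable.
import os
from praisonaiagents.approval import set_approval_callback, ApprovalDecision

LEVELS = {"low": 1, "medium": 2, "high": 3, "critical": 4}

# Stricter in production: only low-risk tools run unattended there.
THRESHOLD = "low" if os.getenv("APP_ENV") == "production" else "high"

def threshold_policy(function_name, arguments, risk_level):
    # Unknown risk levels default to critical so they are never auto-approved.
    if LEVELS.get(risk_level, 4) <= LEVELS[THRESHOLD]:
        return ApprovalDecision(approved=True, reason=f"Within '{THRESHOLD}' threshold")
    return ApprovalDecision(approved=False, reason=f"Above '{THRESHOLD}' threshold")

set_approval_callback(threshold_policy)
```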
The approval system is designed to prevent accidental destructive actions. Consider your use case carefully before bypassing it.