The PraisonAI CLI includes a safety system that requires human approval before executing potentially dangerous tools. This page explains how to control this behavior.
## Overview

Certain tools are marked with risk levels and require approval before execution:

| Risk Level | Tools | Default Behavior |
|---|---|---|
| CRITICAL | `execute_command`, `kill_process`, `execute_code` | Always prompts |
| HIGH | `write_file`, `delete_file`, `move_file`, `copy_file` | Always prompts |
| MEDIUM | `evaluate`, `crawl`, `scrape_page` | Always prompts |
| LOW | (none by default) | Always prompts |
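The table above implies a simple tool-to-level lookup. As an illustrative sketch (pure Python, independent of the `praisonaiagents` package; the function name is hypothetical), the mapping might look like:

```python
# Illustrative only: the tool-to-risk-level mapping from the table above.
RISK_LEVELS = {
    "execute_command": "critical",
    "kill_process": "critical",
    "execute_code": "critical",
    "write_file": "high",
    "delete_file": "high",
    "move_file": "high",
    "copy_file": "high",
    "evaluate": "medium",
    "crawl": "medium",
    "scrape_page": "medium",
}

def risk_level(tool_name: str) -> str:
    """Return a tool's risk level, defaulting to 'low' for unlisted tools."""
    return RISK_LEVELS.get(tool_name, "low")
```

Tools not present in the table fall through to LOW, which still prompts by default.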
## Trust Mode (`--trust`)

Use the `--trust` flag to auto-approve all tool executions without prompting:

```bash
praisonai "hello world" --trust
```
### Usage

```bash
# Auto-approve all tools (use with caution!)
praisonai "run ls -la command" --trust
```
Output:

```
⚠️ Trust mode enabled - all tool executions will be auto-approved
Tools used: execute_command
[directory listing...]
```
The `--trust` flag bypasses all safety prompts. Only use this when you trust the AI’s actions completely, such as in controlled environments or for testing.
## Level-Based Approval (`--approve-level`)

Use `--approve-level` to auto-approve tools up to a specific risk level:
```bash
# Auto-approve low, medium, and high risk tools
# Still prompt for critical tools
praisonai "write to file and run command" --approve-level high
```
Available levels:

- `low` - Only auto-approve low risk tools
- `medium` - Auto-approve low and medium risk tools
- `high` - Auto-approve low, medium, and high risk tools
- `critical` - Auto-approve all tools (same as `--trust`)
### Examples

```bash
# Auto-approve only low risk tools
praisonai "task" --approve-level low

# Auto-approve up to medium risk
praisonai "task" --approve-level medium

# Auto-approve up to high risk (prompt for critical)
praisonai "task" --approve-level high

# Auto-approve everything (same as --trust)
praisonai "task" --approve-level critical
```
## Approval Backend (`--approval`)

Use `--approval` to route tool approvals to a specific backend, such as Slack, Telegram, Discord, or a webhook:
```bash
# Route approvals to Slack
praisonai "deploy to production" --approval slack

# Route approvals to Telegram
praisonai "delete old logs" --approval telegram

# Auto-approve everything (same as --trust)
praisonai "run tests" --approval auto

# Interactive console prompt (default)
praisonai "clean up files" --approval console
```
### Available Backends

| Value | Backend | Required Env Vars |
|---|---|---|
| `console` | Interactive terminal prompt (default) | — |
| `slack` | Slack Block Kit message + reply polling | `SLACK_BOT_TOKEN`, `SLACK_CHANNEL` |
| `telegram` | Telegram inline keyboard + polling | `TELEGRAM_BOT_TOKEN`, `TELEGRAM_CHAT_ID` |
| `discord` | Discord embed + text reply polling | `DISCORD_BOT_TOKEN`, `DISCORD_CHANNEL_ID` |
| `webhook` | POST to HTTP endpoint + poll for decision | `APPROVAL_WEBHOOK_URL` |
| `http` | Local web dashboard (browser-based) | — |
| `agent` | Delegate to an AI reviewer agent | — |
| `auto` | Auto-approve all (same as `--trust`) | — |
| `none` | Disable approval entirely | — |
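Backends that talk to an external service need their environment variables set before launch. As a rough illustration (this helper is hypothetical, not part of the CLI), the required-variable column above could be checked like this:

```python
import os

# Required env vars per backend, taken from the table above.
REQUIRED_ENV = {
    "console": [],
    "slack": ["SLACK_BOT_TOKEN", "SLACK_CHANNEL"],
    "telegram": ["TELEGRAM_BOT_TOKEN", "TELEGRAM_CHAT_ID"],
    "discord": ["DISCORD_BOT_TOKEN", "DISCORD_CHANNEL_ID"],
    "webhook": ["APPROVAL_WEBHOOK_URL"],
    "http": [],
    "agent": [],
    "auto": [],
    "none": [],
}

def missing_env(backend: str) -> list:
    """Return the env vars the chosen backend still needs."""
    return [v for v in REQUIRED_ENV.get(backend, []) if not os.environ.get(v)]
```

A launcher script could call `missing_env("slack")` and refuse to start until the list comes back empty.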
### Works With All CLI Commands

```bash
# Direct prompt
praisonai "task" --approval slack

# Run command
praisonai run "task" --approval telegram

# Chat / TUI
praisonai chat --approval discord
```
For full configuration of each backend (timeouts, polling intervals, custom parameters), see Approval Protocol.
## Default Behavior (No Flags)

Without any flags, the CLI prompts for approval on all dangerous tools:

```bash
praisonai "run a shell command to list files"
```
Output:

```
╭─ 🔒 Tool Approval Required ──────────────────────────────────────────────────╮
│ Function: execute_command                                                    │
│ Risk Level: CRITICAL                                                         │
│ Arguments:                                                                   │
│   command: ls -la                                                            │
╰──────────────────────────────────────────────────────────────────────────────╯
Do you want to execute this critical risk tool? [y/n] (n):
```
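The prompt defaults to deny: pressing Enter (the `(n)` default) rejects the tool. A minimal sketch of that decision rule (the function name is hypothetical, not part of the CLI):

```python
def parse_approval(answer: str) -> bool:
    """Interpret a console reply. Anything other than an explicit
    'y'/'yes' (including an empty reply, the default) denies execution."""
    return answer.strip().lower() in ("y", "yes")
```

This fail-closed design means an accidental Enter never runs a dangerous tool.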
## Programmatic Control

You can also control approval behavior programmatically:
```python
from praisonaiagents.approval import (
    set_approval_callback,
    ApprovalDecision,
    remove_approval_requirement,
    add_approval_requirement,
)

# Option 1: Auto-approve all tools
def auto_approve(function_name, arguments, risk_level):
    return ApprovalDecision(approved=True, reason="Auto-approved")

set_approval_callback(auto_approve)

# Option 2: Level-based approval
def level_approve(function_name, arguments, risk_level):
    levels = {"low": 1, "medium": 2, "high": 3, "critical": 4}
    if levels.get(risk_level, 4) <= levels["high"]:
        return ApprovalDecision(approved=True)
    return ApprovalDecision(approved=False, reason="Too risky")

set_approval_callback(level_approve)

# Option 3: Remove approval requirement for specific tool
remove_approval_requirement("execute_command")

# Option 4: Add approval requirement for custom tool
add_approval_requirement("my_dangerous_tool", risk_level="high")
```
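Callbacks can also encode richer policies than a single risk threshold, for example combining a level cutoff with a per-tool allowlist. The sketch below is illustrative (the policy and allowlist are hypothetical, and `ApprovalDecision` is modeled as a plain class so the snippet runs without the package installed):

```python
# Stand-in for praisonaiagents.approval.ApprovalDecision, so this
# sketch is self-contained.
class ApprovalDecision:
    def __init__(self, approved, reason=""):
        self.approved = approved
        self.reason = reason

LEVELS = {"low": 1, "medium": 2, "high": 3, "critical": 4}
ALLOWLIST = {"write_file", "copy_file"}  # hypothetical trusted tools

def policy(function_name, arguments, risk_level):
    """Auto-approve anything at or below medium risk, plus a fixed
    allowlist of higher-risk tools; deny everything else."""
    if LEVELS.get(risk_level, 4) <= LEVELS["medium"]:
        return ApprovalDecision(approved=True, reason="Low enough risk")
    if function_name in ALLOWLIST:
        return ApprovalDecision(approved=True, reason="Allowlisted tool")
    return ApprovalDecision(approved=False, reason="Needs human review")
```

Registered via `set_approval_callback(policy)`, such a callback would let routine file writes through while still stopping shell commands.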
## Risk Level Reference

- `execute_command` - Run shell commands
- `kill_process` - Terminate processes
- `execute_code` - Execute arbitrary code
- `write_file` - Write to files
- `delete_file` - Delete files
- `move_file` - Move/rename files
- `copy_file` - Copy files
- `execute_query` - Database queries
- `evaluate` - Evaluate expressions
- `crawl` - Web crawling
- `scrape_page` - Web scraping
## Best Practices

- For development/testing: Use `--trust` for faster iteration when you’re actively monitoring the AI’s actions.
- For production scripts: Use `--approve-level high` to allow file operations but still require approval for shell commands.
The approval system is designed to prevent accidental destructive actions. Consider your use case carefully before bypassing it.
- Approval Protocol - All approval backends (Slack, Telegram, Discord, Webhook, HTTP, Agent)
- Sandbox Execution - Secure isolated command execution
- Autonomy Modes - Control AI autonomy levels
- Tool Tracking - Monitor tool usage