
The PraisonAI CLI includes a safety system that requires human approval before executing potentially dangerous tools. This page explains how to control this behavior.

Overview

Certain tools are marked with risk levels and require approval before execution:
| Risk Level | Tools | Default Behavior |
|---|---|---|
| CRITICAL | execute_command, kill_process, execute_code | Always prompts |
| HIGH | write_file, delete_file, move_file, copy_file | Always prompts |
| MEDIUM | evaluate, crawl, scrape_page | Always prompts |
| LOW | (none by default) | Always prompts |

YAML Configuration

Configure approval settings in your YAML configuration file using the approval: block:
approval:
  enabled: true                    # bool, default false
  backend: "console"               # console | slack | telegram | discord | webhook | http | agent | auto | none
  approve_all_tools: false         # bool, default false
  timeout: 60                      # float seconds, default null (no timeout); accepts "none"
  approve_level: "high"            # low | medium | high | critical, default null
  guardrails: "..."                # optional guardrail description

Shorthand Forms

approval: true                # → enabled, console backend
approval: false               # → disabled
approval: slack               # → enabled, slack backend
approval: null                # → disabled
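The shorthand forms expand into the full approval: block described above. A minimal sketch of that mapping (normalize_approval is an illustrative helper, not part of the library's API):

```python
def normalize_approval(value):
    """Expand a shorthand `approval:` value into a full config dict.

    Illustrative helper mirroring the documented shorthand forms;
    the library's internals may differ.
    """
    if value is True:                 # approval: true
        return {"enabled": True, "backend": "console"}
    if value is False or value is None:  # approval: false / approval: null
        return {"enabled": False}
    if isinstance(value, str):        # approval: slack
        return {"enabled": True, "backend": value}
    if isinstance(value, dict):       # full approval: block, used as-is
        return value
    raise ValueError(f"Unsupported approval value: {value!r}")
```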

Legacy YAML Aliases (Backward Compatible)

| Legacy key | New key |
|---|---|
| backend_name | backend |
| all_tools | approve_all_tools |
| approval_timeout | timeout |

Primary keys win when both are present.
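The alias handling can be pictured as a rename pass where the primary key wins when both are present (resolve_aliases is an illustrative sketch, not the library's implementation):

```python
# Legacy-to-primary key mapping from the table above.
LEGACY_ALIASES = {
    "backend_name": "backend",
    "all_tools": "approve_all_tools",
    "approval_timeout": "timeout",
}

def resolve_aliases(config):
    """Rename legacy keys to their primary equivalents.

    If both the legacy and the primary key are present, the primary
    key's value wins and the legacy value is discarded.
    """
    resolved = dict(config)
    for legacy, primary in LEGACY_ALIASES.items():
        if legacy in resolved:
            value = resolved.pop(legacy)
            resolved.setdefault(primary, value)  # keep primary if already set
    return resolved
```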

Validation

Unknown keys raise ValueError listing the allowed keys, so silent typos no longer pass through undetected.
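The check amounts to comparing the config's keys against an allowlist; a minimal sketch (the helper name and exact error wording are illustrative, but the allowed keys come from the YAML reference above):

```python
# Keys from the approval: block documented above.
ALLOWED_KEYS = {"enabled", "backend", "approve_all_tools",
                "timeout", "approve_level", "guardrails"}

def validate_approval_keys(config):
    """Raise ValueError for any key outside the documented allowlist."""
    unknown = set(config) - ALLOWED_KEYS
    if unknown:
        raise ValueError(
            f"Unknown approval keys: {sorted(unknown)}; "
            f"allowed keys: {sorted(ALLOWED_KEYS)}"
        )
```

A typo such as aprove_level: "high" therefore fails loudly instead of being silently ignored.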

CLI ⇄ YAML ⇄ Python Mapping

All three surfaces use the same unified configuration:
| CLI flag | YAML key | Python field |
|---|---|---|
| --trust | N/A (use backend: auto) | backend="auto", enabled=True |
| --approval <backend> | backend: <backend> | backend="<backend>", enabled=True |
| --approve-all-tools | approve_all_tools: true | approve_all_tools=True |
| --approval-timeout <sec> | timeout: <sec> | timeout=<sec> |
| --approve-level <level> | approve_level: <level> | approve_level="<level>" |
| --guardrail "<text>" | guardrails: "<text>" | guardrails="<text>" |
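Since all three surfaces share one configuration, it can be pictured as a single config object; the dataclass below is an illustrative stand-in (field names follow the "Python field" column, defaults follow the YAML reference above):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApprovalConfig:
    # Illustrative stand-in for the unified config object; not the
    # library's actual class. Defaults mirror the YAML reference.
    enabled: bool = False
    backend: str = "console"
    approve_all_tools: bool = False
    timeout: Optional[float] = None
    approve_level: Optional[str] = None
    guardrails: Optional[str] = None

# --trust on the CLI corresponds to:
trust = ApprovalConfig(enabled=True, backend="auto")
```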

Auto-Approve All Tools (--trust)

Use the --trust flag to auto-approve all tool executions without prompting:
praisonai "hello world" --trust

Usage

# Auto-approve all tools (use with caution!)
praisonai "run ls -la command" --trust
Output:
⚠️  Trust mode enabled - all tool executions will be auto-approved
Tools used: execute_command
[directory listing...]
The --trust flag bypasses all safety prompts. Only use this when you trust the AI’s actions completely, such as in controlled environments or for testing.

Level-Based Approval (--approve-level)

Use --approve-level to auto-approve tools up to a specific risk level:
# Auto-approve low, medium, and high risk tools
# Still prompt for critical tools
praisonai "write to file and run command" --approve-level high
Available levels:
  • low - Only auto-approve low risk tools
  • medium - Auto-approve low and medium risk tools
  • high - Auto-approve low, medium, and high risk tools
  • critical - Auto-approve all tools (same as --trust)

Examples

# Auto-approve only low risk tools
praisonai "task" --approve-level low

# Auto-approve up to medium risk
praisonai "task" --approve-level medium

# Auto-approve up to high risk (prompt for critical)
praisonai "task" --approve-level high

# Auto-approve everything (same as --trust)
praisonai "task" --approve-level critical

Approval Backend (--approval)

Use --approval to route tool approvals to a specific backend — Slack, Telegram, Discord, a webhook, or more:
# Route approvals to Slack
praisonai "deploy to production" --approval slack

# Route approvals to Telegram
praisonai "delete old logs" --approval telegram

# Auto-approve everything (same as --trust)
praisonai "run tests" --approval auto

# Interactive console prompt (default)
praisonai "clean up files" --approval console

Available Backends

| Value | Backend | Required Env Vars |
|---|---|---|
| console | Interactive terminal prompt (default) | — |
| slack | Slack Block Kit message + reply polling | SLACK_BOT_TOKEN, SLACK_CHANNEL |
| telegram | Telegram inline keyboard + polling | TELEGRAM_BOT_TOKEN, TELEGRAM_CHAT_ID |
| discord | Discord embed + text reply polling | DISCORD_BOT_TOKEN, DISCORD_CHANNEL_ID |
| webhook | POST to HTTP endpoint + poll for decision | APPROVAL_WEBHOOK_URL |
| http | Local web dashboard (browser-based) | — |
| agent | Delegate to an AI reviewer agent | — |
| auto | Auto-approve all (same as --trust) | — |
| none | Disable approval entirely | — |
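The webhook backend's POST-then-poll flow could be sketched as follows. The payload shape, response fields, and polling endpoint here are assumptions for illustration only; see Approval Protocol for the actual contract.

```python
import json
import os
import time
import urllib.request

def build_payload(function_name, arguments, risk_level):
    # Payload shape is an illustrative assumption, not the documented contract.
    return {"function": function_name, "arguments": arguments, "risk_level": risk_level}

def request_webhook_approval(function_name, arguments, risk_level,
                             poll_interval=5.0, timeout=60.0):
    """POST an approval request to APPROVAL_WEBHOOK_URL, then poll for a decision."""
    url = os.environ["APPROVAL_WEBHOOK_URL"]
    data = json.dumps(build_payload(function_name, arguments, risk_level)).encode()
    req = urllib.request.Request(url, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        request_id = json.load(resp)["id"]  # assumed response field

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with urllib.request.urlopen(f"{url}/{request_id}") as resp:  # assumed poll endpoint
            decision = json.load(resp)
        if decision.get("status") in ("approved", "denied"):
            return decision["status"] == "approved"
        time.sleep(poll_interval)
    return False  # treat a timeout as a denial, mirroring the console default of "n"
```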

Works With All CLI Commands

# Direct prompt
praisonai "task" --approval slack

# Run command
praisonai run "task" --approval telegram

# Chat / TUI
praisonai chat --approval discord
For full configuration of each backend (timeouts, polling intervals, custom parameters), see Approval Protocol.

Default Behavior (No Flags)

Without any flags, the CLI will prompt for approval on all dangerous tools:
praisonai "run a shell command to list files"
Output:
╭─ 🔒 Tool Approval Required ──────────────────────────────────────────────────╮
│ Function: execute_command                                                    │
│ Risk Level: CRITICAL                                                         │
│ Arguments:                                                                   │
│   command: ls -la                                                            │
╰──────────────────────────────────────────────────────────────────────────────╯
Do you want to execute this critical risk tool? [y/n] (n):

Programmatic Control

You can also control approval behavior programmatically:
from praisonaiagents.approval import (
    set_approval_callback,
    ApprovalDecision,
    remove_approval_requirement,
    add_approval_requirement
)

# Option 1: Auto-approve all tools
def auto_approve(function_name, arguments, risk_level):
    return ApprovalDecision(approved=True, reason="Auto-approved")

set_approval_callback(auto_approve)

# Option 2: Level-based approval
def level_approve(function_name, arguments, risk_level):
    levels = {"low": 1, "medium": 2, "high": 3, "critical": 4}
    if levels.get(risk_level, 4) <= levels["high"]:
        return ApprovalDecision(approved=True)
    return ApprovalDecision(approved=False, reason="Too risky")

set_approval_callback(level_approve)

# Option 3: Remove approval requirement for specific tool
remove_approval_requirement("execute_command")

# Option 4: Add approval requirement for custom tool
add_approval_requirement("my_dangerous_tool", risk_level="high")
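A callback passed to set_approval_callback can also inspect the arguments, not just the risk level. The sketch below blocks shell commands matching an illustrative pattern list; ApprovalDecision is redefined locally as a self-contained stand-in for the class imported above, so the example runs without the package installed.

```python
from dataclasses import dataclass

@dataclass
class ApprovalDecision:
    # Self-contained stand-in for praisonaiagents.approval.ApprovalDecision,
    # so this sketch runs on its own; in real code, import it as shown above.
    approved: bool
    reason: str = ""

# Illustrative blocklist; extend for your environment.
DANGEROUS_PATTERNS = ("rm -rf", "mkfs", "dd if=")

def argument_aware_approve(function_name, arguments, risk_level):
    """Deny blocklisted shell commands, auto-approve non-critical tools,
    and send everything else back for manual review."""
    command = str(arguments.get("command", ""))
    if any(pat in command for pat in DANGEROUS_PATTERNS):
        return ApprovalDecision(approved=False, reason=f"Blocked pattern in: {command}")
    if risk_level != "critical":
        return ApprovalDecision(approved=True, reason="Below critical risk")
    return ApprovalDecision(approved=False, reason="Critical tools need manual review")

# In real code: set_approval_callback(argument_aware_approve)
```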

Risk Level Reference

Critical Risk Tools

  • execute_command - Run shell commands
  • kill_process - Terminate processes
  • execute_code - Execute arbitrary code

High Risk Tools

  • write_file - Write to files
  • delete_file - Delete files
  • move_file - Move/rename files
  • copy_file - Copy files
  • execute_query - Database queries

Medium Risk Tools

  • evaluate - Evaluate expressions
  • crawl - Web crawling
  • scrape_page - Web scraping

Best Practices

For development/testing: Use --trust for faster iteration when you’re actively monitoring the AI’s actions.
For production scripts: Use --approve-level high to allow file operations but still require approval for shell commands.
The approval system is designed to prevent accidental destructive actions. Consider your use case carefully before bypassing it.

Approval Protocol

All approval backends (Slack, Telegram, Discord, Webhook, HTTP, Agent)

Sandbox Execution

Secure isolated command execution

Autonomy Modes

Control AI autonomy levels

Tool Tracking

Monitor tool usage