The PraisonAI CLI provides powerful commands and flags to interact with AI agents directly from your terminal.

Installation

pip install praisonai
export OPENAI_API_KEY=your_api_key

Quick Start

Direct Prompt Execution

# Basic usage - includes 5 built-in tools by default
praisonai "hello world"

With Specific Model

# With a specific model
praisonai "list files" --llm gpt-4o-mini

Verbose Mode

# Verbose mode - shows full agent panels and tool call details
praisonai "explain AI" -v

Basic Math Calculation

# Simple calculation
praisonai "What is 2+2?"

Other Examples

# Run agents from YAML
praisonai agents.yaml

# Interactive mode with slash commands
praisonai --interactive
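
An `agents.yaml` file can also be scaffolded rather than written by hand: the `--init` flag (covered under Initialization & Setup below) generates one from a topic. The sketch below assumes `--init` writes an `agents.yaml` into the current working directory; the topic string is a placeholder.
# Scaffold an agents.yaml for a topic, then run it
praisonai --init "research assistant"
praisonai agents.yaml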

Default Tools

The CLI now includes 5 built-in tools by default, giving agents the ability to interact with your filesystem and the web:

| Tool | Description |
|------|-------------|
| `read_file` | Read contents of files |
| `write_file` | Write content to files |
| `list_files` | List directory contents |
| `execute_command` | Run shell commands |
| `internet_search` | Search the web |
# Example: Agent uses list_files tool automatically
praisonai "List all Python files in this directory"
# Output: Tools used: list_files
# [file listing...]

# Example: Agent uses multiple tools
praisonai "Read README.md and summarize it"
# Output: Tools used: list_files, read_file
# [summary...]

Tool Call Tracking

When tools are used, the CLI displays which tools were called:
# Non-verbose mode (default) - clean output with tool summary
praisonai "List files here"
# Output:
# Tools used: list_files
# [results...]

# Verbose mode - full panels with tool call details
praisonai "List files here" -v
# Output:
# ╭─ Agent Info ─────────────────────────────────────────────────────╮
# │  👤 Agent: DirectAgent                                           │
# │  Tools: read_file, write_file, list_files, execute_command, ...  │
# ╰──────────────────────────────────────────────────────────────────╯
# ╭───────── Tool Call ──────────╮
# │ Calling function: list_files │
# ╰──────────────────────────────╯
# [results...]

Complete CLI Reference

Core Flags

| Flag | Description | Example |
|------|-------------|---------|
| `--framework` | Specify framework (crewai, autogen, praisonai) | `praisonai agents.yaml --framework crewai` |
| `--ui` | UI mode (chainlit, gradio) | `praisonai --ui chainlit` |
| `--llm` | Specify LLM model | `praisonai "task" --llm gpt-4o` |
| `--model` | Model name | `praisonai "task" --model gpt-4o` |
| `-v, --verbose` | Verbose output with full agent panels | `praisonai "task" -v` |
| `--save, -s` | Save output to file | `praisonai "task" --save` |

Interactive Mode

| Flag | Description | Example |
|------|-------------|---------|
| `--interactive, -i` | Start interactive mode with tools | `praisonai --interactive` |
| `--chat, --chat-mode` | Single prompt in interactive style | `praisonai "task" --chat` |

Tool Approval & Safety

| Flag | Description | Example |
|------|-------------|---------|
| `--trust` | Auto-approve all tool executions | `praisonai "task" --trust` |
| `--approve-level` | Auto-approve up to risk level (low/medium/high/critical) | `praisonai "task" --approve-level high` |
| `--autonomy` | Set autonomy mode (suggest, auto_edit, full_auto) | `praisonai "task" --autonomy auto_edit` |
| `--sandbox` | Enable sandbox execution (off, basic, strict) | `praisonai "task" --sandbox basic` |
| `--guardrail` | Validate output against criteria | `praisonai "task" --guardrail "criteria"` |
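
These safety flags are designed to be combined. The example below is an illustrative sketch (the task text and guardrail criteria are placeholders), and the exact interplay between approval levels, sandboxing, and guardrails should be checked with `praisonai --help` for your version.
# Auto-approve only low-risk tool calls and sandbox anything that executes
praisonai "Clean up temporary files" --approve-level low --sandbox basic

# Fully autonomous run, but validate the output against a guardrail
praisonai "Refactor utils.py" --trust --autonomy full_auto --guardrail "No destructive commands"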

Planning & Memory

| Flag | Description | Example |
|------|-------------|---------|
| `--planning` | Enable planning mode | `praisonai "task" --planning` |
| `--planning-tools` | Tools for planning phase | `praisonai "task" --planning --planning-tools tools.py` |
| `--planning-reasoning` | Enable chain-of-thought in planning | `praisonai "task" --planning --planning-reasoning` |
| `--auto-approve-plan` | Auto-approve generated plans | `praisonai "task" --planning --auto-approve-plan` |
| `--memory` | Enable file-based memory | `praisonai "task" --memory` |
| `--auto-memory` | Auto-extract memories | `praisonai "task" --auto-memory` |
| `--claude-memory` | Enable Claude Memory Tool (Anthropic only) | `praisonai "task" --llm anthropic/claude-3 --claude-memory` |
| `--user-id` | User ID for memory isolation | `praisonai "task" --memory --user-id user123` |
| `--auto-save` | Auto-save session with name | `praisonai "task" --auto-save mysession` |
| `--history` | Load history from last N sessions | `praisonai "task" --history 3` |
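
For longer-running work, the planning and memory flags compose. The sketch below assumes they can be freely combined; the session name, user ID, and history depth are illustrative.
# Plan first, keep per-user file-based memory, and save the session
praisonai "Draft a data migration plan" --planning --auto-approve-plan --memory --user-id alice --auto-save migration

# Follow-up run that loads context from the last 3 sessions
praisonai "Continue the migration plan" --memory --user-id alice --history 3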

Tools & Extensions

| Flag | Description | Example |
|------|-------------|---------|
| `--tools, -t` | Load additional tools | `praisonai "task" --tools my_tools.py` |
| `--mcp` | Use MCP server | `praisonai "task" --mcp "npx server"` |
| `--mcp-env` | MCP environment variables | `praisonai "task" --mcp "cmd" --mcp-env "KEY=val"` |
| `--handoff` | Agent delegation (comma-separated) | `praisonai "task" --handoff "a1,a2"` |
| `--final-agent` | Final agent for multi-agent tasks | `praisonai "task" --final-agent summarizer` |
| `--web-search` | Enable native web search | `praisonai "task" --web-search` |
| `--web-fetch` | Enable web fetch for URLs | `praisonai "task" --web-fetch` |
| `--research` | Run deep research on a topic | `praisonai research "topic"` |
| `--query-rewrite` | Rewrite query for better results | `praisonai "task" --query-rewrite` |
| `--rewrite-tools` | Tools for query rewriting | `praisonai "task" --query-rewrite --rewrite-tools tools.py` |
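
Custom tools, MCP servers, and agent handoff can be layered onto a single prompt. The sketch below uses placeholder file names, server commands, and agent names, and assumes these flags compose as documented in the table above.
# Load extra tools from a local file and attach an MCP server
praisonai "Summarize open issues" --tools my_tools.py --mcp "npx some-mcp-server" --mcp-env "API_KEY=value"

# Delegate across agents, then route the final answer through one agent
praisonai "Research and write a report" --handoff "researcher,writer" --final-agent summarizer --web-search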

Context & Prompts

| Flag | Description | Example |
|------|-------------|---------|
| `--fast-context` | Add code context from a path | `praisonai "task" --fast-context ./src` |
| `--file, -f` | Read input from file | `praisonai "task" --file input.txt` |
| `--url` | Repository URL for context | `praisonai "task" --url https://github.com/repo` |
| `--goal` | Goal for context engineering | `praisonai --url repo --goal "understand auth"` |
| `--auto-analyze` | Enable automatic analysis | `praisonai --url repo --auto-analyze` |
| `--expand-prompt` | Expand a short prompt into a detailed one | `praisonai "task" --expand-prompt` |
| `--expand-tools` | Tools for prompt expansion | `praisonai "task" --expand-prompt --expand-tools tools.py` |
| `--include-rules` | Include rules file | `praisonai "task" --include-rules rules.md` |
| `--max-tokens` | Maximum tokens for response | `praisonai "task" --max-tokens 4000` |
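
Context flags let you point the agent at a repository or local code before asking a question. In the sketch below, the repository URL, goal, paths, and file names are illustrative.
# Analyze a repository with a stated goal
praisonai --url https://github.com/user/repo --goal "understand auth" --auto-analyze

# Ask about local code with project rules and a token cap
praisonai "Explain the retry logic" --fast-context ./src --include-rules rules.md --max-tokens 4000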

Monitoring & Display

| Flag | Description | Example |
|------|-------------|---------|
| `--metrics` | Show token usage and costs | `praisonai "task" --metrics` |
| `--telemetry` | Enable usage monitoring | `praisonai "task" --telemetry` |
| `--flow-display` | Visual workflow tracking | `praisonai agents.yaml --flow-display` |
| `--todo` | Generate todo list from task | `praisonai "plan" --todo` |
| `--router` | Smart model selection | `praisonai "task" --router` |
| `--router-provider` | Provider for router | `praisonai "task" --router --router-provider openai` |
| `--image` | Process image file | `praisonai "describe" --image photo.png` |
| `--prompt-caching` | Enable prompt caching | `praisonai "task" --prompt-caching` |
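
Monitoring flags pair naturally with the router. The sketch below assumes `--metrics` reports usage for whichever model `--router` selects; the prompts are placeholders.
# Let the router pick a model and report token usage and cost
praisonai "Summarize this quarter's goals" --router --router-provider openai --metrics

# Run a YAML workflow with visual flow tracking and metrics
praisonai agents.yaml --flow-display --metrics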

Server & Deployment

| Flag | Description | Example |
|------|-------------|---------|
| `--serve` | Start API server for agents | `praisonai agents.yaml --serve` |
| `--port` | Server port (default: 8005) | `praisonai agents.yaml --serve --port 8080` |
| `--host` | Server host (default: 127.0.0.1) | `praisonai agents.yaml --serve --host 0.0.0.0` |
| `--deploy` | Deploy the application | `praisonai agents.yaml --deploy` |
| `--provider` | Deployment provider (gcp, aws, azure) | `praisonai --deploy --provider aws` |
| `--schedule` | Schedule deployment | `praisonai --deploy --schedule daily` |
| `--schedule-config` | Schedule configuration | `praisonai --deploy --schedule-config config.yaml` |
| `--max-retries` | Max retries for deployment | `praisonai --deploy --max-retries 3` |
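
Serving and deployment both work from the same YAML definition. The host, port, provider, and schedule values below are illustrative.
# Serve the agents as an API on all interfaces
praisonai agents.yaml --serve --host 0.0.0.0 --port 8080

# Deploy to AWS on a daily schedule with retries
praisonai agents.yaml --deploy --provider aws --schedule daily --max-retries 3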

Workflow & Integration

| Flag | Description | Example |
|------|-------------|---------|
| `--workflow` | Run inline workflow steps | `praisonai --workflow "step1:action1;step2:action2"` |
| `--workflow-var` | Workflow variables | `praisonai --workflow "..." --workflow-var "key=val"` |
| `--n8n` | Export workflow to n8n | `praisonai agents.yaml --n8n` |
| `--n8n-url` | n8n instance URL | `praisonai --n8n --n8n-url http://localhost:5678` |
| `--api-url` | PraisonAI API URL for n8n | `praisonai --n8n --api-url http://localhost:8005` |
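
An inline workflow follows the "name:action" steps separated by semicolons shown in the table. In this sketch the step names, actions, and variable are placeholders, and how variables are referenced inside steps is not shown here.
# Two-step inline workflow with a workflow variable
praisonai --workflow "research:Gather recent sources;write:Draft a summary" --workflow-var "topic=AI agents"

# Export a YAML-defined workflow to a local n8n instance
praisonai agents.yaml --n8n --n8n-url http://localhost:5678 --api-url http://localhost:8005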

Initialization & Setup

| Flag | Description | Example |
|------|-------------|---------|
| `--auto` | Enable auto mode | `praisonai --auto "create agents for task"` |
| `--init` | Initialize agents with a topic | `praisonai --init "research assistant"` |
| `--merge` | Merge with existing agents.yaml | `praisonai --auto "task" --merge` |
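
Auto mode and `--init` both scaffold agents. The sketch below assumes `--auto` generates and runs agents from the prompt, and that `--merge` folds the result into an existing `agents.yaml` rather than overwriting it; the prompts are placeholders.
# Generate and run agents for a task automatically
praisonai --auto "create agents to analyze customer feedback"

# Merge the generated agents into an existing agents.yaml
praisonai --auto "add a QA reviewer agent" --merge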

Model Providers

| Flag | Description | Example |
|------|-------------|---------|
| `--hf` | Hugging Face model | `praisonai "task" --hf model-name` |
| `--ollama` | Ollama model | `praisonai "task" --ollama llama2` |
| `--dataset` | Dataset for training | `praisonai --dataset data.json` |

Special Modes

| Flag | Description | Example |
|------|-------------|---------|
| `--realtime` | Start realtime voice interface | `praisonai --realtime` |
| `--call` | Start PraisonAI Call server | `praisonai --call` |
| `--public` | Expose server with ngrok (with `--call`) | `praisonai --call --public` |
| `--claudecode` | Enable Claude Code integration | `praisonai "task" --claudecode` |

Slash Commands (Interactive Mode)

| Command | Description |
|---------|-------------|
| `/help` | Show available commands |
| `/exit`, `/quit` | Exit interactive mode |
| `/clear` | Clear the screen |
| `/tools` | List available tools (5 built-in) |

Both direct prompts and interactive mode include 5 built-in tools by default: `read_file`, `write_file`, `list_files`, `execute_command`, `internet_search`. Tool usage is automatically tracked and displayed.

Standalone Commands

| Command | Description | Example |
|---------|-------------|---------|
| `chat` | Start web-based Chainlit chat UI | `praisonai chat` |
| `knowledge` | Manage knowledge base | `praisonai knowledge add doc.pdf` |
| `session` | Manage sessions | `praisonai session list` |
| `tools` | Manage tools | `praisonai tools list` |
| `todo` | Manage todos | `praisonai todo list` |
| `memory` | Manage memory | `praisonai memory show` |
| `rules` | Manage rules | `praisonai rules list` |
| `workflow` | Manage workflows | `praisonai workflow list` |
| `hooks` | Manage hooks | `praisonai hooks list` |
| `research` | Deep research | `praisonai research "query"` |
| `skills` | Manage agent skills | `praisonai skills list` |
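
Standalone commands can be chained into a short session; the document name below is a placeholder.
# Add a document to the knowledge base
praisonai knowledge add notes.pdf

# Inspect sessions, tools, and memory
praisonai session list
praisonai tools list
praisonai memory show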

Skills Commands

| Command | Description | Example |
|---------|-------------|---------|
| `skills list` | List available skills | `praisonai skills list` |
| `skills validate` | Validate a skill directory | `praisonai skills validate --path ./my-skill` |
| `skills create` | Create a new skill from template | `praisonai skills create --name my-skill` |
| `skills prompt` | Generate prompt XML for skills | `praisonai skills prompt --dirs ./skills` |

Global Options

# Verbose output
praisonai "task" -v

# Specify LLM model
praisonai "task" --llm openai/gpt-4o

# Save output to file
praisonai "task" --save

# Enable planning mode
praisonai "task" --planning

# Enable memory
praisonai "task" --memory

Combining Features

You can combine multiple CLI features for powerful workflows:
# Research with metrics and guardrails
praisonai "Analyze market trends" --metrics --guardrail "Include sources"

# Planning with router and flow display
praisonai "Complex analysis" --planning --router --flow-display

# Multi-agent with handoff and memory
praisonai "Research and write" --handoff "researcher,writer" --auto-memory
Use `praisonai --help` to see all available options and commands.