PraisonAI CLI provides a rich interactive terminal user interface (TUI) for seamless AI-assisted coding sessions. Inspired by Gemini CLI, Codex CLI, and Claude Code, it offers streaming responses, built-in tools, and a clean terminal experience.
For testing and scripting, use the `--chat` flag to run a single prompt with interactive-style output:
```shell
# Single prompt with tools, streaming output, no boxes
praisonai "list files in current folder" --chat

# Test web search
praisonai "search the web for AI news" --chat

# Test file operations
praisonai "read the file README.md" --chat
```
The `--chat` flag is different from the `praisonai chat` command, which starts a web-based Chainlit UI; the command form is retained as an alias for backward compatibility.
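Because `--chat` runs a single prompt and exits, it composes well with scripts. A minimal Python sketch (assuming `praisonai` is installed and on your `PATH`; `run_prompt` is a hypothetical helper, not part of the CLI):

```python
import subprocess

def run_prompt(prompt: str) -> str:
    """Run a single prompt through the PraisonAI CLI and return its output."""
    result = subprocess.run(
        ["praisonai", prompt, "--chat"],
        capture_output=True,
        text=True,
        check=True,  # raise if the CLI exits non-zero
    )
    return result.stdout

# Example (requires a configured PraisonAI install):
# output = run_prompt("list files in current folder")
```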
Toggle debug logging to ~/.praisonai/async_tui_debug.log
| Command | Description |
| --- | --- |
| `/plan <task>` | Create a step-by-step plan for a task |
| `/handoff <type> <task>` | Delegate to specialized agent (code/research/review/docs) |
| `/compact` | Compress conversation history |
| `/undo` | Undo last response |
| `/queue` | Show queued messages |
| `/queue clear` | Clear message queue |
| `/files` | List workspace files for @ mentions |
```
❯ /help
Commands:
  /help          - Show this help
  /exit          - Exit interactive mode
  /clear         - Clear screen
  /tools         - List available tools
  /profile       - Toggle profiling (show timing breakdown)
  /model [name]  - Show or change current model
  /stats         - Show session statistics (tokens, cost)
  /compact       - Compress conversation history
  /undo          - Undo last response
  /queue         - Show queued messages
  /queue clear   - Clear message queue

@ Mentions:
  @file.txt - Include file content in prompt
  @src/     - Include directory listing

Features:
  • File operations (read, write, list)
  • Shell command execution
  • Web search
  • Context compression for long sessions
  • Queue messages while agent is processing
```
Include file content directly in your prompts using @ syntax, inspired by Gemini CLI and Claude Code:
```
# Include a file in your prompt
❯ what does @README.md say about installation?
📄 Included: README.md (2,345 chars)
The README.md file explains that installation can be done via pip...

# Include multiple files
❯ compare @file1.py and @file2.py
📄 Included: file1.py (500 chars)
📄 Included: file2.py (450 chars)
Here are the key differences between the two files...

# Include directory listing
❯ what files are in @src/
📁 Listed: src/ (15 items)
The src/ directory contains the following files...
```
- Files larger than 50KB are automatically truncated
- Hidden files and common ignore patterns (`node_modules`, `__pycache__`) are filtered from directory listings
- Paths can be relative or absolute
- Use `~` for the home directory (e.g., `@~/Documents/file.txt`)
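The rules above can be sketched as a small resolver. This is a hypothetical helper illustrating the behavior, not the actual PraisonAI implementation:

```python
from pathlib import Path

MAX_BYTES = 50 * 1024  # files larger than 50KB are truncated
IGNORED = {"node_modules", "__pycache__", ".git"}

def resolve_mention(mention: str) -> str:
    """Resolve an @ mention into text to include in the prompt."""
    path = Path(mention.lstrip("@")).expanduser()  # supports @~/Documents/file.txt
    if path.is_dir():
        # Directory mention: return a filtered listing
        entries = [p.name for p in path.iterdir()
                   if not p.name.startswith(".") and p.name not in IGNORED]
        return f"Listed: {path} ({len(entries)} items)\n" + "\n".join(sorted(entries))
    # File mention: return (possibly truncated) contents
    text = path.read_text(errors="replace")
    if len(text) > MAX_BYTES:
        text = text[:MAX_BYTES] + "\n[truncated]"
    return text
```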
Made a mistake? Use /undo to remove the last conversation turn:
```
❯ write a function to sort a list
[AI generates sorting function]

❯ /undo
✓ Undone last turn
Removed: write a function to sort a list...

❯ /stats
Session Statistics
  ...
  History turns: 8   # Reduced from 10
```
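Conceptually, `/undo` drops the trailing user/assistant pair from the conversation history, which is why the turn count decreases by two. A minimal sketch of that idea (hypothetical, not the real internals):

```python
def undo_last_turn(history: list[dict]) -> list[dict]:
    """Remove the trailing assistant response and the user message that prompted it."""
    trimmed = list(history)
    if trimmed and trimmed[-1]["role"] == "assistant":
        trimmed.pop()
    if trimmed and trimmed[-1]["role"] == "user":
        trimmed.pop()
    return trimmed

history = [
    {"role": "user", "content": "explain quicksort"},
    {"role": "assistant", "content": "Quicksort works by..."},
    {"role": "user", "content": "write a function to sort a list"},
    {"role": "assistant", "content": "def sort_list(xs): ..."},
]
# Undoing keeps only the first turn
assert undo_last_turn(history) == history[:2]
```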
Queue messages while the AI agent is processing. Type new prompts and they’ll be executed in order as each task completes.
```
# While agent is processing, type more messages
❯ Create a Python function to calculate fibonacci
[Agent processing...]
❯ Add docstrings to the function
❯ Create unit tests

# Check the queue
❯ /queue
⏳ Processing...
Queued Messages (2):
  0. ↳ Add docstrings to the function
  1. ↳ Create unit tests
Use /queue clear to clear, /queue remove N to remove
```
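Queued prompts behave like a simple FIFO that the agent drains between tasks. A standalone sketch using Python's `queue` module (an illustration of the behavior, not the actual implementation):

```python
import queue

pending: "queue.Queue[str]" = queue.Queue()

def enqueue(prompt: str) -> None:
    """Called when the user types while the agent is busy."""
    pending.put(prompt)

def drain(process) -> list:
    """After the current task completes, run queued prompts in FIFO order."""
    results = []
    while not pending.empty():
        results.append(process(pending.get()))
    return results

enqueue("Add docstrings to the function")
enqueue("Create unit tests")
# Each queued prompt is handed to the agent in the order it was typed
order = drain(lambda p: p)
```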
```python
from pathlib import Path

from praisonai.cli.features.interactive_tui import InteractiveSession

session = InteractiveSession()

# Add slash commands for completion
# Note: autocomplete only triggers when you type /
session.add_commands(["help", "exit", "cost", "model", "plan", "queue"])

# Add symbols from your codebase
session.add_symbols(["MyClass", "my_function", "CONFIG"])

# Refresh file completions
session.refresh_files(root=Path("/path/to/project"))
```
```python
from praisonai.cli.features.interactive_tui import StatusDisplay

display = StatusDisplay(show_status_bar=True)

# Set status items
display.set_status("model", "gpt-4o")
display.set_status("tokens", "1,234")
display.set_status("cost", "$0.05")

# Print formatted output
display.print_welcome(version="1.0.0")
display.print_response("Here's the solution...", title="AI Response")
display.print_error("Something went wrong")
display.print_info("Processing...")
display.print_success("Done!")
```
```shell
# Ensure the file exists and is readable (drop the @ when checking in the shell)
ls -la yourfile.txt

# Use absolute paths if relative paths don't work
❯ what does @/full/path/to/file.txt say?

# Check for typos in the path
❯ what does @README.md say?   # Correct
❯ what does @readme.md say?   # Case-sensitive on Linux/Mac
```
```shell
# Run built-in smoke tests
praisonai test interactive --suite smoke

# Run tool-specific tests
praisonai test interactive --suite tools

# Run refactoring workflow tests
praisonai test interactive --suite refactor

# Run multi-agent tests
praisonai test interactive --suite multi_agent

# List available suites
praisonai test interactive --list

# Run custom CSV tests
praisonai test interactive --csv my_tests.csv

# Keep artifacts for debugging
praisonai test interactive --suite tools --keep-artifacts

# Generate CSV template
praisonai test interactive --generate-template
```
```
praisonai test interactive [OPTIONS]

Options:
  --csv, -c PATH        Path to CSV test file
  --suite, -s NAME      Built-in suite: smoke, tools, refactor, multi_agent, github-advanced
  --model, -m MODEL     LLM model for agent (default: gpt-4o-mini)
  --judge-model MODEL   LLM model for judge (default: gpt-4o-mini)
  --workspace, -w PATH  Workspace directory
  --artifacts-dir PATH  Directory for artifacts
  --fail-fast, -x       Stop on first failure
  --keep-artifacts      Keep test artifacts
  --no-judge            Skip judge evaluation
  --verbose, -v         Verbose output
  --list                List available suites
  --generate-template   Generate CSV template
```