# Slash Commands
PraisonAI CLI provides interactive slash commands for quick actions during your AI coding sessions. Inspired by Gemini CLI, Codex CLI, and Claude Code, these commands give you powerful control without leaving the terminal.
## Overview
Slash commands start with / and provide quick access to common operations in interactive mode.
```
❯ /help
❯ /tools
❯ /clear
❯ /exit
```
## Available Commands (Interactive Mode)
When using praisonai chat, these commands are available:
| Command | Description |
|---|---|
| `/help` | Show available commands and features |
| `/exit` | Exit interactive mode |
| `/quit` | Exit interactive mode (alias) |
| `/clear` | Clear the screen |
| `/tools` | List available tools |
| `/profile` | Toggle profiling (show timing breakdown) |
| `/model [name]` | Show or change current model |
| `/stats` | Show session statistics (tokens, cost) |
| `/compact` | Compress conversation history |
| `/undo` | Undo last response |
| `/queue` | Show queued messages |
| `/queue clear` | Clear the message queue |
| `/queue remove N` | Remove message at index N |
## Built-in Tools

Interactive mode includes 5 built-in tools that the AI can use:
| Tool | Description | Risk Level |
|---|---|---|
| `read_file` | Read contents of a file | Low |
| `write_file` | Write content to a file | High (requires approval) |
| `list_files` | List files in a directory | Low |
| `execute_command` | Run shell commands | Critical (requires approval) |
| `internet_search` | Search the web | Low |
```
❯ /tools

Available tools: 5
  • read_file
  • write_file
  • list_files
  • execute_command
  • internet_search
```
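The risk levels above imply an approval gate before high-risk tools run. A minimal sketch of that pattern (the `RISK` table and `needs_approval` helper are illustrative, not part of PraisonAI's API):

```python
# Hypothetical approval gate keyed on the risk levels in the table above.
RISK = {
    "read_file": "low",
    "write_file": "high",
    "list_files": "low",
    "execute_command": "critical",
    "internet_search": "low",
}

def needs_approval(tool_name: str) -> bool:
    """High and critical tools require explicit user approval.
    Unknown tools are treated as critical by default."""
    return RISK.get(tool_name, "critical") in ("high", "critical")
```

Defaulting unknown tools to "critical" fails safe: anything not explicitly classified as low-risk prompts the user.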
## Usage Examples

### Starting Interactive Mode
```bash
# Start interactive mode
praisonai chat

# Or use the short flag
praisonai -i
```
### Using Help
```
❯ /help

Commands:
  /help          - Show this help
  /exit          - Exit interactive mode
  /clear         - Clear screen
  /tools         - List available tools
  /profile       - Toggle profiling (show timing breakdown)
  /model [name]  - Show or change current model
  /stats         - Show session statistics (tokens, cost)
  /compact       - Compress conversation history
  /undo          - Undo last response
  /queue         - Show queued messages
  /queue clear   - Clear message queue

@ Mentions:
  @file.txt      - Include file content in prompt
  @src/          - Include directory listing

Features:
  • File operations (read, write, list)
  • Shell command execution
  • Web search
  • Context compression for long sessions
  • Queue messages while agent is processing
```
### Listing Tools

```
❯ /tools

Available tools: 5
  • read_file
  • write_file
  • list_files
  • execute_command
  • internet_search
```
### Using Tools from Natural Language

```
# List files
❯ list files in current folder
Here are the files: README.md, main.py, config.yaml

# Read a file
❯ read the contents of README.md
The file contains: # My Project...

# Search the web
❯ search the web for latest AI news
Here are the results from web search...
```
## Python API
You can also use slash commands programmatically:
```python
from praisonai.cli.features import SlashCommandHandler

# Create handler
handler = SlashCommandHandler()

# Check if input is a command
if handler.is_command("/help"):
    result = handler.execute("/help")
    print(result)

# Get completions for auto-complete
completions = handler.get_completions("/he")
# Returns: ["/help"]
```
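Under the hood, `is_command` and `get_completions` amount to name lookup and prefix matching over the registered command names. A self-contained sketch of that behavior (the `MiniHandler` class is a stand-in, not PraisonAI's implementation):

```python
# Illustrative stand-in for the is_command / get_completions pattern.
class MiniHandler:
    def __init__(self, commands: list[str]):
        self.commands = commands  # e.g. ["/help", "/exit", "/clear", ...]

    def is_command(self, text: str) -> bool:
        """True if the first whitespace-separated token is a known command."""
        return text.split(" ", 1)[0] in self.commands

    def get_completions(self, prefix: str) -> list[str]:
        """All registered commands that start with the typed prefix."""
        return [c for c in self.commands if c.startswith(prefix)]
```

Splitting on the first space lets `is_command` recognize commands with arguments, such as `/model gpt-4o-mini`.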
## Custom Commands
Register your own slash commands:
```python
from praisonai.cli.features.slash_commands import (
    SlashCommand, SlashCommandHandler, CommandKind
)

# Define custom command
def my_command(args, context):
    return {"type": "custom", "message": f"Args: {args}"}

custom_cmd = SlashCommand(
    name="mycommand",
    description="My custom command",
    handler=my_command,
    kind=CommandKind.ACTION,
    aliases=["mc"]
)

# Register it
handler = SlashCommandHandler()
handler.register(custom_cmd)

# Use it
result = handler.execute("/mycommand arg1 arg2")
```
## Command Context
Provide context for commands that need session data:
```python
import time

from praisonai.cli.features import SlashCommandHandler
from praisonai.cli.features.slash_commands import CommandContext

handler = SlashCommandHandler()

# Create context with session data
context = CommandContext(
    total_tokens=5000,
    total_cost=0.015,
    prompt_count=10,
    current_model="gpt-4o",
    session_start_time=time.time() - 300
)

# Set context on handler
handler.set_context(context)

# Now /cost will show real data
result = handler.execute("/cost")
```
## Integration with Interactive Mode
Slash commands are automatically available in interactive mode:
```
praisonai chat

>>> Hello, help me with my code
[AI responds...]

>>> /cost
Session: abc12345
Tokens: 1,500
Cost: $0.0045

>>> /model gpt-4o-mini
Model changed to: gpt-4o-mini

>>> /exit
Goodbye!
```
## Command Reference
### /help

Show help information.

```
/help            # Show all commands
/help <command>  # Show help for a specific command
```
### /cost

Display session cost and token statistics.

```
/cost  # Show full statistics
```
### /model

Manage the AI model.

```
/model         # Show current model
/model <name>  # Change to specified model
```
### /plan

Create an execution plan for a task.

```
/plan         # Show current plan
/plan <task>  # Create plan for task
```
### /diff

Show git diff of current changes.

```
/diff           # Show all changes
/diff --staged  # Show staged changes only
```
### /commit

Commit changes with an AI-generated message.

```
/commit            # Auto-generate commit message
/commit "message"  # Use custom message
```
### /profile

Toggle profiling to see a timing breakdown.

```
/profile  # Toggle profiling on/off
```

When enabled, shows timing after each response:

```
─── Profiling ───
Import:      0.1ms
Agent setup: 0.3ms
LLM call:    1,234.5ms
Display:     15.2ms
Total:       1,250.1ms
```
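The per-phase timings shown above are presumably simple wall-clock deltas. A sketch of that measurement using `time.perf_counter` (the `timed` helper is hypothetical, not the CLI's internals):

```python
import time

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed_ms)."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Example: time a trivial phase.
total, ms = timed(sum, [1, 2, 3])
```

`perf_counter` is the standard choice for interval timing because it is monotonic and has higher resolution than `time.time()`.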
### /stats

Show session statistics.

```
/stats  # Show token usage and cost
```

Output:

```
Session Statistics
  Model:          gpt-4o-mini
  Requests:       5
  Input tokens:   1,234
  Output tokens:  2,567
  Total tokens:   3,801
  Estimated cost: $0.0023
  History turns:  10
```
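The estimated cost line follows from per-token pricing, with input and output tokens billed at different rates. A sketch of the arithmetic (the rates below are placeholders, not actual model prices):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_rate: float, out_rate: float) -> float:
    """Dollar cost given per-million-token rates for input and output."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Placeholder rates: $0.15 / $0.60 per million input / output tokens.
cost = estimate_cost(1_234, 2_567, 0.15, 0.60)
```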
### /compact

Compress conversation history to save tokens.

```
/compact  # Summarize older history
```
This command:
- Keeps the last 2 conversation turns intact
- Summarizes older turns using the LLM
- Reduces token usage for long sessions
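The steps above can be sketched as follows; the `summarize` callable stands in for the LLM summarization call, and the turn representation here is illustrative:

```python
def compact_history(turns, summarize, keep_last=2):
    """Replace older turns with one summary turn; keep recent turns intact."""
    if len(turns) <= keep_last:
        return turns  # nothing old enough to compress
    older, recent = turns[:-keep_last], turns[-keep_last:]
    return [("summary", summarize(older))] + recent
```

The history shrinks to at most `keep_last + 1` entries, so token usage stays bounded no matter how long the session runs.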
### /undo

Undo the last conversation turn.

```
/undo  # Remove last user prompt and AI response
```
### /queue

Manage the message queue. Queue messages while the AI agent is processing and they’ll be executed in order.

```
/queue           # Show all queued messages
/queue clear     # Clear the entire queue
/queue remove N  # Remove message at index N
```
Output when messages are queued:

```
❯ /queue
⏳ Processing...

Queued Messages (2):
  0. ↳ Add docstrings to the function
  1. ↳ Create unit tests

Use /queue clear to clear, /queue remove N to remove
```
Type new messages while the agent is processing. They’ll be queued and executed automatically in FIFO order.
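The queue behavior described above (FIFO execution, indexed removal, clearing) can be modeled with a plain list. An illustrative sketch, not the actual implementation:

```python
class MessageQueue:
    """FIFO message queue supporting the /queue subcommands."""

    def __init__(self):
        self._items = []

    def add(self, msg: str) -> None:
        """Queue a message typed while the agent is busy."""
        self._items.append(msg)

    def pop_next(self):
        """Dequeue in FIFO order once the agent is free; None if empty."""
        return self._items.pop(0) if self._items else None

    def remove(self, index: int) -> None:  # /queue remove N
        del self._items[index]

    def clear(self) -> None:               # /queue clear
        self._items.clear()

    def __len__(self) -> int:
        return len(self._items)
```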
## Best Practices

- **Use aliases** - `/h` is faster than `/help`
- **Check costs regularly** - Use `/cost` to monitor spending
- **Plan before executing** - Use `/plan` for complex tasks
- **Commit frequently** - Use `/commit` after each logical change