> **Documentation Index**: Fetch the complete documentation index at https://docs.praison.ai/llms.txt and use it to discover all available pages before exploring further.
# CLI Compare

The `--compare` flag lets you run the same query through different CLI modes side-by-side, helping you understand the trade-offs between speed, accuracy, and capabilities.
## Quick Start

```bash
# Compare basic vs tools mode
praisonai "What is artificial intelligence?" --compare "basic,tools"

# Compare multiple modes
praisonai "Explain quantum computing" --compare "basic,tools,planning,research"

# Save results to file
praisonai "Latest AI trends" --compare "basic,tools" --compare-output results.json
```
## Available Modes

| Mode | Description | Use Case |
|---|---|---|
| `basic` | Direct agent response | Simple questions, fast responses |
| `tools` | Agent with tool access | Tasks requiring external data |
| `research` | Deep research mode | Comprehensive research tasks |
| `planning` | Planning-enabled agent | Complex multi-step tasks |
| `memory` | Memory-enabled agent | Context-aware conversations |
| `router` | Smart model selection | Automatic model optimization |
| `web_search` | Native web search | Real-time information |
| `web_fetch` | URL content retrieval | Specific webpage analysis |
| `query_rewrite` | Query optimization | Improved search results |
| `expand_prompt` | Prompt expansion | Detailed prompts from brief input |
## Usage Examples

### Basic Comparison

```bash
# Compare basic and tools modes
praisonai "What is the capital of France?" --compare "basic,tools"
```
Output:

```text
┌─────────────────────────────────────────────────────────────┐
│ Comparison: What is the capital...                          │
├──────────┬────────────┬─────────────┬────────┬─────────────┤
│ Mode     │ Time (ms)  │ Model       │ Tools  │ Status      │
├──────────┼────────────┼─────────────┼────────┼─────────────┤
│ basic    │ 1234.5     │ gpt-4o-mini │ -      │ ✅          │
│ tools    │ 2567.8     │ gpt-4o-mini │ search │ ✅          │
├──────────┼────────────┼─────────────┼────────┼─────────────┤
│ Summary  │ Fastest: basic           │        │ Δ 1333.3ms  │
└──────────┴────────────┴─────────────┴────────┴─────────────┘
```
### Research Comparison

```bash
# Compare research approaches
praisonai "What are the latest developments in AI?" --compare "basic,research,web_search"
```
### With Model Override

```bash
# Compare using a specific model
praisonai "Explain machine learning" --compare "basic,planning" --model gpt-4o
```
### Save Results

```bash
# Save comparison to JSON file
praisonai "Write a poem about AI" --compare "basic,planning" --compare-output comparison.json
```
## Python API

```python
from praisonai.cli.features.compare import (
    CompareHandler,
    get_mode_config,
    list_available_modes,
    parse_modes,
)

# List available modes
modes = list_available_modes()
print(f"Available modes: {modes}")

# Create handler
handler = CompareHandler()

# Run comparison
result = handler.compare(
    query="What is AI?",
    modes=["basic", "tools", "planning"],
    model="gpt-4o-mini",
)

# Print results
handler.print_result(result)

# Get summary
summary = result.get_summary()
print(f"Fastest: {summary['fastest']}")
print(f"Slowest: {summary['slowest']}")

# Save to file
from praisonai.cli.features.compare import save_compare_result

save_compare_result(result, "results.json")
```
## Result Structure

### ModeResult

Each mode comparison returns a `ModeResult` with:

| Field | Type | Description |
|---|---|---|
| `mode` | `str` | Mode name |
| `output` | `str` | Agent output |
| `execution_time_ms` | `float` | Execution time in milliseconds |
| `model_used` | `str` | Model used for generation |
| `tokens` | `dict` | Token usage (input/output) |
| `cost` | `float` | Estimated cost |
| `tools_used` | `list` | Tools called during execution |
| `error` | `str` | Error message if failed |
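As a rough mental model of this structure, a `ModeResult` can be pictured as a dataclass with the fields above. This is an illustrative sketch only, not the library's actual class definition:

```python
from dataclasses import dataclass, field

# Illustrative sketch -- the real ModeResult lives in
# praisonai.cli.features.compare and may differ in detail.
@dataclass
class ModeResult:
    mode: str                 # mode name, e.g. "basic"
    output: str               # agent output text
    execution_time_ms: float  # wall-clock time in milliseconds
    model_used: str           # model used for generation
    tokens: dict = field(default_factory=dict)       # {"input": ..., "output": ...}
    cost: float = 0.0                                # estimated cost
    tools_used: list = field(default_factory=list)   # tools called during execution
    error: str = ""                                  # error message if failed

r = ModeResult(mode="basic", output="Paris",
               execution_time_ms=1234.5, model_used="gpt-4o-mini")
print(r.mode, r.execution_time_ms)
```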
### CompareResult

The overall comparison returns a `CompareResult` with:

| Field | Type | Description |
|---|---|---|
| `query` | `str` | Original query |
| `comparisons` | `list` | List of `ModeResult` objects |
| `timestamp` | `str` | ISO timestamp |

Methods:

- `get_summary()` - Returns summary statistics
- `to_dict()` - Convert to dictionary
- `to_json()` - Convert to JSON string
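To make the summary semantics concrete, here is a minimal sketch of how `fastest`/`slowest` could be derived from the per-mode timings. This is an assumption about the shape of the computation, not the library's actual `get_summary()` implementation, which may report additional fields:

```python
# Hypothetical sketch: derive summary stats from ModeResult-shaped dicts.
def get_summary(comparisons: list[dict]) -> dict:
    # Ignore modes that errored out when picking winners
    ok = [c for c in comparisons if not c.get("error")]
    fastest = min(ok, key=lambda c: c["execution_time_ms"])
    slowest = max(ok, key=lambda c: c["execution_time_ms"])
    return {
        "fastest": fastest["mode"],
        "slowest": slowest["mode"],
        "delta_ms": slowest["execution_time_ms"] - fastest["execution_time_ms"],
    }

summary = get_summary([
    {"mode": "basic", "execution_time_ms": 1234.5},
    {"mode": "tools", "execution_time_ms": 2567.8},
])
print(summary)  # fastest mode, slowest mode, and the timing gap between them
```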
## Best Practices

### When to Use Compare

- **Evaluating Approaches**: Test different modes before production use
- **Performance Tuning**: Find the fastest mode for your use case
- **Cost Optimization**: Compare token usage across modes
- **Quality Assessment**: Compare output quality for different tasks
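For cost optimization, the per-mode token counts can be compared directly. A minimal sketch, assuming `ModeResult`-shaped dicts with the `tokens` field described above (the helper itself is hypothetical, not part of the praisonai API):

```python
# Illustrative only: rank modes by total token usage, cheapest first.
def rank_by_tokens(comparisons: list[dict]) -> list[tuple[str, int]]:
    totals = [
        (c["mode"], c["tokens"].get("input", 0) + c["tokens"].get("output", 0))
        for c in comparisons
    ]
    return sorted(totals, key=lambda t: t[1])

ranked = rank_by_tokens([
    {"mode": "tools", "tokens": {"input": 900, "output": 400}},
    {"mode": "basic", "tokens": {"input": 120, "output": 80}},
])
print(ranked)  # cheapest mode first
```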
### Mode Selection Guide

| Task Type | Recommended Modes |
|---|---|
| Simple Q&A | `basic` |
| Current events | `web_search`, `research` |
| Complex analysis | `planning`, `research` |
| Code generation | `basic`, `tools` |
| Multi-step tasks | `planning` |
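If you drive comparisons programmatically, the guide above can be encoded as a simple lookup. This helper is purely illustrative and not part of the praisonai API:

```python
# Hypothetical helper mirroring the Mode Selection Guide table.
RECOMMENDED_MODES = {
    "simple_qa": ["basic"],
    "current_events": ["web_search", "research"],
    "complex_analysis": ["planning", "research"],
    "code_generation": ["basic", "tools"],
    "multi_step": ["planning"],
}

def modes_for(task_type: str) -> list[str]:
    # Fall back to the fastest mode when the task type is unknown
    return RECOMMENDED_MODES.get(task_type, ["basic"])

print(modes_for("current_events"))
```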
## CLI Reference

```bash
praisonai "<query>" --compare "<modes>" [options]
```

Options:

```text
--compare <modes>         Comma-separated list of modes to compare
--compare-output <path>   Save results to JSON file
--model <model>           Override model for all modes
--verbose                 Enable verbose output
```
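The exported `parse_modes` helper presumably turns the comma-separated `--compare` value into a validated mode list. Here is an illustrative re-implementation under that assumption; the real `praisonai.cli.features.compare.parse_modes` may behave differently:

```python
# Illustrative sketch of a parse_modes-style helper, not the library's code.
VALID_MODES = {
    "basic", "tools", "research", "planning", "memory", "router",
    "web_search", "web_fetch", "query_rewrite", "expand_prompt",
}

def parse_modes(value: str) -> list[str]:
    # Split on commas, trim whitespace, drop empty entries
    modes = [m.strip() for m in value.split(",") if m.strip()]
    unknown = [m for m in modes if m not in VALID_MODES]
    if unknown:
        raise ValueError(f"Unknown modes: {unknown}")
    return modes

print(parse_modes("basic, tools,planning"))
```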
## Examples

### Compare All Research Modes

```bash
praisonai "What are the benefits of renewable energy?" \
  --compare "basic,research,web_search,planning" \
  --compare-output energy_comparison.json
```

### Verbose Output

```bash
praisonai "Hello world" --compare "basic,tools" --verbose
```
### Production Evaluation

```python
from praisonai.cli.features.compare import CompareHandler

handler = CompareHandler(output="silent")

# Run multiple comparisons
queries = [
    "What is AI?",
    "Explain machine learning",
    "How does a neural network work?",
]

for query in queries:
    result = handler.compare(query, modes=["basic", "planning"])
    summary = result.get_summary()
    print(f"{query[:30]}... - Fastest: {summary['fastest']}")