CLI Compare

The --compare flag allows you to compare different CLI modes side-by-side, helping you understand the trade-offs between speed, accuracy, and capabilities.

Quick Start

# Compare basic vs tools mode
praisonai "What is artificial intelligence?" --compare "basic,tools"

# Compare multiple modes
praisonai "Explain quantum computing" --compare "basic,tools,planning,research"

# Save results to file
praisonai "Latest AI trends" --compare "basic,tools" --compare-output results.json

Available Modes

| Mode | Description | Use Case |
|------|-------------|----------|
| basic | Direct agent response | Simple questions, fast responses |
| tools | Agent with tool access | Tasks requiring external data |
| research | Deep research mode | Comprehensive research tasks |
| planning | Planning-enabled agent | Complex multi-step tasks |
| memory | Memory-enabled agent | Context-aware conversations |
| router | Smart model selection | Automatic model optimization |
| web_search | Native web search | Real-time information |
| web_fetch | URL content retrieval | Specific webpage analysis |
| query_rewrite | Query optimization | Improved search results |
| expand_prompt | Prompt expansion | Detailed prompts from brief input |
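The value passed to `--compare` is a comma-separated list of these mode names. A minimal sketch of how such a string could be split and validated (the library's own `parse_modes` may differ in details such as error handling; this is illustrative only):

```python
# Valid mode names, per the table above.
AVAILABLE_MODES = {
    "basic", "tools", "research", "planning", "memory", "router",
    "web_search", "web_fetch", "query_rewrite", "expand_prompt",
}

def parse_modes(spec: str) -> list[str]:
    """Split a comma-separated mode string and validate each name."""
    modes = [m.strip() for m in spec.split(",") if m.strip()]
    unknown = [m for m in modes if m not in AVAILABLE_MODES]
    if unknown:
        raise ValueError(f"Unknown modes: {', '.join(unknown)}")
    return modes
```

Whitespace around commas is tolerated here, so `parse_modes("basic, tools")` yields `["basic", "tools"]`.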

Usage Examples

Basic Comparison

# Compare basic and tools modes
praisonai "What is the capital of France?" --compare "basic,tools"
Output:
┌─────────────────────────────────────────────────────────────┐
│                Comparison: What is the capital...           │
├──────────┬────────────┬─────────────┬────────┬─────────────┤
│ Mode     │ Time (ms)  │ Model       │ Tools  │ Status      │
├──────────┼────────────┼─────────────┼────────┼─────────────┤
│ basic    │ 1234.5     │ gpt-4o-mini │ -      │ ✅          │
│ tools    │ 2567.8     │ gpt-4o-mini │ search │ ✅          │
├──────────┼────────────┼─────────────┼────────┼─────────────┤
│ Summary  │ Fastest: basic           │        │ Δ 1333.3ms  │
└──────────┴────────────┴─────────────┴────────┴─────────────┘

Research Comparison

# Compare research approaches
praisonai "What are the latest developments in AI?" --compare "basic,research,web_search"

With Model Override

# Compare using a specific model
praisonai "Explain machine learning" --compare "basic,planning" --model gpt-4o

Save Results

# Save comparison to JSON file
praisonai "Write a poem about AI" --compare "basic,planning" --compare-output comparison.json

Python API

from praisonai.cli.features.compare import (
    CompareHandler,
    get_mode_config,
    list_available_modes,
    parse_modes,
)

# List available modes
modes = list_available_modes()
print(f"Available modes: {modes}")

# Create handler
handler = CompareHandler(verbose=True)

# Run comparison
result = handler.compare(
    query="What is AI?",
    modes=["basic", "tools", "planning"],
    model="gpt-4o-mini"
)

# Print results
handler.print_result(result)

# Get summary
summary = result.get_summary()
print(f"Fastest: {summary['fastest']}")
print(f"Slowest: {summary['slowest']}")

# Save to file
from praisonai.cli.features.compare import save_compare_result
save_compare_result(result, "results.json")

Result Structure

ModeResult

Each mode comparison returns a ModeResult with:
| Field | Type | Description |
|-------|------|-------------|
| mode | str | Mode name |
| output | str | Agent output |
| execution_time_ms | float | Execution time in milliseconds |
| model_used | str | Model used for generation |
| tokens | dict | Token usage (input/output) |
| cost | float | Estimated cost |
| tools_used | list | Tools called during execution |
| error | str | Error message if failed |
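For reference, the fields above can be mirrored as a plain dataclass. This is an illustrative stand-in that matches the documented field names and types, not the library's actual class definition:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModeResult:
    mode: str                                       # mode name, e.g. "basic"
    output: str                                     # agent output text
    execution_time_ms: float                        # wall-clock time in milliseconds
    model_used: str                                 # model used for generation
    tokens: dict = field(default_factory=dict)      # token usage, e.g. {"input": ..., "output": ...}
    cost: float = 0.0                               # estimated cost
    tools_used: list = field(default_factory=list)  # tools called during execution
    error: Optional[str] = None                     # error message if the run failed
```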

CompareResult

The overall comparison returns a CompareResult with:
| Field | Type | Description |
|-------|------|-------------|
| query | str | Original query |
| comparisons | list | List of ModeResult objects |
| timestamp | str | ISO timestamp |
Methods:
  • get_summary() - Returns summary statistics
  • to_dict() - Convert to dictionary
  • to_json() - Convert to JSON string
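The fastest/slowest figures that `get_summary()` reports can be derived from the per-mode timings. A sketch of that computation over hypothetical result dicts (not the library's internal logic):

```python
def summarize(comparisons: list[dict]) -> dict:
    """Compute fastest/slowest modes and their time delta from per-mode results."""
    # Ignore runs that failed; they have no meaningful timing to rank.
    ok = [c for c in comparisons if c.get("error") is None]
    by_time = sorted(ok, key=lambda c: c["execution_time_ms"])
    return {
        "fastest": by_time[0]["mode"],
        "slowest": by_time[-1]["mode"],
        "delta_ms": round(
            by_time[-1]["execution_time_ms"] - by_time[0]["execution_time_ms"], 1
        ),
    }
```

Fed the timings from the sample output above (basic at 1234.5 ms, tools at 2567.8 ms), this yields a delta of 1333.3 ms, matching the Δ shown in the summary row.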

Best Practices

When to Use Compare

  1. Evaluating Approaches: Test different modes before production use
  2. Performance Tuning: Find the fastest mode for your use case
  3. Cost Optimization: Compare token usage across modes
  4. Quality Assessment: Compare output quality for different tasks
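For cost optimization in particular, the per-mode `cost` and `tokens` fields can be compared directly. A sketch of ranking modes by estimated cost (field names follow the ModeResult table; the sample figures are invented):

```python
def rank_by_cost(comparisons: list[dict]) -> list[tuple[str, float]]:
    """Sort mode results by estimated cost, cheapest first."""
    return sorted(
        ((c["mode"], c["cost"]) for c in comparisons),
        key=lambda pair: pair[1],
    )

results = [
    {"mode": "basic", "cost": 0.0002},
    {"mode": "planning", "cost": 0.0011},
    {"mode": "tools", "cost": 0.0006},
]
# Cheapest mode first in the ranking.
print(rank_by_cost(results)[0][0])
```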

Mode Selection Guide

| Task Type | Recommended Modes |
|-----------|-------------------|
| Simple Q&A | basic |
| Current events | web_search, research |
| Complex analysis | planning, research |
| Code generation | basic, tools |
| Multi-step tasks | planning |

CLI Reference

praisonai "<query>" --compare "<modes>" [options]

Options:
  --compare <modes>        Comma-separated list of modes to compare
  --compare-output <path>  Save results to JSON file
  --model <model>          Override model for all modes
  --verbose                Enable verbose output

Examples

Compare All Research Modes

praisonai "What are the benefits of renewable energy?" \
  --compare "basic,research,web_search,planning" \
  --compare-output energy_comparison.json

Quick Performance Check

praisonai "Hello world" --compare "basic,tools" --verbose

Production Evaluation

from praisonai.cli.features.compare import CompareHandler

handler = CompareHandler(verbose=False)

# Run multiple comparisons
queries = [
    "What is AI?",
    "Explain machine learning",
    "How does a neural network work?"
]

for query in queries:
    result = handler.compare(query, modes=["basic", "planning"])
    summary = result.get_summary()
    print(f"{query[:30]}... - Fastest: {summary['fastest']}")