PraisonAI provides CLI commands for profiling agent performance without modifying code.

Quick Start

# Run with profiling enabled
PRAISONAI_PROFILE=1 praisonai "Your task here"

# Run with profiling and export report
praisonai profile run "Analyze this data" --output report.html --format html

Commands

profile run

Run a task with profiling enabled:
praisonai profile run "Your task description" [OPTIONS]
Options:
Option      Short   Description
--output    -o      Output file path
--format    -f      Output format: console, json, html
Examples:
# Basic profiling with console output
praisonai profile run "Write a poem about AI"

# Save JSON report
praisonai profile run "Analyze sentiment" -o report.json -f json

# Save HTML report
praisonai profile run "Summarize this text" -o report.html -f html

profile report

Generate a report from existing profiling data:
praisonai profile report [OPTIONS]
Options:
Option      Short   Description
--output    -o      Output file path
--format    -f      Output format: console, json, html
Examples:
# Print to console
praisonai profile report

# Export as JSON
praisonai profile report -f json -o profile.json

# Export as HTML
praisonai profile report -f html -o profile.html

profile benchmark

Benchmark agent performance with multiple iterations:
praisonai profile benchmark "Task" [OPTIONS]
Options:
Option         Short   Default   Description
--iterations   -n      5         Number of benchmark iterations
--warmup       -w      1         Number of warmup runs
--output       -o      -         Output file for results (JSON)
Examples:
# Basic benchmark
praisonai profile benchmark "Simple math: 2+2"

# 10 iterations with 2 warmup runs
praisonai profile benchmark "Translate to French" -n 10 -w 2

# Save results
praisonai profile benchmark "Generate code" -n 5 -o benchmark.json
Output:
Warmup 1/1...
Iteration 1/5...
Iteration 2/5...
...

============================================================
BENCHMARK RESULTS
============================================================
Iterations: 5
Successful: 5
Failed: 0

Timing:
  Mean: 1523.45ms
  Min: 1234.12ms
  Max: 1892.34ms
  P50: 1456.78ms
  P95: 1834.56ms

profile flamegraph

Export a flamegraph visualization:
praisonai profile flamegraph [OPTIONS]
Options:
Option      Short   Default       Description
--output    -o      profile.svg   Output SVG file
Example:
# Generate flamegraph
praisonai profile flamegraph -o my_profile.svg

profile summary

Print a quick profiling summary:
praisonai profile summary
Output:
============================================================
PROFILING SUMMARY
============================================================

Total Time: 2345.67ms
Operations: 42
Imports: 15
Flow Steps: 8

Statistics:
  P50 (Median): 45.23ms
  P95: 234.56ms
  P99: 456.78ms
  Mean: 55.85ms
  Std Dev: 78.34ms

Slowest Operations:
  llm_call: 1234.56ms
  agent_init: 456.78ms
  tool_execution: 234.56ms
  data_processing: 123.45ms
  response_parsing: 45.67ms
============================================================

Environment Variable

Enable profiling globally without using the profile subcommands:
# Enable profiling
export PRAISONAI_PROFILE=1

# Run any praisonai command - profiling is active
praisonai "Your task"

# Disable profiling
unset PRAISONAI_PROFILE
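
Since profile report works on existing profiling data, you can pair it with the environment variable and inspect a one-off run afterwards (assuming the data from that run is persisted where the report command can find it):
# Profile a run via the environment variable
PRAISONAI_PROFILE=1 praisonai "Summarize the quarterly results"

# Then inspect the collected data
praisonai profile report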

Integration with agents.yaml

Profile agents defined in YAML:
# Profile agents.yaml execution
PRAISONAI_PROFILE=1 praisonai agents.yaml

# Or with explicit profiling
praisonai profile run --config agents.yaml "Execute the workflow"

Advanced Usage

Combine with py-spy

For production-grade flamegraphs:
# Install py-spy
pip install py-spy

# Record with py-spy (requires sudo on some systems)
py-spy record -o profile.svg -- python -m praisonai "Your task"

# Or for a running process
py-spy record -o profile.svg --pid <PID>

Continuous Profiling

Profile multiple runs and aggregate:
#!/bin/bash
for i in {1..10}; do
    praisonai profile run "Test task $i" -o "profile_$i.json" -f json
done

# Aggregate results with jq
jq -s '.' profile_*.json > all_profiles.json
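
If each report uses the JSON schema shown under Output Formats below, you can also compute aggregate statistics across runs directly with jq, for example the average total time:
# Average total_time_ms across all saved reports
jq -s '[.[].summary.total_time_ms] | add / length' profile_*.json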

CI/CD Integration

Add profiling to your CI pipeline:
# .github/workflows/benchmark.yml
name: Performance Benchmark

on:
  push:
    branches: [main]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      
      - name: Install dependencies
        run: pip install praisonai
      
      - name: Run benchmark
        run: |
          praisonai profile benchmark "Standard test task" \
            -n 5 -w 1 -o benchmark.json
      
      - name: Upload results
        uses: actions/upload-artifact@v3
        with:
          name: benchmark-results
          path: benchmark.json

Output Formats

Console Output

Human-readable format printed to terminal:
============================================================
PraisonAI Profiling Report
============================================================

Total Time: 2345.67ms
Import Time: 234.56ms
Timing Records: 42
Import Records: 15
Flow Steps: 8
Files Accessed: 12

By Category:
  function: 456.78ms
  api: 1234.56ms
  block: 654.33ms

Slowest Operations:
  llm_completion: 1234.56ms
  agent_init: 456.78ms
  ...
============================================================

JSON Output

Machine-readable format for processing:
{
  "summary": {
    "total_time_ms": 2345.67,
    "import_time_ms": 234.56,
    "timing_count": 42,
    "import_count": 15,
    "flow_steps": 8,
    "by_category": {
      "function": 456.78,
      "api": 1234.56
    }
  },
  "statistics": {
    "p50": 45.23,
    "p95": 234.56,
    "p99": 456.78,
    "mean": 55.85,
    "std_dev": 78.34
  },
  "timings": [...],
  "api_calls": [...],
  "streaming": [...],
  "memory": [...]
}
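
This format is convenient to post-process with jq. For example, to list the per-category totals from a saved report, largest first:
# Per-category timing totals, sorted by time spent
jq '.summary.by_category | to_entries | sort_by(-.value)' report.json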

HTML Output

Interactive report with styling:
  • Summary metrics dashboard
  • Statistical analysis table
  • Slowest operations list
  • API calls breakdown
  • Streaming metrics (time to first token, total time)

Best Practices

Always include warmup runs to account for JIT compilation and caching:
praisonai profile benchmark "Task" -n 10 -w 2
Run benchmarks with similar data sizes and network conditions as production.
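One simple way to keep runs comparable is to pin the prompt to a fixed input file (the file name below is just an example):
# Keep the task text fixed between benchmark runs
TASK="$(cat fixtures/standard_task.txt)"
praisonai profile benchmark "$TASK" -n 10 -w 2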
Mean latency can hide outliers. Focus on percentiles:
praisonai profile report -f json | jq '.statistics.p95'
Save baseline benchmarks and compare after code changes:
# Before
praisonai profile benchmark "Task" -o baseline.json

# After changes
praisonai profile benchmark "Task" -o after.json

# Compare
jq -s '.[0].mean_ms, .[1].mean_ms' baseline.json after.json
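
To turn the comparison into a pass/fail check (for example in CI), a small script can test the two means against a tolerance. This assumes the benchmark JSON exposes a top-level mean_ms field, as in the jq comparison above:
# Fail (non-zero exit) if mean latency regressed by more than 20%
baseline=$(jq '.mean_ms' baseline.json)
after=$(jq '.mean_ms' after.json)
awk -v b="$baseline" -v a="$after" 'BEGIN { exit (a > b * 1.2) ? 1 : 0 }'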

Troubleshooting

If no profiling data appears, make sure profiling is enabled:
export PRAISONAI_PROFILE=1
# or use profile run command
If benchmark results vary widely between runs, increase the number of iterations and warmup runs:
praisonai profile benchmark "Task" -n 20 -w 5
If API calls are missing from the report, note that they are only tracked when your code uses profiled functions or context managers.