The --telemetry flag enables detailed usage monitoring and analytics tracking.
Quick Start
praisonai "Your task" --telemetry
Usage
Basic Telemetry
praisonai "Analyze market trends" --telemetry
Expected Output:
```
📡 Telemetry enabled
╭─ Agent Info ─────────────────────────────────────────────────────────────────╮
│ 👤 Agent: DirectAgent │
│ Role: Assistant │
╰──────────────────────────────────────────────────────────────────────────────╯
╭────────────────────────────────── Response ──────────────────────────────────╮
│ Based on current market analysis... │
╰──────────────────────────────────────────────────────────────────────────────╯
📊 Telemetry Data:
┌─────────────────────────┬────────────────────────────┐
│ Metric │ Value │
├─────────────────────────┼────────────────────────────┤
│ Session ID │ sess_abc123def456 │
│ Start Time │ 2024-12-16T15:30:00Z │
│ End Time │ 2024-12-16T15:30:05Z │
│ Duration │ 5.2s │
│ Agent Type │ DirectAgent │
│ Model │ gpt-4o-mini │
│ Status │ success │
└─────────────────────────┴────────────────────────────┘
```
Combine with Metrics
praisonai "Complex analysis" --telemetry --metrics
Expected Output:
```
📡 Telemetry enabled
📊 Metrics enabled
╭────────────────────────────────── Response ──────────────────────────────────╮
│ [Agent response here] │
╰──────────────────────────────────────────────────────────────────────────────╯
📊 Combined Analytics:
┌─────────────────────────┬────────────────────────────┐
│ Metric │ Value │
├─────────────────────────┼────────────────────────────┤
│ Session ID │ sess_abc123def456 │
│ Duration │ 8.3s │
│ Total Tokens │ 1,245 │
│ Estimated Cost │ $0.0075 │
│ Model │ gpt-4o-mini │
│ Status │ success │
│ Tool Calls │ 2 │
│ Memory Operations │ 0 │
└─────────────────────────┴────────────────────────────┘
```
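The combined view adds token usage and an estimated cost on top of the session data. As a rough illustration of how a cost estimate can be derived from token counts, here is a sketch; the per-million-token rates and token split are placeholder assumptions, not PraisonAI's actual pricing or accounting:

```python
# Rough sketch of a token-based cost estimate.
# The rates below are placeholder assumptions, not PraisonAI's pricing table.
PLACEHOLDER_RATES = {"gpt-4o-mini": (0.15, 0.60)}  # (input, output) USD per 1M tokens

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return an approximate USD cost for a single request."""
    input_rate, output_rate = PLACEHOLDER_RATES[model]
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

print(f"${estimate_cost('gpt-4o-mini', 900, 345):.4f}")
```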
Telemetry Data Collected
| Category | Data Points |
|----------|-------------|
| Session | Session ID, timestamps, duration |
| Agent | Agent type, model used, configuration |
| Execution | Status, errors, retries |
| Performance | Response time, token counts |
| Tools | Tool calls, success/failure rates |
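Taken together, these data points describe one per-session record. A minimal sketch of what such a record might contain follows; the class and field names are illustrative, not PraisonAI's internal schema:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative shape of a per-session telemetry record; field names are
# hypothetical and do not reflect PraisonAI's internal schema.
@dataclass
class TelemetrySession:
    session_id: str            # e.g. "sess_abc123def456"
    start_time: str            # ISO 8601 timestamp, e.g. "2024-12-16T15:30:00Z"
    end_time: str
    duration_s: float          # wall-clock execution time in seconds
    agent_type: str            # e.g. "DirectAgent"
    model: str                 # e.g. "gpt-4o-mini"
    status: str                # "success", "partial_success", ...
    retries: int = 0
    error_type: Optional[str] = None   # e.g. "RateLimitError"
    tool_calls: int = 0
    memory_operations: int = 0
```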
Use Cases
Performance Monitoring
Track execution times across different tasks:
```bash
# Monitor a complex workflow
praisonai "Multi-step analysis" --telemetry --planning
```
Debugging
Identify issues in agent execution:
```bash
# Verbose telemetry for debugging
praisonai "Failing task" --telemetry -v
```
Expected Output (with error):
```
📡 Telemetry enabled
⚠️ Execution Warning:
┌─────────────────────────┬────────────────────────────┐
│ Metric │ Value │
├─────────────────────────┼────────────────────────────┤
│ Session ID │ sess_xyz789 │
│ Status │ partial_success │
│ Retries │ 2 │
│ Error Type │ RateLimitError │
│ Recovery │ Automatic retry succeeded │
└─────────────────────────┴────────────────────────────┘
```
Usage Analytics
Track patterns over time:
```bash
# Run multiple tasks with telemetry
praisonai "Task 1" --telemetry
praisonai "Task 2" --telemetry
praisonai "Task 3" --telemetry
```
Privacy & Data
Telemetry data is used to improve PraisonAI and is handled according to our privacy policy. No prompt content or sensitive data is collected.
What’s Collected
- ✅ Execution metrics (duration, token counts)
- ✅ Error types and frequencies
- ✅ Feature usage patterns
- ✅ Model selection statistics
What’s NOT Collected
- ❌ Prompt content
- ❌ Response content
- ❌ API keys or credentials
- ❌ Personal information
Disable Telemetry
To disable telemetry globally:
```bash
export PRAISON_TELEMETRY=false
```
Or in Python:
```python
import os
os.environ["PRAISON_TELEMETRY"] = "false"
```
Best Practices
Development
Enable telemetry during development to catch performance issues early
Production
Use telemetry to monitor production deployments and track SLAs
Debugging
Combine with -v verbose flag for detailed debugging information
Cost Tracking
Pair with --metrics for complete cost and performance visibility
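For example, the flags covered on this page can be combined in a single run:

```bash
praisonai "Complex analysis" --telemetry --metrics -v
```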