Telemetry tracks agent metrics such as token usage, latency, and tool calls for monitoring and optimization.

Quick Start

1. Monitor Agent with Verbose Output

use praisonai::Agent;

// Enable verbose output for telemetry
let agent = Agent::new()
    .name("Assistant")
    .instructions("You are a helpful assistant")
    .verbose(true)  // Shows detailed output including timing
    .build()?;

let response = agent.chat("Hello").await?;
// Verbose mode shows token usage and timing information

2. Custom Metrics Tracking

use praisonai::Agent;
use std::time::Instant;

let agent = Agent::new()
    .name("Assistant")
    .build()?;

// Wrap with your own telemetry
let start = Instant::now();
let response = agent.chat("Hello").await?;
let latency = start.elapsed();

// Send to your metrics system (hypothetical helpers; substitute your backend's API)
metrics::record_latency("agent_chat", latency);
metrics::increment_counter("agent_requests");
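
The `metrics::record_latency` and `metrics::increment_counter` calls above stand in for whatever backend you use. As a minimal sketch, an in-memory recorder with that shape (all names here are illustrative, not part of the praisonai API) could look like:

```rust
use std::collections::HashMap;
use std::time::Duration;

// Minimal in-memory stand-in for the hypothetical `metrics` module above.
#[derive(Default)]
struct MetricsRecorder {
    latencies: HashMap<String, Vec<Duration>>,
    counters: HashMap<String, u64>,
}

impl MetricsRecorder {
    // Store one latency sample under the given metric name.
    fn record_latency(&mut self, name: &str, latency: Duration) {
        self.latencies.entry(name.to_string()).or_default().push(latency);
    }

    // Bump a named counter, creating it at zero on first use.
    fn increment_counter(&mut self, name: &str) {
        *self.counters.entry(name.to_string()).or_insert(0) += 1;
    }

    // Mean latency over all recorded samples, or None if nothing was recorded.
    fn average_latency(&self, name: &str) -> Option<Duration> {
        let samples = self.latencies.get(name)?;
        if samples.is_empty() {
            return None;
        }
        let total: Duration = samples.iter().sum();
        Some(total / samples.len() as u32)
    }
}
```

In production you would typically swap this for a metrics crate or an exporter to your observability stack; the point is only that the two calls in the example need nothing more than this to compile and run.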

Metrics Available

Metric              Description
total_tokens        Total tokens used
prompt_tokens       Input tokens
completion_tokens   Output tokens
latency             Response time
tool_calls          Number of tool calls
api_calls           Number of LLM requests
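
A natural way to carry the metrics above through your own code is a small report struct. This is a sketch of one possible shape (the struct and method names are assumptions, not praisonai types); note that `total_tokens` is simply the sum of prompt and completion tokens:

```rust
// Hypothetical per-request usage report covering the metrics listed above.
#[derive(Debug, Default)]
struct UsageMetrics {
    prompt_tokens: u64,
    completion_tokens: u64,
    tool_calls: u64,
    api_calls: u64,
    latency_ms: u64,
}

impl UsageMetrics {
    // total_tokens is derived: prompt tokens plus completion tokens.
    fn total_tokens(&self) -> u64 {
        self.prompt_tokens + self.completion_tokens
    }

    // Rough cost estimate from per-token prices (rates are illustrative only).
    fn cost_usd(&self, prompt_price: f64, completion_price: f64) -> f64 {
        self.prompt_tokens as f64 * prompt_price
            + self.completion_tokens as f64 * completion_price
    }
}
```

Pricing prompt and completion tokens separately matters because most providers charge different rates for input and output.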

Best Practices

Track costs and latency to optimize performance.
Alert on high token usage or slow responses.
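
The alerting advice above can be sketched as a simple threshold check. The limits here are assumptions you should tune for your own workload, not recommended values:

```rust
use std::time::Duration;

// Illustrative alert thresholds (assumed values; tune for your workload).
const MAX_TOKENS_PER_REQUEST: u64 = 4_000;
const MAX_LATENCY: Duration = Duration::from_secs(10);

// Returns human-readable alerts for one request's telemetry.
fn check_alerts(total_tokens: u64, latency: Duration) -> Vec<String> {
    let mut alerts = Vec::new();
    if total_tokens > MAX_TOKENS_PER_REQUEST {
        alerts.push(format!("high token usage: {total_tokens} tokens"));
    }
    if latency > MAX_LATENCY {
        alerts.push(format!("slow response: {:?}", latency));
    }
    alerts
}
```

In practice you would feed the returned alerts into your paging or logging system rather than inspect them inline.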