
Attribution & Tracing CLI

Track agent execution with attribution headers and tracing via the PraisonAI CLI (`praisonai-ts`).

Commands

View Trace Information

# Run with tracing enabled
praisonai-ts llm trace "Hello world" --model openai/gpt-4o-mini

# Show attribution headers in output
praisonai-ts llm trace "Hello" --verbose

Set Custom Attribution

# Custom agent ID
praisonai-ts chat "Hello" --agent-id my-custom-agent

# Custom session ID
praisonai-ts chat "Hello" --session my-session-123

# Custom run ID
praisonai-ts chat "Hello" --run-id run-abc-123

# All custom attribution
praisonai-ts chat "Hello" \
  --agent-id my-agent \
  --session my-session \
  --run-id my-run \
  --trace-id my-trace

Multi-Agent Tracing

# Run workflow with tracing
praisonai-ts workflow run ./workflow.yaml --trace

# Show agent attribution for each step
praisonai-ts workflow run ./workflow.yaml --trace --verbose

Options

Option       Description
--agent-id   Custom agent identifier
--session    Session ID for conversation continuity
--run-id     Custom run identifier
--trace-id   Trace ID for distributed tracing
--trace      Enable tracing output
--verbose    Show detailed attribution info
--json       Output attribution as JSON

Examples

Basic Tracing

$ praisonai-ts llm trace "What is 2+2?" --model openai/gpt-4o-mini

 Response: 2 + 2 = 4

Attribution:
  Agent ID: agent_abc123
  Run ID: run_xyz789
  Session ID: session_def456
  Backend: ai-sdk
  Duration: 234ms

JSON Attribution Output

$ praisonai-ts llm trace "Hello" --json
{
  "success": true,
  "data": {
    "response": "Hello! How can I help you today?",
    "attribution": {
      "agentId": "agent_abc123",
      "runId": "run_xyz789",
      "sessionId": "session_def456",
      "traceId": null,
      "backend": "ai-sdk",
      "provider": "openai",
      "model": "gpt-4o-mini"
    }
  },
  "meta": {
    "duration_ms": 234,
    "tokens": {
      "prompt": 12,
      "completion": 8,
      "total": 20
    }
  }
}
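The JSON shape above can be parsed offline with jq. A minimal sketch, with a sample payload embedded so it stands alone; the field paths (`.data.attribution.agentId`, `.meta.tokens.total`) are taken from the example output above:

```shell
# Sample trace output, matching the shape shown above.
trace_json='{
  "success": true,
  "data": {
    "response": "Hello! How can I help you today?",
    "attribution": { "agentId": "agent_abc123", "runId": "run_xyz789" }
  },
  "meta": { "duration_ms": 234, "tokens": { "total": 20 } }
}'

# Pull out the fields most useful for log correlation.
agent_id=$(printf '%s' "$trace_json" | jq -r '.data.attribution.agentId')
total_tokens=$(printf '%s' "$trace_json" | jq -r '.meta.tokens.total')
echo "agent=$agent_id tokens=$total_tokens"
```

In practice, substitute `praisonai-ts llm trace "Hello" --json` for the embedded sample.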

Workflow Tracing

$ praisonai-ts workflow run ./pipeline.yaml --trace --verbose

=== Workflow: data-pipeline ===

Step 1: extract
  Agent ID: extractor
  Run ID: run_step1_abc
  Status: Complete (1.2s)

Step 2: transform
  Agent ID: transformer
  Run ID: run_step2_def
  Status: Complete (0.8s)

Step 3: load
  Agent ID: loader
  Run ID: run_step3_ghi
  Status: Complete (0.5s)

=== Workflow Complete ===
Total Duration: 2.5s

Session Continuity

# First message - creates session
$ praisonai-ts chat "My name is John" --session user-123
Session: user-123
Response: Nice to meet you, John!

# Second message - continues session
$ praisonai-ts chat "What is my name?" --session user-123
Session: user-123
Response: Your name is John.

Custom Attribution for Debugging

# Tag requests for debugging
$ praisonai-ts chat "Process this data" \
  --agent-id data-processor \
  --run-id debug-run-001 \
  --trace-id incident-12345 \
  --verbose

Attribution Headers:
  X-Agent-Id: data-processor
  X-Run-Id: debug-run-001
  X-Trace-Id: incident-12345
  X-Session-Id: session_auto_xyz

Response: ...

Parallel Agent Tracing

$ praisonai-ts workflow run ./parallel-agents.yaml --trace

=== Parallel Execution ===

[agent-1] Starting... (Run: run_p1_abc)
[agent-2] Starting... (Run: run_p2_def)
[agent-3] Starting... (Run: run_p3_ghi)

[agent-2] Complete (0.9s)
[agent-1] Complete (1.1s)
[agent-3] Complete (1.3s)

=== All Agents Complete ===

Environment Variables

# Enable tracing globally
export PRAISONAI_TRACE=true

# Set default agent ID prefix
export PRAISONAI_AGENT_PREFIX=myapp

# Enable verbose attribution logging
export PRAISONAI_VERBOSE=true

Scripting Examples

Capture Attribution

#!/bin/bash
# capture-trace.sh

result=$(praisonai-ts llm trace "$1" --json)

agent_id=$(echo "$result" | jq -r '.data.attribution.agentId')
run_id=$(echo "$result" | jq -r '.data.attribution.runId')
duration=$(echo "$result" | jq -r '.meta.duration_ms')

echo "Agent: $agent_id"
echo "Run: $run_id"
echo "Duration: ${duration}ms"

Log Attribution to File

#!/bin/bash
# log-traces.sh

LOG_FILE="traces.log"

praisonai-ts chat "$1" --json | jq '{
  timestamp: now | todate,
  agent: .data.attribution.agentId,
  run: .data.attribution.runId,
  duration: .meta.duration_ms
}' >> "$LOG_FILE"

Monitor Multi-Agent Workflow

#!/bin/bash
# monitor-workflow.sh

praisonai-ts workflow run ./workflow.yaml --trace --json | while read -r line; do
  agent=$(echo "$line" | jq -r '.attribution.agentId // empty')
  status=$(echo "$line" | jq -r '.status // empty')
  
  if [ -n "$agent" ]; then
    echo "[$(date +%H:%M:%S)] $agent: $status"
  fi
done

Integration with Observability Tools

Export to OpenTelemetry

# Set OTLP endpoint
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

# Run with telemetry
praisonai-ts chat "Hello" --telemetry

Export to Logging Service

# Pipe JSON output to logging service
praisonai-ts llm trace "Hello" --json | \
  curl -X POST -H "Content-Type: application/json" \
  -d @- https://logs.example.com/ingest
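A record can also be enriched before shipping, for example by stamping an ingest timestamp onto the JSON. A minimal sketch: the jq step is real and runs as-is; the curl call is commented out because it depends on a live ingest endpoint (`--fail` and `--retry` are standard curl flags), and the sample input stands in for actual `--json` output:

```shell
# Stand-in for `praisonai-ts llm trace "Hello" --json` output.
sample='{"data":{"response":"hi"}}'

# Add a Unix-epoch ingest timestamp before shipping the record.
payload=$(printf '%s' "$sample" | jq '. + {ts: (now | floor)}')
echo "$payload"

# printf '%s' "$payload" | curl --fail --retry 3 -X POST \
#   -H "Content-Type: application/json" -d @- https://logs.example.com/ingest
```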

Exit Codes

Code   Description
0      Success
1      General error
2      Invalid arguments
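Scripts can branch on these codes with a `case` statement. A minimal sketch; `run_chat` below is a hypothetical stand-in for a real `praisonai-ts chat` invocation so the example is self-contained:

```shell
# Hypothetical stand-in for `praisonai-ts chat "$1"`; it exits with
# FAKE_EXIT (default 0). Replace with the real command in practice.
run_chat() { return "${FAKE_EXIT:-0}"; }

run_chat "Hello"
case $? in
  0) echo "ok" ;;
  2) echo "invalid arguments" >&2 ;;
  *) echo "general error" >&2 ;;
esac
```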

Best Practices

  1. Use consistent session IDs for conversation continuity
  2. Log run IDs for debugging and auditing
  3. Enable tracing in production for observability
  4. Use meaningful agent IDs for easier debugging
  5. Export traces to observability platforms
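Practices 1, 2, and 4 can be combined in a small wrapper. A minimal sketch: the session ID is derived deterministically from the user name (so the same user always reuses the same session), the run ID is logged to a file, and the actual `praisonai-ts` call is commented out since it needs a live CLI; `session_for_user` and the log format are illustrative choices, not part of the CLI:

```shell
# Map a user name to a stable session ID (practice 1): the same
# input always yields the same ID, so conversations continue.
session_for_user() {
  printf 'user-%s' "$(printf '%s' "$1" | cksum | cut -d' ' -f1)"
}

session_id=$(session_for_user "john")
agent_id="support-bot"          # meaningful agent ID (practice 4)
run_id="run-$(date +%s)-$$"     # unique run ID to log (practice 2)

# Record the run for later auditing.
echo "$(date -u +%FT%TZ) session=$session_id run=$run_id" >> runs.log

# praisonai-ts chat "Hello" --session "$session_id" \
#   --agent-id "$agent_id" --run-id "$run_id"
```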