Monitor agent execution, LLM calls, tool usage, and token costs in LangSmith.

Quick Start

1. Install Dependencies

pip install praisonaiagents praisonai-tools opentelemetry-sdk opentelemetry-exporter-otlp
2. Set Environment Variables

export LANGSMITH_API_KEY=lsv2_xxx
export LANGSMITH_PROJECT=my-project
3. Run Your Agent

from praisonai_tools.observability import obs
from praisonaiagents import Agent

obs.init(provider="langsmith")

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4o-mini",
)

response = agent.chat("What is AI?")
print(response)
That's it. With obs.init(provider="langsmith") called once at startup, every LLM call, tool call, and agent step is automatically traced to your LangSmith dashboard.

How It Works

| What Gets Traced | Details |
| --- | --- |
| Agent lifecycle | Start/end timing, agent name, role |
| LLM calls | Input messages, output, model, token usage |
| Tool calls | Tool name, arguments, results |
| Token usage | Prompt tokens, completion tokens, total |
| Errors | Stack traces, error messages |
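As a rough mental model, one traced run forms a nested set of spans mirroring the table above. The sketch below is illustrative only; the span names and field layout are hypothetical, not the exact LangSmith schema:

```python
# Hypothetical shape of a single traced agent run. Span names and fields
# mirror the "What Gets Traced" table, not the actual LangSmith schema.
trace = {
    "span": "agent:Assistant",          # agent lifecycle: name, timing
    "start": "2024-01-01T00:00:00Z",
    "end": "2024-01-01T00:00:02Z",
    "children": [
        {
            "span": "llm:gpt-4o-mini",  # LLM call: messages, output, usage
            "input": [{"role": "user", "content": "What is AI?"}],
            "usage": {"prompt_tokens": 12, "completion_tokens": 48, "total": 60},
        },
        {
            "span": "tool:web_search",  # tool call: name, arguments, result
            "arguments": {"query": "AI definition"},
            "result": "...",
        },
    ],
}

# Token usage rolls up from the LLM child spans.
llm_total = trace["children"][0]["usage"]["total"]
```

Errors, when they occur, attach to the span where they were raised, so a failing tool call shows up under its own `tool:` span rather than at the top level.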

Configuration Options

| Option | Environment Variable | Description |
| --- | --- | --- |
| api_key | LANGSMITH_API_KEY | Your LangSmith API key |
| project | LANGSMITH_PROJECT | Project name (default: "default") |
| endpoint | LANGSMITH_ENDPOINT | API endpoint (default: https://api.smith.langchain.com) |
| tracing | LANGSMITH_TRACING | Set to true to enable (auto-detected) |
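The environment variables in the table can also be set from Python before initializing observability, which is handy in notebooks or test harnesses. A minimal sketch (all values are placeholders):

```python
import os

# Placeholder values; in real deployments, load these from a secret
# manager or .env file rather than hard-coding them.
os.environ["LANGSMITH_API_KEY"] = "lsv2_xxx"
os.environ["LANGSMITH_PROJECT"] = "my-project"
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGSMITH_TRACING"] = "true"

# obs.init(provider="langsmith") would now pick these up.
```

Set them before calling obs.init(); values exported in the shell (as in the Quick Start) take the same effect.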


What You See in LangSmith

Traces appear in your LangSmith project as nested spans: an agent run at the top, with child spans for each LLM call and tool call, including token usage for every call.
Diagnostics & Verification

Check your setup with the built-in doctor:
from praisonai_tools.observability import obs

obs.init(provider="langsmith")
results = obs.doctor()
print(results)

Example output when everything is configured:

{
    "enabled": true,
    "provider": "langsmith",
    "connection_status": true,
    "connection_message": "LangSmith API key configured"
}
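One practical use of the doctor output is to fail fast at startup when tracing is expected but misconfigured. A sketch, assuming only the dict shape shown in the example output above:

```python
def check_observability(results: dict) -> None:
    """Raise if observability is enabled but the connection check failed.

    `results` is assumed to have the shape returned by obs.doctor()
    in the example above.
    """
    if results.get("enabled") and not results.get("connection_status"):
        raise RuntimeError(
            f"LangSmith tracing misconfigured: {results.get('connection_message')}"
        )

# Using the example output shown above, this passes silently:
check_observability({
    "enabled": True,
    "provider": "langsmith",
    "connection_status": True,
    "connection_message": "LangSmith API key configured",
})
```

In an app, you would call this once right after obs.init() so a missing or invalid API key surfaces immediately instead of silently dropping traces.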

PraisonAI Branding

Every agent and workflow span automatically includes PraisonAI branding in LangSmith metadata:
| Metadata Key | Value | Description |
| --- | --- | --- |
| praisonai.version | 0.2.20 | SDK version used |
| praisonai.framework | praisonai | Framework identifier |
Workflow spans also capture structured input (agent names, task descriptions) and output.

Best Practices

- Group traces by project. Set LANGSMITH_PROJECT or pass project_name to group traces by environment or feature:

  obs.init(provider="langsmith", project_name="production-chatbot")

- Prefer auto-instrumentation. It traces everything automatically; only use explicit obs.trace() when you need custom trace boundaries or additional metadata.

- Keep API keys out of code. Use .env files or your deployment platform's secret management:

  # .env
  LANGSMITH_API_KEY=lsv2_xxx
  LANGSMITH_PROJECT=my-project

- Watch token costs. LangSmith traces include token counts for every LLM call. Use this to identify expensive operations and optimize prompts.
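For example, token counts collected from traces can be ranked to find the most expensive calls. A sketch over hypothetical trace records (in practice these would come from your LangSmith project, not be hard-coded):

```python
# Hypothetical per-call token records, as captured in LangSmith traces.
calls = [
    {"name": "summarize", "prompt_tokens": 1200, "completion_tokens": 300},
    {"name": "classify", "prompt_tokens": 80, "completion_tokens": 10},
    {"name": "draft_email", "prompt_tokens": 400, "completion_tokens": 900},
]

def total_tokens(call: dict) -> int:
    """Total tokens for one LLM call: prompt plus completion."""
    return call["prompt_tokens"] + call["completion_tokens"]

# Rank calls by total token usage to spot optimization targets.
most_expensive = max(calls, key=total_tokens)
print(most_expensive["name"], total_tokens(most_expensive))  # → summarize 1500
```

The call names and fields here are made up for illustration; the point is that per-call token data is already in your traces, so cost analysis needs no extra instrumentation.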

See also:

- Observability Overview: all supported observability providers
- Langfuse: alternative open-source observability