Monitor agent execution, LLM calls, tool usage, and token costs in LangSmith.
## Quick Start

### Install Dependencies

```bash
pip install praisonaiagents praisonai-tools opentelemetry-sdk opentelemetry-exporter-otlp
```
### Set Environment Variables

```bash
export LANGSMITH_API_KEY=lsv2_xxx
export LANGSMITH_PROJECT=my-project
```
### Run Your Agent

```python
from praisonai_tools.observability import obs
from praisonaiagents import Agent

obs.init(provider="langsmith")

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4o-mini",
)

response = agent.chat("What is AI?")
print(response)
```
That's it: three lines of setup, and every LLM call, tool call, and agent step is automatically traced to your LangSmith dashboard.
## How It Works

| What Gets Traced | Details |
|---|---|
| Agent lifecycle | Start/end timing, agent name, role |
| LLM calls | Input messages, output, model, token usage |
| Tool calls | Tool name, arguments, results |
| Token usage | Prompt tokens, completion tokens, total |
| Errors | Stack traces, error messages |
## Configuration Options

| Option | Environment Variable | Description |
|---|---|---|
| `api_key` | `LANGSMITH_API_KEY` | Your LangSmith API key |
| `project` | `LANGSMITH_PROJECT` | Project name (default: `"default"`) |
| `endpoint` | `LANGSMITH_ENDPOINT` | API endpoint (default: `https://api.smith.langchain.com`) |
| `tracing` | `LANGSMITH_TRACING` | Set to `true` to enable (auto-detected) |
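Each option in the table can come from either an explicit argument or its environment variable. As a rough sketch of the usual resolution pattern (explicit argument wins over the environment, which wins over the default) — note this is an illustrative helper, not the library's actual implementation, and the resolution order is an assumption:

```python
import os

# Defaults taken from the table above. The precedence shown here
# (explicit keyword argument > environment variable > default) is an
# assumption for illustration, not the library's actual code.
DEFAULTS = {
    "project": ("LANGSMITH_PROJECT", "default"),
    "endpoint": ("LANGSMITH_ENDPOINT", "https://api.smith.langchain.com"),
}

def resolve_option(name, explicit=None):
    env_var, default = DEFAULTS[name]
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

os.environ["LANGSMITH_PROJECT"] = "from-env"
os.environ.pop("LANGSMITH_ENDPOINT", None)

print(resolve_option("project"))                      # from-env
print(resolve_option("project", explicit="my-team"))  # my-team
print(resolve_option("endpoint"))                     # default endpoint URL
```

This is why setting `LANGSMITH_PROJECT` in the environment and passing `project_name` to `obs.init()` can both work: the explicit argument simply takes priority.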
## Common Patterns

### Single Agent
```python
from praisonai_tools.observability import obs
from praisonaiagents import Agent

obs.init(provider="langsmith")

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4o-mini",
)

response = agent.chat("What is AI?")
print(response)
```
### Multi-Agent Team

```python
from praisonai_tools.observability import obs
from praisonaiagents import Agent, Task, PraisonAIAgents

obs.init(provider="langsmith", project_name="my-team")

researcher = Agent(
    name="Researcher",
    instructions="Search for information.",
    model="gpt-4o-mini",
)
writer = Agent(
    name="Writer",
    instructions="Write clear summaries.",
    model="gpt-4o-mini",
)

task1 = Task(description="Research AI trends", agent=researcher)
task2 = Task(description="Summarize findings", agent=writer)

agents = PraisonAIAgents(agents=[researcher, writer], tasks=[task1, task2])
agents.start()
```
### With Tools

```python
from praisonai_tools.observability import obs
from praisonaiagents import Agent, tool

obs.init(provider="langsmith")

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

agent = Agent(
    name="Researcher",
    instructions="Search and answer questions.",
    tools=[search_web],
    model="gpt-4o-mini",
)

response = agent.chat("Latest AI news")
print(response)
```
### Explicit Tracing

```python
from praisonai_tools.observability import obs
from praisonaiagents import Agent

obs.init(provider="langsmith", auto_instrument=False)

agent = Agent(
    name="Assistant",
    instructions="You are helpful.",
    model="gpt-4o-mini",
)

with obs.trace("chat-session"):
    response = agent.chat("What is AI?")
print(response)
```
## What You See in LangSmith

Each agent run appears as a nested trace: the agent lifecycle span at the top, with child spans for every LLM call and tool call, along with token usage, timing, and any errors.
## Diagnostics & Verification

Check your setup with the built-in doctor:

```python
from praisonai_tools.observability import obs

obs.init(provider="langsmith")
results = obs.doctor()
print(results)
```

Example output:

```json
{
  "enabled": true,
  "provider": "langsmith",
  "connection_status": true,
  "connection_message": "LangSmith API key configured"
}
```
The same checks are available from the command line:

```bash
# Health check
python -m praisonai_tools.observability.cli doctor

# Health check (JSON)
python -m praisonai_tools.observability.cli doctor --json

# Verify traces in LangSmith (requires LANGSMITH_API_KEY)
python -m praisonai_tools.observability.cli verify --project "My First App"

# Verify with JSON output
python -m praisonai_tools.observability.cli verify --project "My First App" --json
```
## PraisonAI Branding

Every agent and workflow span automatically includes PraisonAI branding in LangSmith metadata:

| Metadata Key | Value | Description |
|---|---|---|
| `praisonai.version` | `0.2.20` | SDK version used |
| `praisonai.framework` | `praisonai` | Framework identifier |
Workflow spans also capture structured input (agent names, task descriptions) and output.
## Best Practices

### Use project names to organize traces

Set `LANGSMITH_PROJECT` or pass `project_name` to group traces by environment or feature.

```python
obs.init(provider="langsmith", project_name="production-chatbot")
```
### Use auto-instrumentation for most cases

Auto-instrumentation traces everything automatically. Only use explicit `obs.trace()` when you need custom trace boundaries or additional metadata.
### Set environment variables in `.env` files

Keep API keys out of code. Use `.env` files or your deployment platform's secret management.

```bash
# .env
LANGSMITH_API_KEY=lsv2_xxx
LANGSMITH_PROJECT=my-project
```
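If your runtime doesn't load `.env` files automatically, a library such as `python-dotenv` handles it; to show the idea, here is a minimal stdlib-only sketch (the loader below is illustrative, not what any particular library does):

```python
import os

def load_env_file(path):
    """Minimal .env loader: KEY=value lines; blank lines and '#'
    comments are ignored. Existing environment variables win.
    (For real projects, prefer a library such as python-dotenv.)"""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo: write a throwaway .env file, load it, and read the key back.
with open("demo.env", "w") as f:
    f.write("# demo file\nLANGSMITH_PROJECT_DEMO=my-project\n")
load_env_file("demo.env")
print(os.environ["LANGSMITH_PROJECT_DEMO"])  # my-project
os.remove("demo.env")
```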
### Monitor token usage for cost control

LangSmith traces include token counts for every LLM call. Use this to identify expensive operations and optimize prompts.
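LangSmith's UI surfaces these counts directly; if you export trace data for your own analysis, finding the expensive operations is a small aggregation. A sketch over made-up records (the record shape here is invented for illustration and will not match a real LangSmith export):

```python
# Aggregate token usage per operation from exported trace records.
# The record shape is invented for illustration only.
records = [
    {"name": "research", "prompt_tokens": 1200, "completion_tokens": 300},
    {"name": "summarize", "prompt_tokens": 400, "completion_tokens": 150},
    {"name": "research", "prompt_tokens": 900, "completion_tokens": 250},
]

totals = {}
for r in records:
    totals[r["name"]] = (
        totals.get(r["name"], 0) + r["prompt_tokens"] + r["completion_tokens"]
    )

# Rank operations by total tokens to find the most expensive ones.
for name, tokens in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {tokens} tokens")  # research first: 2650 vs 550
```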
## Related Pages

- **Observability Overview**: All supported observability providers
- **Langfuse**: Alternative open-source observability