

Langfuse provides observability and evaluation tools for LLM applications with automatic tracing of all agent conversations.

Path Comparison

|  | Path A — obs.langfuse() | Path B — praisonai --observe langfuse |
| --- | --- | --- |
| Usage | Python script | CLI flag (also PRAISONAI_OBSERVE=langfuse) |
| Mechanism | Instruments OpenAI client globally via langfuse.openai drop-in | LangfuseSink + ContextTraceEmitter bridge |
| Span Coverage | Per-LLM-call generations (input/output, tokens, model) | Full lifecycle: agent_start, agent_end, tool_call_*, llm_* |
| Manual flush needed? | Yes — provider.flush() | No — atexit registers it |
| Best for | Programmatic agents, any Python flow | YAML / CLI workflows, multi-agent pipelines |

Quick Start

1. Install and Enable

# Install required packages
# pip install praisonaiagents langfuse

import os
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-xxx"
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-xxx"

from praisonaiagents.obs import obs
from praisonaiagents import Agent

# Initialize Langfuse tracing
provider = obs.langfuse()

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4o-mini",
)

result = agent.start("What is the capital of France?")
print(result)

# Always flush before exit
provider.flush()
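If the script can exit early (an unhandled exception, Ctrl-C), one option is to register the flush up front so it runs however the interpreter shuts down; a minimal sketch reusing the same provider object:

import atexit

from praisonaiagents.obs import obs

provider = obs.langfuse()
# Runs provider.flush() at interpreter shutdown, even after an unhandled exception.
atexit.register(provider.flush)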
2. Auto-Detection

from praisonaiagents.obs import obs
from praisonaiagents import Agent

# Auto-detects Langfuse from environment variables
provider = obs.auto()

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4o-mini",
)

result = agent.start("Hello!")

How Path A Works — obs.langfuse()

| Component | Purpose |
| --- | --- |
| obs.langfuse() | Instruments OpenAI client globally for automatic tracing |
| Agent | Makes LLM calls that are automatically traced |
| Langfuse SDK | Captures traces via langfuse.openai drop-in |

Environment Variables

| Variable | Required | Description |
| --- | --- | --- |
| LANGFUSE_PUBLIC_KEY | Yes | Your Langfuse public key (pk-lf-...) |
| LANGFUSE_SECRET_KEY | Yes | Your Langfuse secret key (sk-lf-...) |
| LANGFUSE_BASE_URL | For self-hosted | Base URL, e.g. http://localhost:3000 |
| LANGFUSE_HOST | For compatibility | Same as LANGFUSE_BASE_URL |
# Cloud Langfuse
export LANGFUSE_PUBLIC_KEY=pk-lf-xxx
export LANGFUSE_SECRET_KEY=sk-lf-xxx

# Self-hosted Langfuse
export LANGFUSE_BASE_URL=http://localhost:3000
export LANGFUSE_HOST=http://localhost:3000
As of PraisonAI’s wrapper-layer refactor, OTEL_SDK_DISABLED and EC_TELEMETRY are set only on the first observability use, not at import time, and user-set values are preserved (via setdefault). If LANGFUSE_PUBLIC_KEY is set or ~/.praisonai/langfuse.env exists, OTEL_SDK_DISABLED=false is set explicitly so that Langfuse v4 can use OpenTelemetry internally.
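The setdefault behaviour can be illustrated with plain os.environ (a standalone sketch, not PraisonAI internals):

import os

# A user explicitly disables OTel before any observability call.
os.environ["OTEL_SDK_DISABLED"] = "true"

# setdefault only writes when the key is absent, so the explicit value above survives.
os.environ.setdefault("OTEL_SDK_DISABLED", "false")
print(os.environ["OTEL_SDK_DISABLED"])  # -> "true"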

CLI Observability — --observe langfuse

Enable full agent lifecycle tracing with a single CLI flag:
# Basic usage
praisonai --observe langfuse run agents.yaml

# With environment variable
PRAISONAI_OBSERVE=langfuse praisonai run agents.yaml

# Multi-agent workflows
praisonai --observe langfuse agents "Research AI trends" "Write a summary"

What Gets Traced

The CLI --observe langfuse captures:
  • Agent lifecycle: agent_start / agent_end spans
  • LLM interactions: llm_request / llm_response with readable content
  • Tool usage: tool_call_start / tool_call_end with args and results
  • Automatic flush: No manual provider.flush() required
As of PR #1461, atexit auto-closes the sink — no manual flush required for CLI runs. See Custom Tracing for the underlying ContextTraceSinkProtocol.

Programmatic — LangfuseSink + Context Bridge

For full control over Langfuse tracing in Python code:
from praisonaiagents import Agent
from praisonaiagents.trace.protocol import TraceEmitter, set_default_emitter
from praisonaiagents.trace.context_events import ContextTraceEmitter, set_context_emitter
from praisonai.observability import LangfuseSink, LangfuseSinkConfig
import atexit

sink = LangfuseSink(LangfuseSinkConfig())  # reads env vars

# Action-level events (RouterAgent / PlanningAgent)
set_default_emitter(TraceEmitter(sink=sink, enabled=True))

# Context-level events (Agent.start lifecycle, tool calls, LLM I/O) — required for full coverage
set_context_emitter(ContextTraceEmitter(sink=sink.context_sink(), enabled=True))

atexit.register(sink.close)

agent = Agent(name="Writer", instructions="Write a haiku about code.")
agent.start("Write a haiku about code.")
The set_context_emitter(... sink=sink.context_sink() ...) call is required for typical single-agent flows. Without it, only RouterAgent token-usage and PlanningAgent.plan_created events appear in Langfuse — Agent.start() lifecycle is silent.

LangfuseSinkConfig Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| public_key | str | "" (then LANGFUSE_PUBLIC_KEY) | Langfuse public key (pk-lf-...) |
| secret_key | str | "" (then LANGFUSE_SECRET_KEY) | Langfuse secret key (sk-lf-...) |
| host | str | "" (then LANGFUSE_HOST → LANGFUSE_BASE_URL → https://cloud.langfuse.com) | Langfuse server URL |
| flush_at | int | 20 | Number of events that triggers a flush |
| flush_interval | float | 10.0 | Seconds between background flushes |
| enabled | bool | True | Master switch |
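As an example, a sink pointed at a self-hosted instance with a smaller flush batch could be configured as below (a sketch assuming LangfuseSinkConfig accepts these options as keyword arguments; all values are placeholders):

from praisonai.observability import LangfuseSink, LangfuseSinkConfig

config = LangfuseSinkConfig(
    public_key="pk-lf-xxx",        # falls back to LANGFUSE_PUBLIC_KEY if left ""
    secret_key="sk-lf-xxx",        # falls back to LANGFUSE_SECRET_KEY if left ""
    host="http://localhost:3000",  # self-hosted Langfuse
    flush_at=5,                    # flush after every 5 buffered events
    flush_interval=5.0,            # background flush every 5 seconds
)
sink = LangfuseSink(config)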

CLI Server Commands

# Start local Langfuse server
praisonai langfuse start

# Custom port and credentials
praisonai langfuse start --port 8080 --email admin@example.com

# Check status
praisonai langfuse status

# Stop server
praisonai langfuse stop

Common Patterns

Multi-Agent Tracing

All agents in a session share the same Langfuse context automatically:
from praisonaiagents.obs import obs
from praisonaiagents import Agent, Task, PraisonAIAgents

provider = obs.langfuse()

researcher = Agent(
    name="Researcher", 
    role="Research specialist",
    model="gpt-4o-mini"
)

writer = Agent(
    name="Writer", 
    role="Content writer",
    model="gpt-4o-mini"
)

agents = PraisonAIAgents(
    agents=[researcher, writer],
    tasks=[
        Task(description="Research AI trends"),
        Task(description="Write a summary")
    ],
)

agents.start()
provider.flush()

Connection Verification

from praisonaiagents.obs import obs

provider = obs.langfuse()
ok, message = provider.check_connection()
print(f"Connected: {ok} ({message})")
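To fail fast before an expensive run, the same tuple can gate execution:

from praisonaiagents.obs import obs

provider = obs.langfuse()
ok, message = provider.check_connection()
if not ok:
    # Surface the problem early instead of silently dropping traces.
    raise RuntimeError(f"Langfuse connection failed: {message}")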

Configuration File Usage

Credentials from ~/.praisonai/langfuse.env are auto-loaded:
from praisonaiagents.obs import obs

# Automatically loads from config file if env vars not set
provider = obs.auto()

if provider:
    print(f"Provider active: {type(provider).__name__}")
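If you need to create ~/.praisonai/langfuse.env by hand, a minimal sketch, assuming the loader reads standard KEY=value lines named after the environment variables above:

from pathlib import Path

config_dir = Path.home() / ".praisonai"
config_dir.mkdir(parents=True, exist_ok=True)

# Assumed .env format; keys mirror the LANGFUSE_* environment variables documented above.
(config_dir / "langfuse.env").write_text(
    "LANGFUSE_PUBLIC_KEY=pk-lf-xxx\n"
    "LANGFUSE_SECRET_KEY=sk-lf-xxx\n"
    "LANGFUSE_BASE_URL=http://localhost:3000\n"
)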

Best Practices

For obs.langfuse() (Path A), call provider.flush() to ensure all traces are sent:
provider = obs.langfuse()
# ... run agents ...
provider.flush()  # Critical for trace delivery
Path B (--observe langfuse) auto-registers the sink's close hook via atexit since PR #1461.
Prefer obs.auto() for environment-based configuration:
provider = obs.auto()  # Detects Langfuse automatically
As of PR #1461, llm_response spans contain the assistant message text (or [tool_calls: name1, name2] summary), not the raw ChatCompletion(...) repr. The Langfuse “Output” panel is now human-readable:
  • Before: ChatCompletion(id='chatcmpl-...', choices=[Choice(...)], ...)
  • After: "The capital of France is Paris." or [tool_calls: search_web, calculator]
For programmatic usage, include the context emitter for complete lifecycle tracing:
from praisonaiagents.trace.context_events import ContextTraceEmitter, set_context_emitter
set_context_emitter(ContextTraceEmitter(sink=sink.context_sink(), enabled=True))
Without this, only RouterAgent and PlanningAgent events appear — Agent.start() flows are silent.
Use the CLI for local development:
  1. praisonai langfuse start
  2. praisonai langfuse config
  3. Test with praisonai langfuse test

Observability Overview — Compare observability providers
Agent Configuration — Configure agent settings