Langextract transforms PraisonAI agent runs into self-contained interactive HTML visualizations, grounding extractions in the original input text for deeper analysis.

Quick Start

1. Install langextract

Install PraisonAI with langextract support:
pip install 'praisonai[langextract]'
2. Programmatic Usage

Set up langextract observability for any agent run:
from praisonaiagents import Agent
from praisonaiagents.trace.protocol import TraceEmitter, set_default_emitter
from praisonai.observability import LangextractSink, LangextractSinkConfig

# Configure langextract sink
sink = LangextractSink(LangextractSinkConfig(
    output_path="trace.html",
    auto_open=True
))

# Set up trace emitter
emitter = TraceEmitter(sink=sink, enabled=True)
set_default_emitter(emitter)

# Run your agent
agent = Agent(
    name="Writer",
    instructions="Write a haiku about code."
)
result = agent.start("Write a haiku about code.")

# Close sink to render the trace
sink.close()  # Creates trace.jsonl and trace.html
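
Once close() runs, trace.jsonl holds one JSON object per line (langextract's annotated-documents format). A minimal sketch of reading it back with the standard library; the field name shown in the comment follows the document_id option described below, but the full record layout is not guaranteed here:

```python
import json

def read_annotated_documents(path):
    """Yield one parsed JSON object per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Each record carries a document_id ("praisonai-run" by default)
# alongside the extractions captured during the agent run.
```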
3. CLI Usage

Use langextract with any PraisonAI workflow:
# Render a YAML workflow with langextract
praisonai langextract render workflow.yaml -o trace.html

# View an existing JSONL trace
praisonai langextract view trace.jsonl -o trace.html

# Instrument any praisonai run
praisonai --observe langextract agents.yaml

How It Works

| Event Type  | Extraction Class | Grounded                   | Description           |
|-------------|------------------|----------------------------|-----------------------|
| AGENT_START | agent_run        | First 200 chars of input   | Agent run initiation  |
| TOOL_START  | tool_call        | No (ungrounded)            | Tool execution start  |
| TOOL_END    | tool_result      | No                         | Tool execution result |
| OUTPUT      | final_output     | First 1000 chars of output | Agent final output    |
| ERROR       | error            | No                         | Error events          |
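
The grounding rule in the table reduces to: grounded events keep a prefix of the source text, ungrounded events keep none. A hypothetical sketch of that truncation logic; the character limits come from the table, but the function itself is illustrative, not the sink's actual implementation:

```python
from typing import Optional

# Per the table above: which events are grounded, and how much text they keep.
GROUNDING_LIMITS = {
    "AGENT_START": 200,   # first 200 chars of input
    "OUTPUT": 1000,       # first 1000 chars of output
}

def grounded_span(event_type: str, text: str) -> Optional[str]:
    """Return the grounded text prefix for an event, or None if ungrounded."""
    limit = GROUNDING_LIMITS.get(event_type)
    if limit is None:
        return None  # TOOL_START, TOOL_END, and ERROR are ungrounded
    return text[:limit]
```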

Configuration Options

The LangextractSinkConfig class provides comprehensive configuration:
| Option              | Type          | Default                | Description |
|---------------------|---------------|------------------------|-------------|
| output_path         | str           | "praisonai-trace.html" | HTML file written on close() |
| jsonl_path          | Optional[str] | None                   | Annotated-documents JSONL path (derived from output_path if None) |
| document_id         | str           | "praisonai-run"        | Document ID in the JSONL |
| auto_open           | bool          | False                  | Open the HTML in a browser after render |
| include_llm_content | bool          | True                   | Include response text in attributes |
| include_tool_args   | bool          | True                   | Include tool args in attributes |
| enabled             | bool          | True                   | Master switch |
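
When jsonl_path is None, the JSONL path is derived from output_path. A plausible sketch of that derivation, assuming it simply swaps the file extension; that matches the trace.html / trace.jsonl pairing in the Quick Start, but it is not a documented guarantee:

```python
from pathlib import Path
from typing import Optional

def derive_jsonl_path(output_path: str, jsonl_path: Optional[str] = None) -> str:
    """Mirror the documented default: fall back to output_path with .jsonl."""
    if jsonl_path is not None:
        return jsonl_path
    return str(Path(output_path).with_suffix(".jsonl"))
```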

CLI Reference

render command

Render a YAML workflow with langextract observability:
praisonai langextract render workflow.yaml [OPTIONS]
Options:
  • -o, --output FILE: Output HTML file path (default: workflow.html)
  • --no-open: Don’t open HTML in browser automatically
  • --api-url URL: API URL if using remote API

view command

Render an existing annotated-documents JSONL to HTML:
praisonai langextract view trace.jsonl [OPTIONS]
Options:
  • -o, --output FILE: Output HTML file path (default: trace.html)
  • --no-open: Don’t open HTML in browser automatically

--observe flag

Instrument any PraisonAI command with langextract:
praisonai --observe langextract <command>

Common Patterns

from praisonaiagents import Agent
from praisonaiagents.trace.protocol import TraceEmitter, set_default_emitter
from praisonai.observability import LangextractSink, LangextractSinkConfig

sink = LangextractSink(LangextractSinkConfig(
    output_path="analysis.html",
    document_id="data-analysis",
    include_tool_args=False,  # Exclude tool args for cleaner view
    auto_open=False  # Don't auto-open in CI
))

set_default_emitter(TraceEmitter(sink=sink, enabled=True))

agent = Agent(name="Analyst", instructions="Analyze the data.")
result = agent.start("Analyze quarterly sales data")
sink.close()

Troubleshooting

Ensure you call sink.close() to trigger the rendering process. The CLI commands handle this automatically, but programmatic usage requires explicit closure.
# ✅ Correct
sink = LangextractSink(config)
set_default_emitter(TraceEmitter(sink=sink))
agent.start("task")
sink.close()  # Required!
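
If an exception can interrupt the run, contextlib.closing guarantees close() still fires. The sketch below uses a stand-in sink so it runs standalone; any object exposing close() works, so with the real LangextractSink the with-block is identical:

```python
from contextlib import closing

class FakeSink:
    """Stand-in for LangextractSink: anything with a close() method works."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

sink = FakeSink()
with closing(sink):
    pass  # agent.start("task") would run here
assert sink.closed  # close() ran, even if the body had raised
```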
Install langextract with the PraisonAI extra:
pip install 'praisonai[langextract]'
This installs both PraisonAI and the required langextract dependency.
Verify that your agent actually emits trace events; events are only recorded when a trace emitter has been configured, as shown in the examples above.
Check the auto_open configuration:
config = LangextractSinkConfig(auto_open=True)
For CLI commands, remove the --no-open flag.

Best Practices

Disable automatic browser opening in automated environments:
sink = LangextractSink(LangextractSinkConfig(
    output_path="ci-trace.html",
    auto_open=False
))
Restore the previous emitter after your run to avoid affecting other code:
from praisonaiagents.trace.protocol import get_default_emitter

# Save current emitter
previous_emitter = get_default_emitter()

# Set up langextract
sink = LangextractSink(config)
set_default_emitter(TraceEmitter(sink=sink))

try:
    # Your agent work
    agent.start("task")
finally:
    sink.close()
    # Restore previous emitter
    set_default_emitter(previous_emitter)
Set meaningful document IDs for easier trace identification:
from datetime import datetime

config = LangextractSinkConfig(
    document_id=f"analysis-{datetime.now().isoformat()}"
)
Langextract rendering failures do not break agent execution; check the logs for details:
import logging
logging.basicConfig(level=logging.DEBUG)
# Rendering errors will appear in logs without stopping your agent

See also:
  • Observability Overview: compare all observability providers
  • Langfuse Integration: a hosted observability alternative