Output configuration controls how agents display their responses: silent for programmatic use, verbose for interactive sessions, or streaming for real-time feedback.

Quick Start

1. Default (Silent)

```python
from praisonaiagents import Agent

# Default is silent: no output overhead
agent = Agent(
    name="API Agent",
    instructions="You process data"
)

result = agent.chat("Analyze this data")  # Returns result, no display
```
2. Verbose Mode

```python
agent = Agent(
    name="Interactive Agent",
    instructions="You help users",
    output="verbose"  # Rich panels and formatting
)

agent.start("Help me with Python")  # Shows formatted output
```

Output Presets

| Preset    | Display                   | Use Case                   |
|-----------|---------------------------|----------------------------|
| `silent`  | None                      | Programmatic use, fastest  |
| `actions` | Tool calls + final output | Debugging                  |
| `verbose` | Rich panels               | Interactive sessions       |
| `json`    | JSONL events              | Piping to other tools      |
```python
# Preset examples
agent = Agent(instructions="...", output="silent")   # Default
agent = Agent(instructions="...", output="actions")  # Debug
agent = Agent(instructions="...", output="verbose")  # Interactive
agent = Agent(instructions="...", output="json")     # Piping
```
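The string presets can be read as shorthand for bundles of the OutputConfig flags covered in the next section. The mapping below is an illustrative guess inferred from the preset table above, not the library's documented internals:

```python
# Hypothetical preset-to-flag mapping, inferred from the preset table.
# Illustrative only; the library may bundle the flags differently.
PRESETS = {
    "silent":  {},                                   # no display at all
    "actions": {"actions_trace": True},              # tool calls + final output
    "verbose": {"verbose": True, "markdown": True},  # rich panels
    "json":    {"json_output": True},                # JSONL events
}
```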

Configuration Options

```python
from praisonaiagents import OutputConfig

config = OutputConfig(
    verbose=True,           # Enable verbose output
    markdown=True,          # Format as markdown
    stream=True,            # Enable streaming
    metrics=True,           # Show token metrics
    reasoning_steps=True,   # Show reasoning
    actions_trace=False,    # Tool call trace
    json_output=False,      # JSONL output
    simple_output=False,    # Plain text only
    show_parameters=False,  # Show LLM parameters (debug)
    status_trace=False,     # Clean inline status updates
)
```
| Option            | Type   | Default | Description                  |
|-------------------|--------|---------|------------------------------|
| `verbose`         | `bool` | `False` | Enable verbose output        |
| `markdown`        | `bool` | `False` | Format as markdown           |
| `stream`          | `bool` | `False` | Enable streaming             |
| `metrics`         | `bool` | `False` | Show token metrics           |
| `reasoning_steps` | `bool` | `False` | Show reasoning process       |
| `actions_trace`   | `bool` | `False` | Show tool calls              |
| `json_output`     | `bool` | `False` | Output JSONL events          |
| `simple_output`   | `bool` | `False` | Plain text without panels    |
| `show_parameters` | `bool` | `False` | Show LLM parameters (debug)  |
| `status_trace`    | `bool` | `False` | Clean inline status updates  |

Streaming

Streaming displays output in real time as the agent generates it:
```python
agent = Agent(
    instructions="You write stories",
    output=OutputConfig(stream=True)
)

# Streams the response in real time
agent.start("Write a short story about AI")
```

Verbose vs Silent

Silent (Default)

```python
# Fastest: no display overhead
agent = Agent(instructions="...")
result = agent.chat("Query")  # Just returns the result
```

Verbose

```python
# Rich formatted output
agent = Agent(instructions="...", output="verbose")
agent.start("Query")  # Shows panels and formatting
```

JSON Output

For piping to other tools:
```python
agent = Agent(
    instructions="You analyze data",
    output=OutputConfig(json_output=True)
)

# Outputs JSONL events to stderr:
# {"event": "llm_start", "model": "gpt-4o", ...}
# {"event": "llm_end", "tokens": 150, ...}
# {"event": "response", "content": "...", ...}
```

Metrics Display

Show token usage and timing:
```python
agent = Agent(
    instructions="You help with tasks",
    output=OutputConfig(
        verbose=True,
        metrics=True,  # Show token counts
    )
)

# Output includes:
# Tokens: 150 input, 200 output
# Time: 1.2s
```

Method-Specific Output

Different methods have different default behaviors:
| Method          | Default Output   | Override            |
|-----------------|------------------|---------------------|
| `agent.chat()`  | Silent           | Use `output=` param |
| `agent.start()` | Verbose + Stream | Use `output=` param |
| `agent.run()`   | Silent           | Use `output=` param |
```python
# chat() is silent by default
result = agent.chat("Query")

# start() is verbose by default
agent.start("Query")

# Override defaults
agent.chat("Query", stream=True)  # Force streaming
```

Best Practices

- Default silent mode adds no display overhead, making it ideal for programmatic use.
- Use verbose mode to see exactly what the agent is doing.
- Users prefer seeing progress, so enable streaming in interactive apps (see the sketch below).
- JSONL output integrates well with log aggregators and pipelines.
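Putting these together, an interactive app might combine streaming, rich formatting, and metrics using only the options documented above; a minimal sketch:

```python
from praisonaiagents import Agent, OutputConfig

# Interactive-friendly setup: streamed, markdown-formatted output
# with token metrics, built from the documented OutputConfig options.
agent = Agent(
    instructions="You help users interactively",
    output=OutputConfig(
        verbose=True,
        markdown=True,
        stream=True,
        metrics=True,
    ),
)

agent.start("Walk me through setting up a Python project")
```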