Control how agents display output, from silent mode for programmatic use to verbose mode with rich formatting.

Quick Start

1. Default (Silent Mode)

Silent mode is the default, making it well suited to programmatic use:
from praisonaiagents import Agent

# No output overhead - fastest performance
agent = Agent(
    name="Silent Agent",
    instructions="Work quietly"
)
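
Running the agent returns the response without printing anything to the console. A minimal sketch, assuming the agent is invoked with start() and that start() returns the response text:

# Usage sketch (assumption: start() runs the agent and returns the response string)
response = agent.start("Summarize the quarterly report")

# Nothing is written to the console in silent mode; the response is simply returned
print(len(response))
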
2. With Presets

Use string presets for common configurations:
from praisonaiagents import Agent

# Actions mode - shows tool calls
agent = Agent(
    name="Traced Agent",
    instructions="Show what I do",
    output="actions"
)

# Verbose mode - rich panels
agent = Agent(
    name="Verbose Agent",
    instructions="Show everything",
    output="verbose"
)
3. With Configuration

Pass an OutputConfig for fine-grained control:
from praisonaiagents import Agent
from praisonaiagents.config import OutputConfig

agent = Agent(
    name="Custom Output Agent",
    instructions="Custom output settings",
    output=OutputConfig(
        verbose=True,
        markdown=True,
        stream=True,
        metrics=True
    )
)

Configuration Options

from praisonaiagents.config import OutputConfig

config = OutputConfig(
    # Verbosity
    verbose=False,
    
    # Formatting
    markdown=False,
    
    # Streaming
    stream=False,
    
    # Metrics display
    metrics=False,
    
    # Show reasoning steps
    reasoning_steps=False,
    
    # Output style
    style=None,
    
    # Actions trace mode
    actions_trace=False,
    
    # JSON output mode
    json_output=False,
    
    # Simple output (no panels)
    simple_output=False,
    
    # Show LLM parameters (debug)
    show_parameters=False,
    
    # Status trace mode
    status_trace=False,
    
    # Save response to file
    output_file=None,
    
    # Response format template
    template=None
)
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| verbose | bool | False | Enable verbose output |
| markdown | bool | False | Format output as markdown |
| stream | bool | False | Stream output tokens |
| metrics | bool | False | Show performance metrics |
| reasoning_steps | bool | False | Display reasoning process |
| style | Any \| None | None | Custom output styling |
| actions_trace | bool | False | Show tool calls and lifecycle |
| json_output | bool | False | Emit JSONL events |
| simple_output | bool | False | Plain text without panels |
| show_parameters | bool | False | Show LLM parameters (debug) |
| status_trace | bool | False | Inline status updates |
| output_file | str \| None | None | Save response to file |
| template | str \| None | None | Response format template |

Output Presets

| Preset | Description |
| --- | --- |
| "silent" | No output (default, fastest) |
| "minimal" | Basic output only |
| "normal" | Standard output |
| "verbose" | Detailed with rich panels |
| "debug" | All information including parameters |

Common Patterns

Pattern 1: Streaming Chat

from praisonaiagents import Agent
from praisonaiagents.config import OutputConfig

agent = Agent(
    name="Chat Agent",
    instructions="Interactive chat",
    output=OutputConfig(
        stream=True,
        markdown=True,
        simple_output=True
    )
)
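
One way to drive this agent interactively is a simple chat loop. This is a sketch that assumes start() yields text chunks when stream=True; adjust it to the actual return type in your version:

# Hypothetical chat loop (assumption: start() yields chunks when stream=True)
while True:
    user_input = input("You: ")
    if user_input.lower() in ("exit", "quit"):
        break
    for chunk in agent.start(user_input):
        print(chunk, end="", flush=True)
    print()
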

Pattern 2: Save to File

from praisonaiagents import Agent
from praisonaiagents.config import OutputConfig

agent = Agent(
    name="Writer Agent",
    instructions="Generate content",
    output=OutputConfig(
        output_file="output.md",
        template="# {title}\n\n{content}"
    )
)
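
After a run, the saved file can be read back. A sketch assuming the agent is run with start() and that output_file is written to the current working directory:

# Run the agent, then read the saved response (assumption: start() triggers the run)
agent.start("Write a short article about solar energy")

with open("output.md") as f:
    print(f.read())  # formatted with the "# {title}\n\n{content}" template
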

Pattern 3: JSON Pipeline

from praisonaiagents import Agent
from praisonaiagents.config import OutputConfig

agent = Agent(
    name="Pipeline Agent",
    instructions="Emit structured events",
    output=OutputConfig(json_output=True)
)
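
JSONL output is designed to be consumed line by line by another process. The following is a hypothetical downstream consumer, not part of the library, that parses events piped to its stdin; the event fields shown are assumptions and depend on the actual JSONL schema:

import json
import sys

# Read one JSON object per line and print a compact summary of each event
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    event = json.loads(line)
    print(event.get("type"), event)  # the "type" field is an assumption
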

Best Practices

Silent mode has zero output overhead, making it ideal for programmatic use.
Actions trace shows tool calls and agent lifecycle without full verbosity.
Streaming improves perceived responsiveness for chat interfaces.