Output Styles
Configure how agents format their responses. Control verbosity, format, tone, and length to match your application’s needs.
Output Presets
The output parameter controls what the agent displays during execution:
from praisonaiagents import Agent
# Clean inline status (NEW)
agent = Agent(instructions="...", output="status")
# Full verbose output with panels
agent = Agent(instructions="...", output="verbose")
# Silent mode (SDK default) - no output, just returns result
agent = Agent(instructions="...", output="silent")
Preset Reference
| Preset | Description | Use Case |
|---|---|---|
| silent | Nothing (default for SDK) | SDK/programmatic use |
| editor | Step 1: 📄 Creating file → ✓ Done | CLI default, beginners |
| status | ▸ AI → thinking/responding + inline tools | Simple CLI output |
| trace | Status with timestamps | Progress monitoring |
| debug | trace + metrics (no boxes) | Developer debugging |
| verbose | Task + Tools + Response panels | Interactive use |
| stream | Real-time token streaming | Chat interfaces |
| json | JSONL events | Scripting/parsing |
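The json preset emits one JSON object per line (JSONL), which is easy to consume from a script. The sketch below shows the general pattern with the standard library; the event field names used here (type, data) are illustrative assumptions, not the library's documented schema:

```python
import json

# Illustrative JSONL event stream; the field names ("type", "data")
# are assumptions for this sketch, not the documented event schema.
raw = "\n".join([
    '{"type": "thinking"}',
    '{"type": "tool_call", "data": {"name": "get_weather", "args": {"city": "Tokyo"}}}',
    '{"type": "response", "data": {"text": "The weather in Tokyo is sunny."}}',
])

# One JSON object per line makes the stream trivial to parse and filter
events = [json.loads(line) for line in raw.splitlines() if line.strip()]
tool_calls = [e for e in events if e["type"] == "tool_call"]
```

Because each line is independent, events can be processed as they arrive rather than after the run completes.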
Editor Output Example (CLI Default)
The editor preset is designed for beginners and non-programmers. It shows numbered steps with emoji icons and human-readable tool names:
agent = Agent(
instructions="You are helpful",
tools=[list_files, execute_command],
output="editor" # Beginner-friendly numbered steps
)
result = agent.start("List files in /tmp and get system info")
Output:
▸ Thinking...
Step 1: 📂 Listing files: /tmp
✓ Done
Step 2: ⚡ Running command: uname -a
✓ Done
▸ Responding...
The files in /tmp are: file1.txt, file2.py...
System: Darwin Mac.lan 25.3.0...
Completed
- Duration: 5.2s
- Blocks: 2
The editor preset is the default for CLI (praisonai "prompt"). It provides the most user-friendly output for interactive use.
Status Output Example
agent = Agent(
instructions="You are helpful",
tools=[get_weather],
output="status" # Clean inline status
)
result = agent.start("What is the weather in Tokyo?")
Output:
▸ AI → thinking...
│ → get_weather(city="Tokyo") → "sunny in Tokyo"
▸ AI → responding...
──────────────────────────────────────────────────
Final Output:
The weather in Tokyo is sunny.
Trace Output Example (with timestamps)
agent = Agent(
instructions="You are helpful",
tools=[get_weather],
output="trace" # Full trace with timestamps
)
Output:
[13:47:33.123] ▸ AI → thinking...
[13:47:33.456] │ → get_weather(city="Tokyo") → "sunny" [0.2s] ✓
[13:47:33.789] ▸ AI → responding...
──────────────────────────────────────────────────
Final Output:
The weather in Tokyo is sunny.
Backward Compatible Aliases
| Alias | Maps To |
|---|---|
| plain, minimal | silent |
| normal | verbose |
| text, actions | status |
Quick Start
Agent-Centric Usage
from praisonaiagents import Agent
from praisonaiagents.output import OutputStyle
# Agent with concise output style
agent = Agent(
name="Formatter",
instructions="You are a helpful assistant.",
output_style=OutputStyle.concise() # Minimal, direct responses
)
agent.start("Explain machine learning in simple terms")
# Available styles: concise(), detailed(), technical(), conversational(), structured()
Style Presets
Pre-configured styles for common use cases:
from praisonaiagents.output import OutputStyle
# Available presets
concise = OutputStyle.concise() # Minimal, direct
detailed = OutputStyle.detailed() # Verbose, thorough
technical = OutputStyle.technical() # Developer-focused
conversational = OutputStyle.conversational() # Friendly tone
structured = OutputStyle.structured() # Organized with headers
minimal = OutputStyle.minimal() # Bare minimum
Custom Styles
from praisonaiagents import Agent
from praisonaiagents.output import OutputStyle
style = OutputStyle(
name="custom",
verbosity="normal", # minimal, normal, verbose
format="markdown", # markdown, plain, json
tone="professional", # professional, friendly, technical
max_length=1000, # Character limit
include_examples=True,
include_code_blocks=True
)
agent = Agent(
name="CustomAgent",
instructions="You are a helpful assistant.",
output_style=style
)
CLI Usage
praisonai output status # Show current style
praisonai output set concise # Set output style
Low-level API Reference
OutputStyle Direct Usage
from praisonaiagents.output import OutputStyle, OutputFormatter
# Use a preset style
style = OutputStyle.concise()
formatter = OutputFormatter(style)
# Format output
text = "# Hello\n\nThis is **bold** text."
plain = formatter.format(text)
print(plain) # "Hello\n\nThis is bold text."
from praisonaiagents.output import OutputStyle, OutputFormatter
# Markdown to plain text
plain_style = OutputStyle(format="plain")
formatter = OutputFormatter(plain_style)
markdown = "# Title\n\n**Bold** and *italic*"
plain = formatter.format(markdown)
# "Title\n\nBold and italic"
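The stripping above can be sketched with standard-library regexes. This is a simplified stand-in for the formatter, not its actual implementation, and it assumes only headings and emphasis markers need removing:

```python
import re

def strip_markdown(text: str) -> str:
    # Drop ATX heading markers ("# ", "## ", ...) at the start of a line
    text = re.sub(r"^#{1,6}\s+", "", text, flags=re.MULTILINE)
    # Unwrap bold (**x**) before italic (*x*) so the markers match correctly
    text = re.sub(r"\*\*(.+?)\*\*", r"\1", text)
    text = re.sub(r"\*(.+?)\*", r"\1", text)
    return text

result = strip_markdown("# Title\n\n**Bold** and *italic*")
# "Title\n\nBold and italic"
```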
Length Control
style = OutputStyle(max_length=100)
formatter = OutputFormatter(style)
long_text = "This is a sample sentence. " * 20
truncated = formatter.format(long_text)
# Truncated to ~100 characters with "..."
JSON Output
json_style = OutputStyle(format="json")
formatter = OutputFormatter(json_style)
result = formatter.format("Hello, World!")
# {"response": "Hello, World!", "format": "text"}
Utility Functions
formatter = OutputFormatter()
text = "This is a test sentence with several words."
# Count words
words = formatter.get_word_count(text) # 8
# Count characters
chars = formatter.get_char_count(text) # 43
# Estimate tokens
tokens = formatter.estimate_tokens(text) # ~10
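The counts above are straightforward to reproduce. This sketch shows plausible stand-ins for the three helpers; the token estimate uses the common rough heuristic of ~4 characters per token, which may differ from the library's actual estimator:

```python
def get_word_count(text: str) -> int:
    # Whitespace-delimited word count
    return len(text.split())

def get_char_count(text: str) -> int:
    return len(text)

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text
    return len(text) // 4

text = "This is a test sentence with several words."
counts = (get_word_count(text), get_char_count(text), estimate_tokens(text))
# (8, 43, 10)
```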
System Prompt Integration
Styles can generate system prompt additions:
style = OutputStyle.concise()
prompt_addition = style.get_system_prompt_addition()
# "Be concise and direct. Avoid unnecessary elaboration..."
# Add to agent instructions
full_instructions = f"{base_instructions}\n\n{prompt_addition}"
Serialization
# Save style
style = OutputStyle(name="custom", format="markdown")
data = style.to_dict()
# Restore style
restored = OutputStyle.from_dict(data)
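Since to_dict() produces a plain dictionary, a style can also be persisted with the standard json module. This sketch uses a literal dict standing in for to_dict()'s output, since its exact keys are not specified here:

```python
import json

# Stand-in for style.to_dict(); the exact keys are an assumption here
data = {"name": "custom", "format": "markdown"}

# A plain dict round-trips cleanly through JSON, so it can be written
# to disk and passed to OutputStyle.from_dict() on the next run
payload = json.dumps(data)
restored = json.loads(payload)
```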
Output styles are lazily loaded; the output module is imported only when it is first accessed:
# Only loads when accessed
from praisonaiagents.output import OutputStyle