Quick Start

1. Default (Silent)
2. Verbose Mode
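As an illustration of the two quick-start modes, here is a minimal runnable sketch. The `Agent` class below is a stand-in that models the documented behavior (silent by default, console output only when verbose); it is not the library's real implementation:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Stand-in agent modeling the documented defaults (hypothetical)."""
    verbose: bool = False  # silent by default: no console output

    def chat(self, prompt: str) -> str:
        response = f"echo: {prompt}"  # placeholder for a real LLM call
        if self.verbose:
            # verbose mode would show rich progress; here just a plain line
            print(f"[agent] responding to: {prompt!r}")
        return response

# 1. Default (silent): nothing is printed, the result is returned
result = Agent().chat("hello")

# 2. Verbose mode: progress is echoed to the console as well
result_verbose = Agent(verbose=True).chat("hello")
```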
Output Presets
| Preset | Display | Use Case |
|---|---|---|
| silent | None | Programmatic use, fastest |
| actions | Tool calls + final output | Debugging |
| verbose | Rich panels | Interactive sessions |
| json | JSONL events | Piping to other tools |
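The presets can be thought of as bundles of display flags. The mapping below is an illustrative sketch inferred from the table; the flag names and the resolver are hypothetical, not the library's internals:

```python
# Hypothetical preset -> display-flag bundles, inferred from the table above
PRESETS = {
    "silent":  {"show_tool_calls": False, "show_final": False, "json_events": False},
    "actions": {"show_tool_calls": True,  "show_final": True,  "json_events": False},
    "verbose": {"show_tool_calls": True,  "show_final": True,  "json_events": False,
                "rich_panels": True},
    "json":    {"show_tool_calls": False, "show_final": False, "json_events": True},
}

def resolve_preset(name: str) -> dict:
    """Look up a preset, falling back to silent for unknown names."""
    return PRESETS.get(name, PRESETS["silent"])
```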
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| verbose | bool | False | Enable verbose output |
| markdown | bool | False | Format as markdown |
| stream | bool | False | Enable streaming |
| metrics | bool | False | Show token metrics |
| reasoning_steps | bool | False | Show reasoning process |
| actions_trace | bool | False | Show tool calls |
| json_output | bool | False | Output JSONL events |
| simple_output | bool | False | Plain text without panels |
| show_parameters | bool | False | Show LLM parameters (debug) |
| status_trace | bool | False | Clean inline status updates |
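The options above map naturally onto a single flags object. The dataclass below is a sketch of such a configuration: the field names come from the table, but the class itself is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class OutputConfig:
    """Sketch of the output options table; every flag defaults to False."""
    verbose: bool = False          # enable verbose output
    markdown: bool = False         # format responses as markdown
    stream: bool = False           # enable streaming
    metrics: bool = False          # show token metrics
    reasoning_steps: bool = False  # show reasoning process
    actions_trace: bool = False    # show tool calls
    json_output: bool = False      # output JSONL events
    simple_output: bool = False    # plain text without panels
    show_parameters: bool = False  # show LLM parameters (debug)
    status_trace: bool = False     # clean inline status updates

# a debugging setup might enable several flags at once
debug_config = OutputConfig(verbose=True, actions_trace=True, metrics=True)
```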
Streaming
Streaming emits output in real time as the agent generates it.

Verbose vs Silent
Silent (Default)
Verbose
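A runnable sketch of the streaming and verbosity behaviors described above. The generator stands in for a real model token stream, and the function names are illustrative:

```python
import sys

def fake_model_stream(prompt):
    """Stand-in for a real LLM token stream."""
    for token in ("Hello", ", ", "world", "!"):
        yield token

def run(prompt, stream=False, verbose=False):
    """Collect the response; optionally print chunks as they arrive."""
    chunks = []
    for chunk in fake_model_stream(prompt):
        chunks.append(chunk)
        if stream:
            # real-time output: print each chunk without a newline
            sys.stdout.write(chunk)
            sys.stdout.flush()
    if stream:
        sys.stdout.write("\n")
    text = "".join(chunks)
    if verbose and not stream:
        print(text)  # verbose: show the final text; silent: print nothing
    return text

answer = run("hi")      # silent default: returns text, prints nothing
run("hi", stream=True)  # streaming: chunks appear as they are generated
```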
JSON Output
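For example, JSONL output emits one JSON object per line so each event can be parsed independently. The event shapes below are hypothetical, chosen only to show the format:

```python
import json

def emit_jsonl(events):
    """Serialize events as JSONL: one compact JSON object per line."""
    return "\n".join(json.dumps(e, separators=(",", ":")) for e in events)

# hypothetical event stream for a single agent turn
events = [
    {"type": "tool_call", "name": "search", "args": {"q": "weather"}},
    {"type": "final", "content": "It is sunny."},
]
jsonl = emit_jsonl(events)

# downstream tools can consume the stream line by line
parsed = [json.loads(line) for line in jsonl.splitlines()]
```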
JSONL output is designed for piping to other tools.

Metrics Display
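A sketch of what a token and timing metrics summary might compute. The field names and the whitespace token count are illustrative, not the library's actual schema:

```python
import time

def timed_call(fn, *args):
    """Run fn and return (result, metrics) with timing and a token count."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    metrics = {
        # crude whitespace-based count standing in for real tokenizer usage
        "completion_tokens": len(result.split()),
        "elapsed_s": round(elapsed, 3),
    }
    return result, metrics

result, metrics = timed_call(lambda p: f"answer to {p}", "question")
```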
Metrics display shows token usage and timing for each call.

Method-Specific Output
Different methods have different default behaviors:

| Method | Default Output | Override |
|---|---|---|
| agent.chat() | Silent | Use output= param |
| agent.start() | Verbose + Stream | Use output= param |
| agent.run() | Silent | Use output= param |
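The method defaults above can be captured in a small lookup. This is a hedged sketch of the resolution rule (an explicit `output=` argument wins over the method default); the real library presumably resolves this internally:

```python
# Default output behavior per method, per the table above (illustrative)
METHOD_DEFAULTS = {
    "chat":  {"verbose": False, "stream": False},  # silent
    "start": {"verbose": True,  "stream": True},   # verbose + stream
    "run":   {"verbose": False, "stream": False},  # silent
}

def effective_output(method, override=None):
    """An explicit output= argument takes precedence over the default."""
    return override if override is not None else METHOD_DEFAULTS[method]

cfg = effective_output("start")
cfg_overridden = effective_output("start",
                                  override={"verbose": False, "stream": False})
```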
Best Practices
Use silent for APIs
Default silent mode adds zero overhead, making it ideal for programmatic use.
Use verbose for debugging
See exactly what the agent is doing with verbose mode.
Enable streaming for UX
Users prefer seeing progress, so enable streaming for interactive apps.
Use JSON for pipelines
JSONL output integrates well with log aggregators and pipelines.

