Quick Start
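A minimal sketch of getting a response with a couple of output options enabled. The `Agent` class, its `instructions` argument, the `start()` method, and the import path are placeholders for whatever your installation actually exposes; only the keyword arguments come from the option table below.

```python
from your_agent_library import Agent  # hypothetical import path

# Output options (documented below) are passed as keyword arguments.
agent = Agent(
    instructions="You are a helpful assistant.",
    verbose=True,   # rich, human-readable output
    markdown=True,  # render the response as markdown
)

response = agent.start("Summarize the latest release notes.")  # assumed entry point
print(response)
```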
Configuration Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| `verbose` | bool | False | Enable verbose output |
| `markdown` | bool | False | Format output as markdown |
| `stream` | bool | False | Stream output tokens |
| `metrics` | bool | False | Show performance metrics |
| `reasoning_steps` | bool | False | Display reasoning process |
| `style` | Any \| None | None | Custom output styling |
| `actions_trace` | bool | False | Show tool calls and lifecycle |
| `json_output` | bool | False | Emit JSONL events |
| `simple_output` | bool | False | Plain text without panels |
| `show_parameters` | bool | False | Show LLM parameters (debug) |
| `status_trace` | bool | False | Inline status updates |
| `output_file` | str \| None | None | Save response to file |
| `template` | str \| None | None | Response format template |
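The flags combine freely. A hedged sketch, assuming they are accepted as constructor keyword arguments; the `Agent` class, `start()` method, and import path are placeholders as in the Quick Start above.

```python
from your_agent_library import Agent  # hypothetical import path

agent = Agent(
    instructions="Answer concisely.",
    metrics=True,          # show performance metrics after the run
    reasoning_steps=True,  # display the reasoning process
    output_file="run.md",  # also save the response to a file
)

agent.start("Summarize today's test results.")  # assumed entry point
```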
Output Presets
| Preset | Description |
|---|---|
| `"silent"` | No output (default, fastest) |
| `"minimal"` | Basic output only |
| `"normal"` | Standard output |
| `"verbose"` | Detailed with rich panels |
| `"debug"` | All information including parameters |
Common Patterns
Pattern 1: Streaming Chat
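A sketch of an interactive chat loop with streaming enabled. `stream=True` comes from the option table above; the `Agent` class, `start()` method, and import path remain placeholders.

```python
from your_agent_library import Agent  # hypothetical import path

agent = Agent(
    instructions="You are a conversational assistant.",
    stream=True,  # emit output tokens as they are generated
)

while True:
    user_input = input("You: ")
    if user_input.strip().lower() in {"exit", "quit"}:
        break
    agent.start(user_input)  # assumed entry point; tokens stream to the terminal
```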
Pattern 2: Save to File
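A sketch of writing the response to disk via `output_file`, again with a placeholder `Agent` class and `start()` method.

```python
from your_agent_library import Agent  # hypothetical import path

agent = Agent(
    instructions="Write a short project summary.",
    markdown=True,             # format the response as markdown
    output_file="summary.md",  # save the response to this file as well
)

agent.start("Summarize the Q3 release notes.")  # assumed entry point
```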
Pattern 3: JSON Pipeline
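A sketch of the consuming end of a pipeline: run the agent with `json_output=True` so it emits JSONL events, pipe its stdout into a second script, and parse one event per line. The event field names used here (`type`, `content`) are guesses, not a documented schema.

```python
# consumer.py -- parse JSONL events piped in from an agent run, e.g.:
#   python run_agent.py | python consumer.py
import json
import sys

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    event = json.loads(line)
    # Field names are illustrative; inspect your actual event schema.
    print(event.get("type", "?"), "-", event.get("content", ""))
```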
Best Practices
Use Silent Mode for Production
Silent mode has zero output overhead, making it ideal for programmatic use.
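Because every output flag defaults to False, silent mode is simply what you get when you set none of them. A sketch with the usual placeholder `Agent` and `start()` method:

```python
from your_agent_library import Agent  # hypothetical import path

# No output flags set, so nothing is rendered; just use the return value.
agent = Agent(instructions="Classify the support ticket.")
result = agent.start("Login fails after the 2.3 update.")  # assumed entry point
```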
Use Actions Mode for Debugging
Actions trace shows tool calls and agent lifecycle without full verbosity.
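A sketch, with the same placeholder `Agent`, of turning on the trace while leaving full verbosity off:

```python
from your_agent_library import Agent  # hypothetical import path

agent = Agent(
    instructions="Research the topic and summarize it.",
    actions_trace=True,  # log tool calls and lifecycle events only
)
agent.start("What changed in the standard library this release?")  # assumed entry point
```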
Enable Streaming for Interactive Use
Streaming improves perceived responsiveness for chat interfaces.
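For interactive use, `stream=True` pairs well with `status_trace=True` so users see tokens and inline status updates as they arrive; Pattern 1 above shows a full chat loop. A minimal sketch with the placeholder `Agent`:

```python
from your_agent_library import Agent  # hypothetical import path

agent = Agent(
    instructions="You are a conversational assistant.",
    stream=True,        # emit tokens as they are generated
    status_trace=True,  # show inline status updates alongside the stream
)
agent.start("Walk me through setting up the project.")  # assumed entry point
```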

