Display AI responses as they generate, creating a natural chat experience.
Quick Start
How It Works
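The core idea is that text arrives in small chunks and is rendered immediately rather than after the full response completes. A minimal sketch of that flow, using a mock chunk generator (the function names below are illustrative, not the PraisonAI API):

```python
def mock_stream(text, chunk_size=4):
    """Yield text in small chunks, mimicking delta events from an LLM."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

def display_stream(chunks):
    """Render each chunk as it arrives and return the assembled reply."""
    reply = []
    for chunk in chunks:
        print(chunk, end="", flush=True)  # show the delta immediately
        reply.append(chunk)
    print()
    return "".join(reply)

full = display_stream(mock_stream("Hello from a streamed response!"))
```

Because each chunk is printed as soon as it arrives, the user sees output well before the full reply is assembled, which is what gives streaming its perceived-latency advantage.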
Stream Events
Events emitted during streaming:

| Event | Description |
|---|---|
| RequestStart | API call initiated |
| HeadersReceived | HTTP headers arrived |
| FirstToken | First content chunk (TTFT marker) |
| DeltaText | Text content chunk |
| DeltaToolCall | Tool call in progress |
| ToolCallEnd | Tool call complete |
| LastToken | Final content chunk |
| StreamEnd | Stream completed |
| Error | Error occurred |
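A handler for these events might dispatch on the event name, track time to first token (TTFT), and accumulate text deltas. The sketch below is illustrative: the `StreamEvent` class and `handle_events` function are assumptions for demonstration, not part of the PraisonAI API; only the event names come from the table above.

```python
import time
from dataclasses import dataclass

@dataclass
class StreamEvent:
    kind: str       # one of the event names from the table above
    payload: str = ""

def handle_events(events):
    """Dispatch stream events: record TTFT and assemble the reply text."""
    text, ttft = [], None
    start = time.monotonic()
    for ev in events:
        if ev.kind == "FirstToken":
            ttft = time.monotonic() - start  # time to first token
        elif ev.kind == "DeltaText":
            text.append(ev.payload)          # accumulate content chunks
        elif ev.kind == "Error":
            raise RuntimeError(ev.payload)
    return "".join(text), ttft

events = [
    StreamEvent("RequestStart"),
    StreamEvent("HeadersReceived"),
    StreamEvent("FirstToken"),
    StreamEvent("DeltaText", "Hello, "),
    StreamEvent("DeltaText", "world!"),
    StreamEvent("LastToken"),
    StreamEvent("StreamEnd"),
]
reply, ttft = handle_events(events)
```

Measuring TTFT at the `FirstToken` event is useful for monitoring perceived latency, since it reflects when the user first sees output rather than when the full response finishes.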
Configuration
| Option | Type | Default | Description |
|---|---|---|---|
| stream | bool | true | Enable response streaming |
Best Practices
Enable for chat interfaces
Streaming provides immediate feedback, improving perceived performance.
Disable for batch processing
When processing many requests, disable streaming for efficiency.
Related
Agent
Agent configuration
Callbacks
Handle stream events

