Configure context budget allocation via CLI flags, interactive commands, environment variables, or config.yaml.

CLI Flags

Output Reserve

# Set output token reserve
praisonai chat --context-output-reserve 16000
Default: 8,000 tokens
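The reserve is subtracted from the model's context limit before anything else is budgeted. A minimal sketch of that arithmetic (the function name is illustrative, not part of the PraisonAI API):

def usable_context(model_limit: int, output_reserve: int = 8000) -> int:
    # Tokens left for prompt content after reserving room for the model's reply
    return model_limit - output_reserve

# gpt-4o with the 16,000-token reserve from the flag above
print(usable_context(128_000, 16_000))  # 112000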

Interactive Commands

View Budget

> /context budget
Output:
Budget Allocation
  Model Limit:     128,000
  Output Reserve:  8,000
  Usable:          120,000

  Segment Budgets:
    System Prompt: 2,000
    Rules:         500
    Skills:        500
    Memory:        1,000
    Tool Schemas:  2,000
    Tool Outputs:  20,000
    History:       84,616
    Buffer:        1,000
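One way to read the numbers above: fixed segments get explicit budgets, and history absorbs most of what remains of the usable window. The sketch below is an assumption about that layout, not the actual allocator; the real history figure also reflects runtime state, so treat the values as illustrative.

OUTPUT_RESERVE = 8_000

SEGMENT_BUDGETS = {  # fixed allocations, in tokens (illustrative values)
    "system_prompt": 2_000,
    "rules": 500,
    "skills": 500,
    "memory": 1_000,
    "tool_schemas": 2_000,
    "tool_outputs": 20_000,
    "buffer": 1_000,
}

def allocate(model_limit: int) -> dict:
    usable = model_limit - OUTPUT_RESERVE
    budgets = dict(SEGMENT_BUDGETS)
    # History gets whatever remains of the usable window
    budgets["history"] = usable - sum(budgets.values())
    return budgets

print(allocate(128_000))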

View Stats

> /context stats
Shows current usage vs budget per segment.
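A hedged sketch of the kind of comparison /context stats reports, with each segment's usage expressed against its budget (the data structures here are hypothetical):

def stats(usage: dict, budgets: dict) -> None:
    # Print used/budget and a percentage for each segment
    for segment, budget in budgets.items():
        used = usage.get(segment, 0)
        pct = 100 * used / budget if budget else 0
        print(f"{segment:<14} {used:>8,} / {budget:>8,}  ({pct:.0f}%)")

stats({"history": 42_000, "tool_outputs": 5_500},
      {"history": 84_616, "tool_outputs": 20_000})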

Environment Variables

export PRAISONAI_CONTEXT_OUTPUT_RESERVE=8000
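When set, the variable overrides the built-in default of 8,000 tokens. A minimal sketch of how such an override could be read (the helper is illustrative):

import os

def output_reserve(default: int = 8000) -> int:
    # Environment variable takes precedence over the built-in default
    return int(os.environ.get("PRAISONAI_CONTEXT_OUTPUT_RESERVE", default))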

config.yaml

context:
  output_reserve: 8000
  default_tool_output_max: 10000
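A sketch of reading those settings with PyYAML, assuming the file layout shown above (the loader itself is illustrative, not the library's own config code):

import yaml

def load_context_config(path: str = "config.yaml") -> dict:
    with open(path) as f:
        cfg = yaml.safe_load(f) or {}
    ctx = cfg.get("context", {})
    return {
        "output_reserve": ctx.get("output_reserve", 8000),
        "default_tool_output_max": ctx.get("default_tool_output_max", 10000),
    }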

Model Limits

Model            Context Limit    Default Reserve
gpt-4o           128,000          16,384
gpt-4o-mini      128,000          16,384
gpt-4-turbo      128,000          4,096
claude-3-opus    200,000          8,192
gemini-1.5-pro   2,097,152        8,192
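Because limits and default reserves vary per model, the usable window varies too. A sketch of that lookup using the values from the table (the mapping is illustrative, not the library's internal table):

MODEL_LIMITS = {  # (context limit, default output reserve) from the table above
    "gpt-4o": (128_000, 16_384),
    "gpt-4o-mini": (128_000, 16_384),
    "gpt-4-turbo": (128_000, 4_096),
    "claude-3-opus": (200_000, 8_192),
    "gemini-1.5-pro": (2_097_152, 8_192),
}

def usable_for(model: str) -> int:
    limit, reserve = MODEL_LIMITS[model]
    return limit - reserve

print(usable_for("claude-3-opus"))  # 191808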

Troubleshooting

Not enough space for output

# Increase output reserve
praisonai chat --context-output-reserve 16000

Context filling too fast

# Lower threshold for earlier compaction
praisonai chat --context-threshold 0.7
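The threshold is a fraction of the usable window at which compaction kicks in, so lowering it makes compaction run earlier. A sketch of that check (the trigger logic is an assumption, not the actual implementation):

def should_compact(used_tokens: int, usable_tokens: int, threshold: float) -> bool:
    # Trigger compaction once usage crosses the configured fraction of the usable window
    return used_tokens / usable_tokens >= threshold

print(should_compact(90_000, 120_000, 0.7))  # True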