This page provides a complete reference for all context management CLI commands and flags.

Interactive Commands

Use these commands in praisonai chat or praisonai code interactive mode.

/context

Show context summary and statistics.
> /context
Output:
Context Summary
  Model:          gpt-4o-mini
  Model Limit:    128,000 tokens
  Output Reserve: 8,000 tokens
  Usable Budget:  120,000 tokens
  Current Usage:  15,234 tokens (12.7%)
  Turns:          8
  Messages:       16
  Auto-Compact:   enabled
  Monitoring:     disabled
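The numbers in the summary are related by simple arithmetic: the usable budget is the model limit minus the output reserve, and usage is current tokens over that budget. A minimal sketch (hypothetical helper names, not the actual PraisonAI internals):

```python
def usable_budget(model_limit: int, output_reserve: int) -> int:
    # Tokens available for everything except the reserved model output.
    return model_limit - output_reserve

def usage_pct(current: int, usable: int) -> float:
    # Percentage of the usable budget currently consumed.
    return round(100 * current / usable, 1)

usable = usable_budget(128_000, 8_000)  # 120,000 as in the summary above
pct = usage_pct(15_234, usable)         # 12.7
```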

/context show

Alias for /context. Shows summary view.
> /context show

/context stats

Show detailed token ledger by segment.
> /context stats
Output:
Token Ledger
Segment              Tokens     Budget     Used
--------------------------------------------------
system_prompt         1,200      2,000    60.0%
history              12,500     84,616    14.8%
tools_schema          1,534      2,000    76.7%
--------------------------------------------------
TOTAL                15,234    120,000    12.7%

/context budget

Show budget allocation details.
> /context budget
Output:
Budget Allocation
  Model Limit:     128,000
  Output Reserve:  8,000
  Usable:          120,000

  Segment Budgets:
    System Prompt: 2,000
    Rules:         500
    Skills:        500
    Memory:        1,000
    Tool Schemas:  2,000
    Tool Outputs:  20,000
    History:       84,616
    Buffer:        1,000

/context dump

Write context snapshot to disk immediately.
> /context dump
Output:
✓ Context snapshot written to: ./context.txt

/context on

Enable context monitoring.
> /context on
Output:
✓ Context monitoring enabled
Output: ./context.txt

/context off

Disable context monitoring.
> /context off

/context path <path>

Set monitor output path.
> /context path ./debug/context.json

/context format <human|json>

Set monitor output format.
> /context format json

/context frequency <turn|tool_call|manual|overflow>

Set monitor update frequency.
> /context frequency overflow
Frequency    Description
--------------------------------------------------
turn         Write after each turn (default)
tool_call    Write after each tool call
manual       Only write on /context dump
overflow     Write when approaching limit

/context compact

Trigger manual context optimization.
> /context compact
Output:
Optimizing context...
✓ Optimized: 45,000 → 30,000 tokens
Saved 15,000 tokens (33.3%)
Strategy: smart
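The savings figure reported by /context compact is the difference between before and after token counts, as a fraction of the before count. A quick sketch of that arithmetic (illustrative only):

```python
def compaction_savings(before: int, after: int) -> tuple[int, float]:
    # Tokens saved and the percentage reduction relative to the pre-compact size.
    saved = before - after
    return saved, round(100 * saved / before, 1)

saved, pct = compaction_savings(45_000, 30_000)  # 15,000 tokens, 33.3%
```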

/context history

Show optimization event history.
> /context history
Output:
Optimization History
Time                     Event                Tokens       Saved
----------------------------------------------------------------------
2024-01-07T12:00:00      overflow_detected       45,000          -
2024-01-07T12:00:01      auto_compact           45,000     -15,000
2024-01-07T12:05:00      snapshot               30,000          -

Showing last 3 of 3 events

/context config

Show resolved configuration with precedence info.
> /context config
Output:
Resolved Configuration
Precedence: CLI > ENV > config.yaml > defaults
Source: env

Auto-Compaction:
  auto_compact:           True
  compact_threshold:      0.8
  strategy:               smart
  compression_min_gain:   5.0%

Budget:
  output_reserve:         8,000
  default_tool_max:       10,000

Estimation:
  estimation_mode:        heuristic
  log_mismatch:           False

Monitoring:
  monitor_enabled:        False
  monitor_path:           ./context.txt
  monitor_format:         human
  monitor_frequency:      turn
  monitor_write_mode:     sync
  redact_sensitive:       True

Effective Budget:
  model_limit:            128,000
  usable:                 120,000
  history_budget:         84,616

CLI Flags

Use these flags when starting praisonai chat or praisonai code.

Auto-Compaction

# Enable auto-compaction (default)
praisonai chat --context-auto-compact

# Disable auto-compaction
praisonai chat --no-context-auto-compact

Strategy

praisonai chat --context-strategy smart
Options: smart, truncate, sliding_window, summarize, prune_tools

Threshold

praisonai chat --context-threshold 0.8
Value: 0.0 to 1.0 (default: 0.8)
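The threshold is a fraction of the usable budget at which auto-compaction triggers. A minimal sketch of the check (hypothetical function name; the real trigger logic is internal to PraisonAI):

```python
def should_compact(current: int, usable: int, threshold: float = 0.8) -> bool:
    # Trigger compaction once usage reaches the threshold fraction of the budget.
    return current / usable >= threshold

should_compact(45_000, 120_000)   # False: 37.5% usage is under the 0.8 threshold
should_compact(100_000, 120_000)  # True: 83.3% usage crosses the threshold
```

Lowering the threshold (e.g. 0.7) makes this check fire earlier, which is why the troubleshooting section below suggests it for overflow errors.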

Monitoring

# Enable monitoring
praisonai chat --context-monitor

# Set output path
praisonai chat --context-monitor-path ./debug/context.json

# Set format
praisonai chat --context-monitor-format json

# Set frequency
praisonai chat --context-monitor-frequency overflow

Redaction

# Enable redaction (default)
praisonai chat --context-redact

# Disable redaction
praisonai chat --no-context-redact
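Redaction scrubs secrets from snapshot files before they hit disk. The patterns below are hypothetical examples (the actual rules are internal to PraisonAI), but they sketch the general pattern-substitution approach:

```python
import re

# Hypothetical patterns for illustration; not PraisonAI's actual redaction rules.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),           # OpenAI-style API keys
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]+"),  # HTTP bearer tokens
]

def redact(text: str) -> str:
    # Replace every match of each pattern before the snapshot is written.
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```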

Output Reserve

praisonai chat --context-output-reserve 16000

Estimation Mode

praisonai chat --context-estimation-mode validated
Options: heuristic, accurate, validated
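The heuristic mode avoids a tokenizer round-trip by approximating the count from text length. A common rule of thumb is roughly four characters per token for English text; this sketch assumes that ratio, which may differ from PraisonAI's actual heuristic:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Fast but approximate; 'accurate' and 'validated' modes would
    # use a real tokenizer instead.
    return max(1, len(text) // 4)

estimate_tokens("Show context summary and statistics.")  # 9
```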

Mismatch Logging

praisonai chat --context-log-mismatch

Snapshot Timing

praisonai chat --context-snapshot-timing both
Options: pre_optimization, post_optimization, both

Write Mode

praisonai chat --context-write-mode async
Options: sync, async

Show Config

praisonai chat --context-show-config
Shows resolved configuration and exits.

Environment Variables

# Auto-compaction
export PRAISONAI_CONTEXT_AUTO_COMPACT=true
export PRAISONAI_CONTEXT_THRESHOLD=0.8
export PRAISONAI_CONTEXT_STRATEGY=smart

# Monitoring
export PRAISONAI_CONTEXT_MONITOR=false
export PRAISONAI_CONTEXT_MONITOR_PATH=./context.txt
export PRAISONAI_CONTEXT_MONITOR_FORMAT=human
export PRAISONAI_CONTEXT_MONITOR_FREQUENCY=turn

# Redaction
export PRAISONAI_CONTEXT_REDACT=true

# Budget
export PRAISONAI_CONTEXT_OUTPUT_RESERVE=8000

# Estimation
export PRAISONAI_CONTEXT_ESTIMATION_MODE=heuristic

config.yaml

context:
  auto_compact: true
  compact_threshold: 0.8
  strategy: smart
  output_reserve: 8000
  
  monitor:
    enabled: false
    path: ./context.txt
    format: human
    frequency: turn
    write_mode: sync
  
  redact_sensitive: true
  
  estimation:
    mode: heuristic
    log_mismatch: false

Precedence

Configuration is resolved in this order (highest to lowest):
  1. CLI flags - --context-* flags
  2. Environment variables - PRAISONAI_CONTEXT_*
  3. config.yaml - context: section
  4. Defaults - Built-in defaults
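The lookup order above can be sketched as a simple fall-through resolver. This is an illustrative model (hypothetical function and keys), not PraisonAI's actual config loader:

```python
import os

DEFAULTS = {"threshold": 0.8, "strategy": "smart"}

def resolve(key: str, cli: dict, yaml_cfg: dict):
    # 1. CLI flags win outright.
    if key in cli:
        return cli[key]
    # 2. Then PRAISONAI_CONTEXT_* environment variables.
    env = os.environ.get(f"PRAISONAI_CONTEXT_{key.upper()}")
    if env is not None:
        return env
    # 3. Then the context: section of config.yaml.
    if key in yaml_cfg:
        return yaml_cfg[key]
    # 4. Finally the built-in default.
    return DEFAULTS[key]
```

For example, with PRAISONAI_CONTEXT_STRATEGY=truncate set and strategy: summarize in config.yaml, the resolver returns truncate; passing --context-strategy smart on the CLI would override both.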

Troubleshooting

Context overflow errors

# Lower threshold to trigger earlier
praisonai chat --context-threshold 0.7

# Use more aggressive strategy
praisonai chat --context-strategy truncate

Monitor not updating

# Check if enabled
> /context config

# Enable and set frequency
> /context on
> /context frequency turn

Sensitive data in snapshots

# Ensure redaction is enabled
praisonai chat --context-redact

# Check patterns in snapshot
> /context dump