Configure token estimation behavior via CLI flags, environment variables, config.yaml, and interactive commands.

CLI Flags

Estimation Mode

# Fast heuristic (default)
praisonai chat --context-estimation-mode heuristic

# Accurate with tiktoken
praisonai chat --context-estimation-mode accurate

# Validated (compares both, logs mismatches)
praisonai chat --context-estimation-mode validated

Mode        Description                   Performance
heuristic   Character-based estimate      Fastest
accurate    Uses the tiktoken tokenizer   Slower
validated   Compares both, logs errors    Slowest
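
To make the trade-off concrete, below is a minimal sketch of the two approaches. The 4-characters-per-token ratio and the cl100k_base encoding are illustrative assumptions, not PraisonAI's actual internals.

import tiktoken  # pip install tiktoken

def estimate_heuristic(text: str) -> int:
    # Assumption: roughly 4 characters per token, a common rule of thumb.
    return max(1, len(text) // 4)

def estimate_accurate(text: str) -> int:
    # tiktoken tokenizes exactly for a given encoding; cl100k_base is
    # assumed here and may differ from what PraisonAI uses.
    enc = tiktoken.get_encoding("cl100k_base")
    return len(enc.encode(text))

text = "Configure token estimation behavior via CLI flags."
print(estimate_heuristic(text), estimate_accurate(text))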

Mismatch Logging

# Log when heuristic differs from accurate by >15%
praisonai chat --context-log-mismatch
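
In validated mode, this check amounts to a relative-error comparison between the two counts. The sketch below illustrates that logic; it is not PraisonAI's implementation, and the 15% default threshold is taken from the config.yaml section further down.

import logging

def check_mismatch(heuristic: int, accurate: int, threshold_pct: float = 15.0) -> None:
    # Relative error is measured against the accurate (tiktoken) count.
    error_pct = abs(heuristic - accurate) / accurate * 100
    if error_pct > threshold_pct:
        logging.warning(
            "Token estimation mismatch: heuristic=%d, accurate=%d, error=%.1f%%",
            heuristic, accurate, error_pct,
        )

check_mismatch(1300, 1100)  # 18.2% error, above the threshold, so it logs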

Environment Variables

export PRAISONAI_CONTEXT_ESTIMATION_MODE=heuristic
export PRAISONAI_CONTEXT_LOG_MISMATCH=false
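
If you are scripting around these variables, standard os.environ lookups suffice. The fallback defaults below mirror the values shown above; treating unset variables this way is an assumption, not documented PraisonAI behavior.

import os

mode = os.environ.get("PRAISONAI_CONTEXT_ESTIMATION_MODE", "heuristic")
log_mismatch = os.environ.get("PRAISONAI_CONTEXT_LOG_MISMATCH", "false").lower() == "true"
print(mode, log_mismatch)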

Interactive Commands

View Estimation Config

> /context config
Shows the current estimation mode and mismatch logging setting.

View Token Stats

> /context stats
Shows token counts per segment using the configured estimation mode.

config.yaml

context:
  estimation:
    mode: heuristic
    log_mismatch: false
    mismatch_threshold_pct: 15.0
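
A sketch of reading this section with PyYAML follows; the loader and its fallback defaults are illustrative, not how PraisonAI itself parses the file.

import yaml  # pip install pyyaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

estimation = cfg.get("context", {}).get("estimation", {})
mode = estimation.get("mode", "heuristic")
log_mismatch = estimation.get("log_mismatch", False)
threshold = estimation.get("mismatch_threshold_pct", 15.0)
print(mode, log_mismatch, threshold)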

Troubleshooting

Inaccurate token counts

# Use accurate mode for precise counts
praisonai chat --context-estimation-mode accurate

Debug estimation errors

# Enable validated mode with logging
praisonai chat --context-estimation-mode validated --context-log-mismatch
Watch for log messages like:
WARNING: Token estimation mismatch: heuristic=1250, accurate=1100, error=13.6%
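In this example the heuristic overestimated by 150 tokens against an accurate count of 1100, i.e. 150 / 1100 ≈ 13.6% relative error.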