
Context Management

PraisonAI provides a complete context management system that prevents context overflow, optimizes token usage, and gives real-time visibility into what’s being sent to the model.

Overview

Core Components

Component | Purpose
Token Estimation | Fast offline token counting
Context Ledger | Token accounting per segment
Context Budgeter | Model limits and budget allocation
Context Composer | Message assembly with limits
Context Optimizer | Compaction strategies
Context Monitor | Real-time disk snapshots
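Fast offline token counting avoids a tokenizer round-trip entirely. A common heuristic is roughly four characters per token for English text; the sketch below illustrates the idea (an assumption for illustration, not PraisonAI's actual estimator):

```python
# Rough offline token estimate: ~4 characters per token on average.
# Tokenizer-based counts are more precise, but this is fast and
# dependency-free, which is what an offline estimator needs.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def estimate_messages(messages: list[dict]) -> int:
    # Small per-message overhead for role/formatting tokens.
    return sum(estimate_tokens(m["content"]) + 4 for m in messages)

print(estimate_tokens("You are a helpful assistant."))
print(estimate_messages([{"role": "user", "content": "Hello!"}]))
```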

Agent-Centric Quick Start

The simplest way to enable context management is with the context= parameter:
from praisonaiagents import Agent

# Enable context management with safe defaults
agent = Agent(
    instructions="You are a helpful assistant.",
    context=True,  # Enable with defaults
)

# The agent now automatically:
# - Tracks token usage
# - Optimizes when approaching limits (80% threshold)
# - Uses smart optimization strategy

response = agent.chat("Hello!")

Custom Configuration

from praisonaiagents import Agent
from praisonaiagents.context import ManagerConfig

# Custom context configuration
config = ManagerConfig(
    auto_compact=True,
    compact_threshold=0.7,  # Trigger at 70%
    strategy="smart",
    monitor_enabled=True,
    monitor_path="./context.txt",
)

agent = Agent(
    instructions="You are helpful.",
    context=config,
)
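With auto_compact=True, compaction fires once usage crosses the configured threshold. The trigger condition reduces to a simple ratio check; a minimal sketch (the helper name is hypothetical, not the library's API):

```python
def should_compact(used_tokens: int, usable_tokens: int,
                   threshold: float = 0.7) -> bool:
    # Fire compaction once used/usable reaches the threshold
    # (0.7 here mirrors compact_threshold=0.7 above).
    return usable_tokens > 0 and used_tokens / usable_tokens >= threshold

# e.g. with a 100,000-token usable budget:
print(should_compact(65_000, 100_000))  # below 70%: no compaction yet
print(should_compact(72_000, 100_000))  # above 70%: compact
```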

Low-Level API (Advanced)

from praisonaiagents.context import (
    ContextBudgeter,
    ContextLedgerManager,
    get_optimizer,
    OptimizerStrategy,
)

# Create budgeter for your model
budgeter = ContextBudgeter(model="gpt-4o-mini")
budget = budgeter.allocate()
print(f"Usable context: {budget.usable:,} tokens")

# Track token usage (example message history)
messages = [{"role": "user", "content": "Hello!"}]
ledger = ContextLedgerManager()
ledger.track_system_prompt("You are a helpful assistant.")
ledger.track_history(messages)
print(f"Total used: {ledger.get_total()} tokens")

# Optimize when needed
optimizer = get_optimizer(OptimizerStrategy.SMART)
optimized, stats = optimizer.optimize(messages, target_tokens=50000)

CLI Interactive Mode

# Enable context monitoring
praisonai chat --context-monitor

# Use specific optimization strategy
praisonai chat --context-strategy smart --context-threshold 0.8

# View context stats in session
/context stats
/context budget
/context dump

Features

CLI Flags

Flag | Description | Default
--context-auto-compact | Enable automatic compaction | true
--context-strategy | Optimization strategy | smart
--context-threshold | Trigger threshold (0.0-1.0) | 0.8
--context-monitor | Enable monitoring | false
--context-monitor-path | Output file path | ./context.txt
--context-monitor-format | Output format | human
--context-output-reserve | Reserve for output | 8000

Environment Variables

PRAISONAI_CONTEXT_AUTO_COMPACT=true
PRAISONAI_CONTEXT_STRATEGY=smart
PRAISONAI_CONTEXT_THRESHOLD=0.8
PRAISONAI_CONTEXT_MONITOR=true
PRAISONAI_CONTEXT_MONITOR_PATH=./context.txt
PRAISONAI_CONTEXT_MONITOR_FORMAT=human
PRAISONAI_CONTEXT_REDACT=true
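The same settings can be applied programmatically; a minimal sketch using only the standard library (set the variables before the agent is created so they are picked up):

```python
import os

# Equivalent to exporting the variables in the shell.
os.environ["PRAISONAI_CONTEXT_STRATEGY"] = "smart"
os.environ["PRAISONAI_CONTEXT_THRESHOLD"] = "0.8"
os.environ["PRAISONAI_CONTEXT_MONITOR"] = "true"
os.environ["PRAISONAI_CONTEXT_MONITOR_PATH"] = "./context.txt"

print(os.environ["PRAISONAI_CONTEXT_STRATEGY"])
```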

Interactive Commands

Command | Description
/context | Show context stats
/context show | Summary + budgets
/context stats | Token ledger table
/context budget | Budget allocation
/context dump | Write snapshot now
/context on | Enable monitoring
/context off | Disable monitoring
/context compact | Trigger optimization

Multi-Agent Support

from praisonaiagents.context import MultiAgentLedger, MultiAgentMonitor

# Per-agent context isolation
multi_ledger = MultiAgentLedger()
researcher = multi_ledger.get_agent_ledger("researcher")
writer = multi_ledger.get_agent_ledger("writer")

# Per-agent monitoring
multi_monitor = MultiAgentMonitor(base_path="./context/")
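Conceptually, the multi-agent ledger keeps one independent ledger per agent name, so one agent's history cannot inflate another's accounting. A dependency-free sketch of that isolation (class and method names here are illustrative stand-ins, not the library's API):

```python
class TinyLedger:
    """Per-agent token accounting (illustrative stand-in)."""
    def __init__(self) -> None:
        self.total = 0

    def track(self, text: str) -> None:
        self.total += max(1, len(text) // 4)  # rough token estimate

class TinyMultiLedger:
    """One isolated ledger per agent name."""
    def __init__(self) -> None:
        self._ledgers: dict[str, TinyLedger] = {}

    def get_agent_ledger(self, name: str) -> TinyLedger:
        return self._ledgers.setdefault(name, TinyLedger())

multi = TinyMultiLedger()
multi.get_agent_ledger("researcher").track("Long research notes " * 50)
multi.get_agent_ledger("writer").track("Short draft")
# Each agent's usage is tracked independently.
print(multi.get_agent_ledger("researcher").total)
print(multi.get_agent_ledger("writer").total)
```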

Next Steps