
Context Strategies & Defaults

This is the master reference for context management in PraisonAI Agents. It covers all strategies, default behaviors, and how to customize them.
Context management is opt-in via the context= parameter. When disabled (default), there is zero performance overhead.

Quick Start

from praisonaiagents import Agent
from praisonaiagents.context import ManagerConfig

# Simple: Enable with defaults
agent = Agent(
    instructions="You are helpful.",
    context=True,  # Enable context management
)

# Custom: Fine-tune behavior
agent = Agent(
    instructions="You are a code assistant.",
    context=ManagerConfig(
        auto_compact=True,
        compact_threshold=0.8,
        strategy="smart",
        output_reserve=16384,
    ),
)

Default Behavior

Interactive Mode (praisonai chat)

| Setting | Default | Reason |
|---|---|---|
| `context` | `False` | Zero overhead for simple chats |

When enabled:

| Setting | Default | Reason |
|---|---|---|
| `auto_compact` | `True` | Prevent overflow automatically |
| `compact_threshold` | `0.8` | Trigger at 80% usage |
| `strategy` | `"smart"` | Best balance of preservation |
| `output_reserve` | Model-specific | 8K-16K tokens |

To enable in CLI:
praisonai chat --context  # Enable with defaults

Auto-Agents Mode (PraisonAIAgents)

| Setting | Default | Reason |
|---|---|---|
| `context` | `False` | Zero overhead for simple tasks |

When enabled:

| Setting | Default | Reason |
|---|---|---|
| `auto_compact` | `True` | Handle long multi-agent tasks |
| `compact_threshold` | `0.8` | Trigger at 80% usage |
| `strategy` | `"smart"` | Preserve important context |
To enable:
from praisonaiagents import PraisonAIAgents

agents = PraisonAIAgents(
    agents=[...],
    context=True,  # Enable for all agents
)

Optimization Strategies

Strategy Overview

| Strategy | Description | Pros | Cons |
|---|---|---|---|
| `truncate` | Remove oldest messages first | Fast, simple | Loses early context |
| `sliding_window` | Keep N most recent messages | Preserves recent | Loses early context |
| `prune_tools` | Truncate old tool outputs | Keeps messages | May lose tool details |
| `summarize` | Replace old messages with summary | Preserves meaning | Slower, uses API |
| `smart` | Combine strategies intelligently | Best balance | More complex |

When to Use Each

  • truncate: Simple chatbots, Q&A agents
  • sliding_window: Long conversations where recent context matters most
  • prune_tools: Tool-heavy agents with large outputs
  • summarize: When historical context is critical
  • smart (recommended): Production use, balances all concerns
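To make the strategies above concrete, here is a minimal sketch of the `sliding_window` idea — keep the system message plus the N most recent messages. This is an illustration of the concept only, not the library's internal implementation:

```python
def sliding_window(messages, window=4):
    """Keep the system message (if any) and the last `window` messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-window:]

history = [
    {"role": "system", "content": "You are helpful."},
    *[{"role": "user", "content": f"msg {i}"} for i in range(10)],
]
trimmed = sliding_window(history, window=3)
# system message + the 3 most recent messages survive
```

The other strategies differ only in *what* they drop or shrink: `prune_tools` would target tool-result messages first, while `summarize` replaces the dropped span with a model-generated summary instead of discarding it.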

Overflow Handling

Threshold Playbook

| Usage | Level | Action |
|---|---|---|
| 70% | INFO | Monitor usage, no action needed |
| 80% | NOTICE | Consider optimization soon |
| 90% | WARNING | Trigger auto-compact if enabled |
| 95% | CRITICAL | Aggressive optimization required |
| 100% | OVERFLOW | Immediate truncation to prevent API error |
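The ladder above can be sketched as a simple usage-to-level mapping (illustrative only, not the library's internal API):

```python
def usage_level(usage: float) -> str:
    """Map a usage fraction (0.0-1.0+) to the playbook level."""
    if usage >= 1.0:
        return "OVERFLOW"
    if usage >= 0.95:
        return "CRITICAL"
    if usage >= 0.90:
        return "WARNING"
    if usage >= 0.80:
        return "NOTICE"
    if usage >= 0.70:
        return "INFO"
    return "OK"
```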

Automatic Handling

When auto_compact=True, the system automatically:
  1. Monitors token usage before each API call
  2. Triggers optimization when threshold is reached
  3. Applies the configured strategy
  4. Logs the optimization event
# Example: Custom threshold
agent = Agent(
    instructions="...",
    context=ManagerConfig(
        auto_compact=True,
        compact_threshold=0.7,  # Earlier trigger
        strategy="smart",
    ),
)
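The monitor-then-compact flow above can be sketched as a loop: estimate tokens, and while usage exceeds the threshold, drop the oldest non-system message. The 4-characters-per-token estimate and the bare truncation step are simplifications for illustration, not the library's exact behavior:

```python
def estimate_tokens(messages):
    """Rough token estimate: ~4 characters per token."""
    return sum(len(m["content"]) // 4 for m in messages)

def auto_compact(messages, budget, threshold=0.8):
    """Drop oldest non-system messages until usage is below threshold."""
    while estimate_tokens(messages) > budget * threshold and len(messages) > 1:
        for i, m in enumerate(messages):
            if m["role"] != "system":
                del messages[i]
                break
        else:
            break  # only system messages left; nothing more to drop
    return messages
```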

Budgeting

Token Allocation

The context budget is divided into segments:
| Segment | Default | Description |
|---|---|---|
| System Prompt | 2,000 | Agent instructions |
| Rules | 500 | Behavioral rules |
| Skills | 500 | Skill definitions |
| Memory | 1,000 | Long-term memory |
| Tools Schema | 2,000 | Tool definitions |
| Tool Outputs | 20,000 | Tool call results |
| History | Remaining | Conversation history |
| Buffer | 1,000 | Safety margin |
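The arithmetic behind these segments is straightforward: the usable budget is the model limit minus the output reserve, and History gets whatever remains after the fixed segments. Using gpt-4o-mini's 128K limit and the defaults above (illustrative arithmetic, not the budgeter's actual code):

```python
model_limit = 128_000
output_reserve = 16_384
usable = model_limit - output_reserve  # 111,616 tokens

fixed = {
    "system_prompt": 2_000,
    "rules": 500,
    "skills": 500,
    "memory": 1_000,
    "tools_schema": 2_000,
    "tool_outputs": 20_000,
    "buffer": 1_000,
}
history = usable - sum(fixed.values())  # 84,616 tokens remain for history
```

These are the same figures shown in the snapshot example below (111,616 usable; 84,616 history budget).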

Custom Budgets

from praisonaiagents.context import ContextBudgeter

budgeter = ContextBudgeter(
    model="gpt-4o",
    system_prompt_budget=3000,
    tools_schema_budget=5000,
    memory_budget=2000,
)
budget = budgeter.allocate()
print(f"Usable: {budget.usable:,} tokens")

Monitoring

Enable Context Monitoring

agent = Agent(
    instructions="...",
    context=ManagerConfig(
        monitor_enabled=True,
        monitor_path="./context_debug.txt",
        monitor_format="human",  # or "json"
    ),
)

Snapshot Output Example

================================================================================
PRAISONAI CONTEXT SNAPSHOT
================================================================================
Timestamp: 2026-01-08T12:00:00Z
Model: gpt-4o-mini
Model Limit: 128,000 tokens
Output Reserve: 16,384 tokens
Usable Budget: 111,616 tokens

--------------------------------------------------------------------------------
TOKEN LEDGER
--------------------------------------------------------------------------------
Segment              |     Tokens |     Budget |    Usage
--------------------------------------------------------------------------------
System Prompt        |        150 |      2,000 |    7.5%
History              |      5,230 |     84,616 |    6.2%
Tool Outputs         |      1,200 |     20,000 |    6.0%
--------------------------------------------------------------------------------
TOTAL                |      6,580 |    111,616 |    5.9%

Percentage Display

Context utilization is displayed with smart formatting:
  • Values < 0.1%: shown as <0.1%
  • Values < 1%: shown with 2 decimal places (e.g., 0.25%)
  • Values >= 1%: shown with 1 decimal place (e.g., 5.3%)
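These display rules can be sketched as a small formatter (assumed implementation, matching the rules above):

```python
def format_pct(value: float) -> str:
    """Format a percentage value per the display rules."""
    if value < 0.1:
        return "<0.1%"
    if value < 1:
        return f"{value:.2f}%"
    return f"{value:.1f}%"
```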

Multi-Agent Policies

Isolated (Default)

Each agent has its own context ledger:
from praisonaiagents.context import ManagerConfig

agent1 = Agent(
    instructions="Researcher",
    context=ManagerConfig(policy="isolated"),
)
agent2 = Agent(
    instructions="Writer", 
    context=ManagerConfig(policy="isolated"),
)

Shared

Agents share a common context ledger:
from praisonaiagents import PraisonAIAgents

agents = PraisonAIAgents(
    agents=[agent1, agent2],
    context=ManagerConfig(policy="shared"),
)

Redaction & Security

Sensitive data is automatically redacted in snapshots:
  • API keys (OpenAI, Anthropic, Google, AWS, etc.)
  • Passwords and secrets
  • Email addresses (optional)
  • Custom patterns
agent = Agent(
    instructions="...",
    context=ManagerConfig(
        monitor_enabled=True,
        redact_sensitive=True,
    ),
)
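Pattern-based redaction of this kind can be illustrated with a couple of regexes. The patterns and the `[REDACTED]` replacement token below are assumptions for the sketch, not the library's actual rules:

```python
import re

# Hypothetical patterns for two common secret formats.
PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key IDs
]

def redact(text: str) -> str:
    """Replace any matched secret with a placeholder."""
    for pat in PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text
```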

Configuration Reference

ManagerConfig Options

| Option | Type | Default | Description |
|---|---|---|---|
| `auto_compact` | bool | `True` | Auto-optimize on threshold |
| `compact_threshold` | float | `0.8` | Trigger at this usage fraction |
| `strategy` | str | `"smart"` | Optimization strategy |
| `output_reserve` | int | Model-specific | Tokens reserved for output |
| `monitor_enabled` | bool | `False` | Enable snapshots |
| `monitor_path` | str | `None` | Snapshot file path |
| `monitor_format` | str | `"human"` | `"human"` or `"json"` |
| `redact_sensitive` | bool | `True` | Redact secrets |
| `policy` | str | `"isolated"` | Multi-agent policy |

See Also