Documentation Index
Fetch the complete documentation index at: https://docs.praison.ai/llms.txt
Use this file to discover all available pages before exploring further.
Enable response and prompt caching to improve performance and reduce API costs.
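To illustrate what response caching does conceptually (this is a minimal sketch, not PraisonAI's internal implementation), identical prompts can be memoized so a repeated request returns the stored response instead of triggering a new API call:

```python
import hashlib


class ResponseCache:
    """Minimal illustrative response cache: identical prompts
    return the stored response instead of a new model call."""

    def __init__(self):
        self._store = {}

    def _key(self, prompt: str) -> str:
        # Hash the prompt so the cache key is fixed-size.
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def get(self, prompt: str):
        return self._store.get(self._key(prompt))

    def put(self, prompt: str, response: str) -> None:
        self._store[self._key(prompt)] = response


# A cache hit skips the (expensive) model call entirely.
cache = ResponseCache()
if cache.get("What is caching?") is None:
    cache.put("What is caching?", "Storing results for reuse.")
```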
Quick Start
Simple Enable
Enable caching with defaults:

```python
from praisonaiagents import Agent

agent = Agent(
    name="Cached Agent",
    instructions="Use caching",
    caching=True
)
```
With Configuration
Configure caching behavior:

```python
from praisonaiagents import Agent
from praisonaiagents.config import CachingConfig

agent = Agent(
    name="Cached Agent",
    instructions="Use caching",
    caching=CachingConfig(
        enabled=True,
        prompt_caching=True
    )
)
```
Configuration Options
```python
from praisonaiagents.config import CachingConfig

config = CachingConfig(
    # Response caching
    enabled=True,
    # Prompt caching (provider-specific)
    prompt_caching=None
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| enabled | bool | True | Enable response caching |
| prompt_caching | bool \| None | None | Enable prompt caching (Anthropic, etc.) |
Common Patterns
Pattern 1: Full Caching
```python
from praisonaiagents import Agent
from praisonaiagents.config import CachingConfig

agent = Agent(
    name="Full Cache Agent",
    instructions="Maximum caching",
    caching=CachingConfig(
        enabled=True,
        prompt_caching=True
    )
)
```
Pattern 2: Disable Caching
```python
from praisonaiagents import Agent
from praisonaiagents.config import CachingConfig

agent = Agent(
    name="No Cache Agent",
    instructions="Always fresh responses",
    caching=CachingConfig(enabled=False)
)
```
Best Practices
Enable Prompt Caching for Anthropic
Anthropic Claude supports prompt caching for significant cost savings on repeated prompts.
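At the provider level (independent of PraisonAI, shown here with an example model id), Anthropic's Messages API implements prompt caching by marking the stable prompt prefix with a cache_control block, so repeated requests reuse the cached prefix:

```python
# Illustrative Anthropic Messages API payload shape for prompt caching.
# The cache_control entry marks the long, stable system prompt as the
# cacheable prefix; only the user turn varies between requests.
payload = {
    "model": "claude-3-5-sonnet-20241022",  # example model id
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": "Long, stable instructions reused across many calls...",
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "A varying question"}],
}
```

With prompt_caching=True, the framework is expected to handle this provider-specific detail for you.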
Disable for Real-Time Data
Turn off caching when agents need fresh, real-time information.
Performance: performance optimization tips
ExecutionConfig: execution limits configuration