ManagedAgent executes on Anthropic’s managed infrastructure while seamlessly persisting conversation history and session state to your choice of 7 database backends.

Quick Start

1. Basic Usage

Run gpt-4o-mini conversations with SQLite persistence in a few lines:
from praisonaiagents import Agent, db
from praisonai.integrations.managed_agents import ManagedAgent, ManagedConfig

managed = ManagedAgent(
    provider="anthropic",
    config=ManagedConfig(model="gpt-4o-mini", system="You are helpful")
)

agent = Agent(
    name="Assistant", 
    backend=managed,
    db=db(database_url="conversation.db"),
    session_id="session-1"
)

agent.start("Remember: The sky is blue")
2. Session Resume

Continue conversations after restarts:
# Later process - same session_id resumes conversation
agent2 = Agent(
    name="Assistant",
    backend=managed,
    db=db(database_url="conversation.db"), 
    session_id="session-1"  # Same ID = resume
)

response = agent2.start("What color is the sky?")
# Response: "The sky is blue" (remembers from previous session)

How It Works


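Conceptually, resume works by persisting every turn under a session_id and replaying the stored history when an agent is recreated with the same ID. A minimal sketch using only Python's built-in sqlite3; the table layout and function names here are illustrative assumptions, not PraisonDB's actual schema:

```python
import sqlite3

# Illustrative schema: PraisonDB's real table layout may differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS messages ("
    "  session_id TEXT, role TEXT, content TEXT,"
    "  created_at TEXT DEFAULT CURRENT_TIMESTAMP)"
)

def save_message(session_id: str, role: str, content: str) -> None:
    """Persist one turn of the conversation."""
    conn.execute(
        "INSERT INTO messages (session_id, role, content) VALUES (?, ?, ?)",
        (session_id, role, content),
    )
    conn.commit()

def load_history(session_id: str) -> list[dict]:
    """Rebuild prior context for the same session_id after a restart."""
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE session_id = ? ORDER BY rowid",
        (session_id,),
    ).fetchall()
    return [{"role": r, "content": c} for r, c in rows]

# First process: turns are written as they happen
save_message("session-1", "user", "Remember: The sky is blue")
save_message("session-1", "assistant", "Got it.")

# Later process: stored history is replayed before the new prompt is sent
history = load_history("session-1")
print(len(history))  # 2
```

The key property is that the model itself is stateless between processes; continuity comes entirely from the replayed history keyed by session_id.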
Database Backends

SQLite: zero external dependencies, file-based storage:
from praisonaiagents import Agent, db
from praisonai.integrations.managed_agents import ManagedAgent, ManagedConfig

# Phase 1: First session (teach facts)
managed = ManagedAgent(
    provider="anthropic",
    config=ManagedConfig(model="gpt-4o-mini", system="You are helpful")
)

agent = Agent(
    name="Assistant",
    backend=managed,
    db=db(database_url="sqlite:///my_data.db"),
    session_id="learning-session"
)

agent.start("Remember: PraisonAI is an AI agent framework")
agent.start("Also remember: It supports multiple LLM providers")

# Phase 2: Direct verification
import sqlite3
conn = sqlite3.connect("my_data.db")
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM messages WHERE session_id = ?", ("learning-session",))
message_count = cursor.fetchone()[0]
print(f"Messages stored: {message_count}")
conn.close()

# Phase 3: Session resume (new instance)
managed2 = ManagedAgent(
    provider="anthropic", 
    config=ManagedConfig(model="gpt-4o-mini", system="You are helpful")
)

agent2 = Agent(
    name="Assistant",
    backend=managed2,
    db=db(database_url="sqlite:///my_data.db"),
    session_id="learning-session"  # Same ID resumes
)

result = agent2.start("What did I tell you about PraisonAI?")
# Result: "You told me that PraisonAI is an AI agent framework and that it supports multiple LLM providers."
Prerequisites: None (built into Python)

Configuration Options

  • ManagedAgent API Reference: Complete ManagedAgent configuration options
  • PraisonDB Reference: Database adapter configuration options
| Component        | Purpose                      | Key Parameters                             |
| ---------------- | ---------------------------- | ------------------------------------------ |
| ManagedAgent     | Anthropic execution backend  | provider, config, api_key, timeout         |
| ManagedConfig    | Agent definition             | model, system, tools, packages             |
| PraisonDB        | Database adapter             | database_url, state_url, analytics_url     |
| DbSessionAdapter | Session bridge               | Auto-configured based on the database URL  |

Common Patterns

The most common pattern for persistent managed agents:
from praisonaiagents import Agent, db
from praisonai.integrations.managed_agents import ManagedAgent, ManagedConfig

def create_agent(session_id: str):
    managed = ManagedAgent(
        provider="anthropic",
        config=ManagedConfig(
            model="gpt-4o-mini",
            system="You are a helpful assistant with perfect memory"
        )
    )
    
    return Agent(
        name="PersistentAgent",
        backend=managed,
        db=db(database_url="postgresql://localhost/agentdb"),
        session_id=session_id
    )

# First conversation
agent1 = create_agent("user-123")
agent1.start("My name is Alice and I live in Paris")

# Later conversation (different process/server restart)
agent2 = create_agent("user-123")  # Same session_id
response = agent2.start("What's my name and where do I live?")
# Response: "Your name is Alice and you live in Paris."
Use different backends for different data types:
from praisonaiagents import Agent, db
from praisonai.integrations.managed_agents import ManagedAgent, ManagedConfig

managed = ManagedAgent(
    provider="anthropic",
    config=ManagedConfig(model="gpt-4o-mini")
)

agent = Agent(
    name="MultiBackendAgent",
    backend=managed,
    db=db(
        database_url="postgresql://localhost/conversations",  # Conversations
        state_url="redis://localhost:6379",                  # Fast state
        analytics_url="clickhouse://localhost:8123/default"  # Analytics
    ),
    session_id="multi-backend-session"
)

# All data types are automatically stored in appropriate backends
agent.start("Analyze this data and remember the insights")
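A plausible way an adapter can route each data type to the right store is to dispatch on the URL scheme. This sketch uses only the standard library; the routing table and `pick_backend` helper are assumptions for illustration, not PraisonDB's actual implementation:

```python
from urllib.parse import urlparse

# Hypothetical scheme-to-backend map; the real adapter's dispatch may differ.
BACKENDS = {
    "sqlite": "file-based conversation store",
    "postgresql": "relational conversation store",
    "redis": "fast key-value state store",
    "clickhouse": "columnar analytics store",
}

def pick_backend(url: str) -> str:
    """Route a database URL to a backend by its scheme."""
    scheme = urlparse(url).scheme or "sqlite"  # bare file paths default to SQLite
    try:
        return BACKENDS[scheme]
    except KeyError:
        raise ValueError(f"unsupported database scheme: {scheme!r}")

print(pick_backend("postgresql://localhost/conversations"))
print(pick_backend("redis://localhost:6379"))
```

Scheme-based dispatch keeps the Agent code identical across backends: only the URL strings passed to db() change.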
Best practices for session identification:
from praisonaiagents import Agent, db
from praisonai.integrations.managed_agents import ManagedAgent, ManagedConfig
import hashlib
from datetime import datetime

def generate_session_id(user_id: str, conversation_type: str) -> str:
    """Generate deterministic session IDs"""
    # Per-user, per-type sessions
    base = f"{user_id}-{conversation_type}"
    return hashlib.md5(base.encode()).hexdigest()[:16]

def get_daily_session_id(user_id: str) -> str:
    """Daily session rotation"""
    today = datetime.now().strftime("%Y-%m-%d")
    return f"{user_id}-{today}"

# Usage examples
user_session = generate_session_id("user-456", "support")
daily_session = get_daily_session_id("user-456")

managed = ManagedAgent(
    provider="anthropic",
    config=ManagedConfig(model="gpt-4o-mini")
)

agent = Agent(
    name="SupportAgent",
    backend=managed,
    db=db(database_url="sqlite:///support.db"),
    session_id=user_session  # Consistent across requests
)

Best Practices

  Session Management
  • Use meaningful session IDs (user-based, not random)
  • Implement session rotation for long conversations
  • Store session metadata for debugging
  • Handle concurrent access with proper locking

  Backend Selection
  • SQLite: Development, single-user apps, file-based persistence
  • PostgreSQL: Production apps, complex queries, ACID compliance
  • MySQL: Existing MySQL infrastructure, compatibility requirements
  • Redis: High-speed state, session caching, temporary data
  • MongoDB: Document-based state, flexible schemas
  • ClickHouse: Analytics, large-scale logging, data warehousing
  • JSON Files: Prototyping, zero dependencies, simple use cases

  Performance
  • Use connection pooling for database connections
  • Implement message compaction for long sessions
  • Cache frequently accessed session data
  • Use async database operations when possible
  • Monitor database performance metrics

  Error Handling
  • Implement retry logic for transient database failures
  • Handle session corruption gracefully
  • Log database errors for debugging
  • Provide fallback behavior when persistence fails
  • Test database connection before agent creation
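The retry advice above can be sketched as exponential backoff around a database call. Only sqlite3 and its OperationalError are real here; the `with_retries` wrapper itself is an illustrative assumption:

```python
import sqlite3
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.1):
    """Retry a database operation on transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except sqlite3.OperationalError:  # e.g. "database is locked"
            if attempt == attempts - 1:
                raise  # exhausted: surface the error for logging / fallback behavior
            time.sleep(base_delay * (2 ** attempt))

# Usage: wrap the fragile call in a closure
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (session_id TEXT, content TEXT)")
result = with_retries(
    lambda: conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
)
print(result)  # 0
```

Re-raising on the final attempt is what lets the caller log the error and fall back gracefully, as the error-handling practices above recommend.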

Related

  • Database Persistence: Traditional Agent persistence patterns
  • Session Management: Advanced session handling techniques