CockroachDB provides a distributed, PostgreSQL-compatible database that automatically scales and handles multi-region deployments for resilient AI agents.

Quick Start

Step 1: Create CockroachDB Cluster

  1. Sign up at cockroachlabs.cloud
  2. Create a new Serverless cluster
  3. Download the cluster certificate
  4. Copy the connection string
```bash
export COCKROACHDB_URL="postgresql://user:password@free-tier.gcp-us-central1.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full"
```
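Before wiring the URL into an agent, you can sanity-check its shape with the standard library. This is a convenience sketch; `check_crdb_url` is a hypothetical helper, not part of PraisonAI:

```python
from urllib.parse import urlsplit, parse_qs

def check_crdb_url(url: str) -> dict:
    """Sanity-check a CockroachDB connection URL before handing it to an agent."""
    parts = urlsplit(url)
    params = parse_qs(parts.query)
    assert parts.scheme == "postgresql", "CockroachDB speaks the PostgreSQL wire protocol"
    assert parts.port == 26257, "26257 is CockroachDB's default SQL port"
    assert params.get("sslmode") == ["verify-full"], "Cloud clusters require verified TLS"
    return {"host": parts.hostname, "database": parts.path.lstrip("/")}

info = check_crdb_url(
    "postgresql://user:password@free-tier.gcp-us-central1.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full"
)
print(info["database"])  # defaultdb
```

Failing fast here gives a clearer error than a TLS handshake failure deep inside the connection pool.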
Step 2: Create Distributed Agent

```python
from praisonaiagents import Agent

agent = Agent(
    name="Distributed Agent",
    instructions="You are a globally distributed AI assistant.",
    db={"database_url": "postgresql://user:pass@xxx.cockroachlabs.cloud:26257/db?sslmode=verify-full"}
)

# Data is automatically distributed across regions
result = agent.start("I need high availability and consistency")
print(result)
```
Step 3: Test Global Consistency

```python
# Write from one location
agent.start("Remember: I'm testing global distribution")

# Read from anywhere in the world - data is consistent
result = agent.start("What was I testing?")
print(result)  # "You were testing global distribution"
# Same result worldwide thanks to strong consistency
```

Installation

```bash
# CockroachDB uses the standard PostgreSQL driver
pip install "praisonai[cockroachdb]"
```

Configuration Options

| Option | Type | Default | Description |
|---|---|---|---|
| `database_url` | str | None | Full PostgreSQL connection URL with SSL |
| `max_retries` | int | 3 | Retries for serialization errors (40001) |
| `retry_delay` | float | 0.5 | Base delay between retries (seconds) |
| `pool_size` | int | 5 | Connection pool size |
| `auto_create_tables` | bool | True | Create tables automatically |

Usage Patterns

Using Convenience Class

```python
from praisonai.db.adapter import CockroachDB
from praisonaiagents import Agent

# Auto-reads from the COCKROACHDB_URL environment variable
db = CockroachDB()
agent = Agent(name="CRDB Agent", db=db)
```

Manual Configuration with Retry Settings

```python
from praisonai.db.adapter import PraisonAIDB
from praisonaiagents import Agent

db = PraisonAIDB(
    database_url="postgresql://user:pass@cluster.cockroachlabs.cloud:26257/mydb?sslmode=verify-full",
    max_retries=5,    # Extra retries for serialization conflicts
    retry_delay=1.0,  # 1 second base delay
    pool_size=10      # Larger pool for distributed workload
)

agent = Agent(name="High-Availability Agent", db=db)
```

Multi-Region Agent Setup

```python
import os
from praisonai import ManagedAgent, LocalManagedConfig
from praisonai.db.adapter import CockroachDB
from praisonaiagents import Agent

# Create a globally distributed agent
db = CockroachDB(database_url=os.environ["COCKROACHDB_URL"])
managed = ManagedAgent(
    provider="local",
    db=db,
    config=LocalManagedConfig(
        model="gpt-4o-mini",
        name="Global Agent",
        system="You are a globally distributed AI assistant with strong consistency."
    )
)

agent = Agent(name="User", backend=managed)

# Agent data is automatically distributed across regions
result1 = agent.run("Store this globally: I'm building a multi-region application")
print(f"Agent: {result1}")

result2 = agent.run("The app needs to handle users from different continents")
print(f"Agent: {result2}")

# Save the session - data is replicated globally
session_data = managed.save_ids()
print(f"Global session: {session_data['session_id']}")

# Resume from any region - same data everywhere
managed2 = ManagedAgent(provider="local", db=CockroachDB())
managed2.resume_session(session_data["session_id"])

agent2 = Agent(name="User", backend=managed2)
result3 = agent2.run("What application am I building?")
print(f"Resumed Agent: {result3}")
# Works from any region with strong consistency
```

CockroachDB-Specific Features

Automatic Serialization Retry

CockroachDB may return serialization errors (40001) under high contention. PraisonAI handles these automatically:
```python
from praisonai.db.adapter import CockroachDB

# Tuned for serialization conflict handling
db = CockroachDB(
    database_url="postgresql://...",
    max_retries=10,  # More retries for high-contention workloads
    retry_delay=0.1  # Shorter delays for faster retry
)

# The agent automatically retries on serialization conflicts
agent = Agent(name="High-Contention Agent", db=db)
```
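The retry behavior described above follows a standard backoff pattern. The sketch below illustrates that pattern in isolation; it is not PraisonAI's actual implementation, and `SerializationError` merely stands in for a driver error carrying SQLSTATE 40001:

```python
import random
import time

class SerializationError(Exception):
    """Stands in for a database error with SQLSTATE 40001."""
    sqlstate = "40001"

def with_retry(txn, max_retries=3, retry_delay=0.5):
    """Run a transaction, retrying serialization conflicts with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return txn()
        except SerializationError:
            if attempt == max_retries:
                raise
            # Exponential backoff with jitter spreads out competing retries
            time.sleep(retry_delay * (2 ** attempt) * random.uniform(0.5, 1.0))

# Simulate a transaction that conflicts twice, then commits
attempts = {"n": 0}
def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationError()
    return "committed"

print(with_retry(flaky_txn, retry_delay=0.05))  # committed
```

Jitter matters here: if every contending agent retried on the same schedule, they would collide again on each attempt.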

Global Data Distribution

Data is automatically distributed across regions:
```sql
-- View data distribution (run in the CockroachDB SQL shell)
SHOW RANGES FROM TABLE praison_sessions;
SHOW RANGES FROM TABLE praison_messages;

-- See which nodes hold the leaseholders for your data
SELECT lease_holder, count(*) FROM [SHOW RANGES FROM TABLE praison_sessions] GROUP BY lease_holder;
```

Follower Reads

Reduce read latency by serving reads from nearby replicas. Follower reads are enabled per query with `AS OF SYSTEM TIME follower_read_timestamp()` rather than through the connection string; the `options=--cluster` parameter below only routes the connection to the right Serverless cluster:

```python
# The --cluster option routes the connection to your Serverless cluster;
# follower reads themselves are issued per query with
# AS OF SYSTEM TIME follower_read_timestamp()
follower_url = "postgresql://user:pass@cluster.cockroachlabs.cloud:26257/db?sslmode=verify-full&options=--cluster=my-cluster"

# Follower reads may be slightly stale (a few seconds) but much faster
agent = Agent(
    name="Fast Read Agent",
    instructions="You prioritize read speed with slight staleness acceptable.",
    db={"database_url": follower_url}
)
```

Backup and Point-in-Time Recovery

CockroachDB Cloud takes automatic backups; you can also create your own backup schedules and restore to a point in time:

```sql
-- Restore to a point in time
RESTORE FROM LATEST IN 'gs://backup-bucket' AS OF SYSTEM TIME '2024-01-15 14:00:00';

-- Create scheduled backups
CREATE SCHEDULE FOR BACKUP INTO 'gs://my-backup-bucket'
    RECURRING '@daily'
    WITH SCHEDULE OPTIONS first_run = 'now';
```

Best Practices

CockroachDB uses optimistic concurrency control. Design for retries:
```python
from praisonai.db.adapter import CockroachDB

# Tune retry settings for your workload
db = CockroachDB(
    max_retries=10,   # High-contention: more retries
    retry_delay=0.05  # Fast retry for short transactions
)

# Keep agent operations short and idempotent
agent = Agent(
    name="Optimized Agent",
    instructions="Keep responses concise for fast database transactions.",
    db=db
)
```
Distributed systems benefit from larger connection pools:

```python
db = CockroachDB(
    database_url="postgresql://...",
    pool_size=20,  # Larger pool for distributed load
    max_retries=5
)

# Multiple agents can share the same pool efficiently
agent1 = Agent(name="Agent 1", db=db)
agent2 = Agent(name="Agent 2", db=db)
```
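As a rough sizing heuristic (my assumption, not a PraisonAI or CockroachDB rule), provision about one connection per concurrently running agent per query it keeps in flight, plus a little headroom; `suggest_pool_size` is a hypothetical helper:

```python
def suggest_pool_size(concurrent_agents: int, queries_in_flight: int = 1,
                      headroom: int = 2) -> int:
    """Rough connection-pool sizing heuristic for pools shared across agents."""
    return concurrent_agents * queries_in_flight + headroom

# e.g. 8 agents, each with up to 2 queries in flight
print(suggest_pool_size(8, 2))  # 18
```

Oversized pools waste cluster connection slots; undersized ones make agents queue for connections under load.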
Track key CockroachDB metrics:
  • Serialization conflicts: High rate indicates need for retry tuning
  • Node latency: Shows geographic distribution performance
  • Storage usage: Plan for data growth
Use the CockroachDB Console for monitoring and alerts.
Design agent interactions for global distribution:

```python
# Store region information in session metadata
agent = Agent(
    name="Regional Agent",
    instructions="You serve users globally with consistent data.",
    db=CockroachDB()
)

# Session metadata can track the user's region
session_metadata = {"user_region": "us-east", "timezone": "America/New_York"}
```

Environment Variables

| Variable | Required | Format | Example |
|---|---|---|---|
| `COCKROACHDB_URL` | Yes | `postgresql://...cockroachlabs.cloud:26257/...` | `postgresql://user:pass@cluster.gcp-us-central1.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full` |
| `OPENAI_API_KEY` | Yes | `sk-...` | `sk-1234567890abcdef...` |
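The convenience class reads `COCKROACHDB_URL` for you; if you load it manually, it is worth failing fast with a clear message when the variable is unset. A small sketch, where `load_db_url` is a hypothetical helper:

```python
import os

def load_db_url() -> str:
    """Fetch the CockroachDB connection URL, failing fast if it is unset."""
    url = os.environ.get("COCKROACHDB_URL")
    if not url:
        raise RuntimeError("Set COCKROACHDB_URL before starting agents")
    return url
```

A `RuntimeError` at startup is much easier to diagnose than a connection failure surfacing mid-conversation.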

Performance Characteristics

| Metric | Serverless | Dedicated | Use Case |
|---|---|---|---|
| Latency | 10-50ms | 5-20ms | Real-time chat |
| Throughput | 1,000 QPS | 10,000+ QPS | High-volume agents |
| Consistency | Strong | Strong | Financial applications |
| Availability | 99.9% | 99.99% | Mission-critical systems |

Troubleshooting

Serialization Conflict Errors

If you see "restart transaction: TransactionRetryWithProtoRefreshError":

```python
# Increase retry settings
db = CockroachDB(max_retries=20, retry_delay=0.1)

# Alternatively, keep transactions small so each retry is cheap
```

SSL Certificate Issues

Ensure SSL is properly configured:

```bash
# Download the cluster CA certificate from the CockroachDB Cloud Console,
# or via the cert endpoint (replace <cluster-id> with your cluster's ID)
curl --create-dirs -o ca.crt "https://cockroachlabs.cloud/clusters/<cluster-id>/cert"

# Reference the certificate in the connection string
export COCKROACHDB_URL="postgresql://user:pass@cluster.cockroachlabs.cloud:26257/db?sslmode=verify-full&sslrootcert=ca.crt"
```

High Latency

For better performance across regions:

```python
# Use a connection string that targets the closest regional host
regional_url = "postgresql://user:pass@us-east-1.cluster.cockroachlabs.cloud:26257/db?sslmode=verify-full"
```

Connection Pool Exhaustion

If you hit connection limits:

```python
db = CockroachDB(
    pool_size=50,  # Increase pool size
    max_retries=3  # Reduce retries to fail faster
)
```

Cloud Databases Overview

Compare all cloud database providers

Multi-Region Deployment

Deploy agents across multiple regions