LocalAgent runs the agent execution loop locally in your process. It supports any LLM via litellm routing and, optionally, cloud compute for tool sandboxing.

Quick Start

1. Simplest Usage

Create a local agent with minimal configuration:
from praisonai import LocalAgent, LocalAgentConfig
from praisonaiagents import Agent

local = LocalAgent(
    config=LocalAgentConfig(
        model="gpt-4o-mini"
    )
)

agent = Agent(name="assistant", backend=local)
result = agent.start("Explain quantum computing in simple terms")

2. With Cloud Compute Sandbox

Use cloud compute for secure tool execution:
from praisonai import LocalAgent, LocalAgentConfig
from praisonaiagents import Agent

local = LocalAgent(
    compute="e2b",  # Cloud sandbox for tools
    config=LocalAgentConfig(
        model="gpt-4o-mini",
        tools=["execute_command", "read_file", "write_file"]
    )
)

agent = Agent(name="coder", backend=local)
result = agent.start("Create a Python script that analyzes a CSV file")

How It Works

| Component | Location | Purpose |
|---|---|---|
| Agent Loop | Local Process | Complete execution control |
| LLM | External API | Any provider via litellm routing |
| Tools | Local or Cloud | Configurable execution environment |
| Session State | Local Memory | Process-managed state |

Choosing an LLM

Use OpenAI models with API key authentication:
from praisonai import LocalAgent, LocalAgentConfig
import os

os.environ["OPENAI_API_KEY"] = "your-key-here"

local = LocalAgent(
    config=LocalAgentConfig(
        model="gpt-4o"  # or "gpt-4o-mini", "gpt-3.5-turbo"
    )
)
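
Other providers route through the same litellm prefixes. A short sketch, assuming GOOGLE_API_KEY for Gemini and a locally running Ollama server:
from praisonai import LocalAgent, LocalAgentConfig
import os

# Gemini via litellm routing (assumes GOOGLE_API_KEY is set)
os.environ["GOOGLE_API_KEY"] = "your-key-here"

gemini = LocalAgent(
    config=LocalAgentConfig(
        model="gemini/gemini-2.0-flash"
    )
)

# Ollama via litellm routing (assumes an Ollama server is running locally)
ollama = LocalAgent(
    config=LocalAgentConfig(
        model="ollama/llama3"
    )
)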

Choosing a Compute Backend

Execute tools in a local subprocess (fastest, least secure):
from praisonai import LocalAgent, LocalAgentConfig

local = LocalAgent(
    # No compute parameter = local subprocess
    config=LocalAgentConfig(
        model="gpt-4o-mini",
        tools=["execute_command", "read_file", "write_file"]
    )
)
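
Cloud sandboxes work the same way via the compute parameter; besides "e2b" (shown in the Quick Start), "modal" is supported. A sketch, assuming MODAL_TOKEN is set as described under Best Practices:
from praisonai import LocalAgent, LocalAgentConfig

local = LocalAgent(
    compute="modal",  # Cloud sandbox via Modal (requires MODAL_TOKEN)
    config=LocalAgentConfig(
        model="gpt-4o-mini",
        tools=["execute_command", "read_file", "write_file"]
    )
)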

Compute Selection Guide

| Backend | Isolation | Best For |
|---|---|---|
| Local subprocess (default) | None | Development and trusted environments |
| Docker | Moderate | Good performance with some isolation |
| E2B | Cloud sandbox | Maximum security for untrusted code |
| Modal | Cloud sandbox | ML workloads |
| Flyio | Cloud sandbox | Edge deployments |

Configuration Options

LocalAgent API Reference

Complete LocalAgent configuration options

LocalAgentConfig Reference

Configuration object parameters

| Option | Type | Default | Description |
|---|---|---|---|
| model | str | Required | LLM model (supports litellm prefixes) |
| system | str | "You are a helpful assistant." | System prompt |
| tools | List[str] | [] | Available tool names |
| packages | Dict | None | Package dependencies for compute |
| host_packages_ok | bool | False | Allow host package installation |
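
Putting the options together, here is a sketch of a fuller configuration. The exact shape of the packages mapping is an assumption (a pip-style name-to-version dict), so check the references above for the supported format:
from praisonai import LocalAgent, LocalAgentConfig

config = LocalAgentConfig(
    model="gpt-4o-mini",
    system="You are a data analysis assistant.",
    tools=["execute_command", "read_file", "write_file"],
    packages={"pandas": "2.2.*"},  # assumed format: package name -> version spec
    host_packages_ok=False,        # keep installs out of the host environment
)

local = LocalAgent(compute="e2b", config=config)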

Common Patterns

Switching LLMs

Change LLM providers without touching other code:
from praisonai import LocalAgent, LocalAgentConfig
from praisonaiagents import Agent

# Start with OpenAI
config = LocalAgentConfig(
    model="gpt-4o-mini",
    system="You are a helpful coding assistant."
)

# Switch to Gemini
config.model = "gemini/gemini-2.0-flash"

# Switch to Ollama
config.model = "ollama/llama3"

# Same agent setup works with any model
local = LocalAgent(config=config)
agent = Agent(name="coder", backend=local)

Tool Execution

Configure tools for different execution environments:
from praisonai import LocalAgent, LocalAgentConfig

# Local execution (fast, less secure)
local_tools = LocalAgent(
    config=LocalAgentConfig(
        model="gpt-4o-mini",
        tools=["read_file", "write_file", "execute_command"]
    )
)

# Cloud execution (slower, more secure)
cloud_tools = LocalAgent(
    compute="e2b",
    config=LocalAgentConfig(
        model="gpt-4o-mini", 
        tools=["read_file", "write_file", "execute_command"]
    )
)

Multi-turn Conversations

Maintain conversation state locally:
from praisonai import LocalAgent, LocalAgentConfig
from praisonaiagents import Agent

local = LocalAgent(
    config=LocalAgentConfig(
        model="gpt-4o-mini",
        system="You are a helpful assistant with memory."
    )
)

agent = Agent(name="assistant", backend=local)

# First turn
agent.start("My name is Alice")

# Second turn - state maintained in local process
response = agent.start("What's my name?")
# Response: "Your name is Alice"

Usage Tracking

Monitor local agent resource usage:
# After execution
session_info = local.retrieve_session()
print(f"Input tokens: {session_info['usage']['input_tokens']}")
print(f"Output tokens: {session_info['usage']['output_tokens']}")

# List sessions
sessions = local.list_sessions()
for session in sessions:
    print(f"Session: {session['id']}")

Migrating from ManagedAgent

Update deprecated factory patterns to use the new canonical classes:
| Old | New |
|---|---|
| ManagedAgent(provider="openai", config=LocalManagedConfig(model="gpt-4o")) | LocalAgent(config=LocalAgentConfig(model="gpt-4o")) |
| ManagedAgent(provider="ollama", config=LocalManagedConfig(model="llama3")) | LocalAgent(config=LocalAgentConfig(model="ollama/llama3")) |
| ManagedAgent(provider="gemini", config=LocalManagedConfig(...)) | LocalAgent(config=LocalAgentConfig(model="gemini/gemini-2.0-flash")) |
| ManagedAgent(provider="e2b", config=LocalManagedConfig(...)) | LocalAgent(compute="e2b", config=LocalAgentConfig(...)) |
| ManagedAgent(provider="modal", config=LocalManagedConfig(...)) | LocalAgent(compute="modal", config=LocalAgentConfig(...)) |
| ManagedAgent(provider="local", config=LocalManagedConfig(...)) | LocalAgent(config=LocalAgentConfig(...)) |

Best Practices

Choose compute backends based on your trust and security requirements:
  • Use local subprocess for development and trusted environments
  • Use Docker for moderate isolation with good performance
  • Use cloud providers (E2B, Modal) for maximum security and isolation
  • Match compute choice to your specific use case (Modal for ML, Flyio for edge)
Use litellm prefixes correctly for different providers:
  • Always include provider prefix for Gemini: gemini/gemini-2.0-flash
  • Always include provider prefix for Ollama: ollama/llama3
  • OpenAI models can omit prefix: gpt-4o or openai/gpt-4o
  • Test model availability before production deployment
Use the new canonical LocalAgent class instead of the deprecated factory:
  • Avoid the provider= parameter entirely on LocalAgent constructors
  • Use config.model= to specify LLM models with appropriate litellm prefixes
  • Use compute= to specify sandboxing backends separately from LLM choice
  • This provides cleaner separation of concerns and better maintainability
Properly configure API keys and credentials, as sketched after this list:
  • Set LLM provider keys (OPENAI_API_KEY, GOOGLE_API_KEY, etc.)
  • Set compute provider keys (E2B_API_KEY, MODAL_TOKEN, etc.)
  • Use environment variable management tools for production deployments
  • Test authentication before deploying to avoid runtime failures
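
A minimal fail-fast credential check before deployment; E2B_API_KEY and MODAL_TOKEN are the variable names used in this guide, so adjust the list to your providers:
import os

# Keys are expected from your environment (set via your secret manager
# or shell: OPENAI_API_KEY, GOOGLE_API_KEY, E2B_API_KEY, MODAL_TOKEN, ...)
required = [
    "OPENAI_API_KEY",  # LLM provider
    "E2B_API_KEY",     # compute provider (only when compute="e2b")
]

missing = [key for key in required if not os.environ.get(key)]
if missing:
    raise RuntimeError(f"Missing credentials: {', '.join(missing)}")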

Hosted Agent

Run entire agent loops on Anthropic’s managed runtime

Sandbox

Tool execution sandboxing options

ManagedAgent Persistence

Database integration patterns

Session Info

Session metadata and usage tracking