Sandboxed agents keep the agent loop local while optionally running tools in secure sandboxes.

Quick Start

Step 1: Basic Usage

Local loop, local tools - the simplest configuration.
from praisonai import SandboxedAgent, SandboxedAgentConfig
from praisonaiagents import Agent

sandboxed = SandboxedAgent(
    config=SandboxedAgentConfig(
        model="gpt-4o",
        system="You are a coding assistant.",
    )
)

agent = Agent(name="coder", backend=sandboxed)
result = agent.start("Create a Python script that prints hello")
Step 2: With Tool Sandboxing

Local loop, tools run in E2B sandbox for security.
from praisonai import SandboxedAgent, SandboxedAgentConfig
from praisonaiagents import Agent

sandboxed = SandboxedAgent(
    compute="e2b",  # Tools run in E2B, loop stays local
    config=SandboxedAgentConfig(
        model="gpt-4o",
        system="You are a coding assistant.",
        tools=["execute_command", "read_file", "write_file"],
        packages={"pip": ["pandas", "numpy"]},
    )
)

agent = Agent(name="secure_coder", backend=sandboxed)
result = agent.start("Analyze CSV data with pandas")

How It Works

| Component      | Location         | Purpose                            |
|----------------|------------------|------------------------------------|
| Agent Loop     | Local            | LLM calls, decision making, memory |
| Tool Execution | Local or Sandbox | Code execution, file operations    |
| Memory & State | Local            | Session persistence, context       |
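The split in the table above can be sketched as follows. This is an illustrative mock, not praisonai's actual internals: the agent loop stays local and routes every tool call through one place, and the compute setting decides whether the call runs on the host or would be shipped to a sandbox.

```python
# Illustrative sketch only (not praisonai's real implementation): the
# `compute` setting decides whether a tool call runs on the host or
# would be forwarded to a sandbox such as E2B or Docker.
from typing import Callable, Dict, Optional

class ToolRouter:
    def __init__(self, compute: Optional[str] = None):
        self.compute = compute          # None => tools run on the host
        self.tools: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self.tools[name] = fn

    def call(self, name: str, *args: str) -> str:
        result = self.tools[name](*args)
        if self.compute is None:
            return result               # host execution, no isolation
        # The real backend would execute inside the sandbox instead;
        # here we only tag the result to show the routing decision.
        return f"[{self.compute}] {result}"

router = ToolRouter(compute="e2b")
router.register("read_file", lambda path: f"contents of {path}")
print(router.call("read_file", "data.csv"))  # [e2b] contents of data.csv
```

The key design point is that memory, LLM calls, and the decision loop never leave the host; only the tool invocation crosses the boundary.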

Configuration Options

SandboxedAgentConfig Reference

Full configuration options for sandboxed agents

Essential Configuration

| Option           | Type                 | Default                                                                    | Description                     |
|------------------|----------------------|----------------------------------------------------------------------------|---------------------------------|
| model            | str                  | "gpt-4o"                                                                   | LLM model to use                |
| system           | str                  | "You are a helpful coding assistant."                                      | System prompt                   |
| tools            | List[str]            | ["execute_command", "read_file", "write_file", "list_files", "search_web"] | Available tools                 |
| packages         | Dict[str, List[str]] | None                                                                       | Package dependencies            |
| networking       | Dict[str, Any]       | {"type": "unrestricted"}                                                   | Network access rules            |
| host_packages_ok | bool                 | False                                                                      | Allow host package installation |
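Putting the reference table together, a fully specified config might look like the sketch below. The field names come from the table above; the particular values (pip package, allowed host) are illustrative choices, not defaults.

```python
from praisonai import SandboxedAgent, SandboxedAgentConfig

# Every field from the reference table, set explicitly.
config = SandboxedAgentConfig(
    model="gpt-4o",
    system="You are a helpful coding assistant.",
    tools=["execute_command", "read_file", "write_file", "list_files", "search_web"],
    packages={"pip": ["pandas"]},                                    # installed in the sandbox
    networking={"type": "limited", "allowed_hosts": ["api.github.com"]},
    host_packages_ok=False,                                          # never install on the host
)

sandboxed = SandboxedAgent(compute="e2b", config=config)
```

Omitting any field falls back to the defaults listed in the table.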

Common Patterns

Secure Development Environment

# Development with package isolation
sandboxed = SandboxedAgent(
    compute="docker",
    config=SandboxedAgentConfig(
        model="claude-sonnet-4-6",
        tools=["execute_command", "read_file", "write_file"],
        packages={
            "pip": ["requests", "beautifulsoup4"],
            "npm": ["express", "lodash"]
        },
        networking={"type": "limited", "allowed_hosts": ["api.github.com"]}
    )
)

Local Development (No Sandbox)

# Fast iteration, local execution
sandboxed = SandboxedAgent(
    config=SandboxedAgentConfig(
        model="gpt-4o-mini",
        host_packages_ok=True,  # Allow host package installs
        tools=["execute_command", "search_web"]
    )
)

Multi-Provider Flexibility

# Use with any LLM provider
sandboxed = SandboxedAgent(
    config=SandboxedAgentConfig(
        model="ollama/llama3.3",  # Local model
        system="You are a Python expert.",
        tools=["execute_command"]
    )
)

Best Practices

Always use sandboxing when running untrusted code or installing packages:
# Secure: Tools run in sandbox
SandboxedAgent(compute="e2b", config=config)

# Insecure: Tools run on host
SandboxedAgent(config=config)  # Only if you trust the code
  • Use local execution for trusted environments and faster iteration
  • Use sandbox for production or when handling user-generated code
  • Consider model choice: gpt-4o-mini for speed, claude-sonnet-4-6 for complex tasks
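One way to apply these rules consistently is to derive the compute setting from the deployment environment. `pick_compute` below is a hypothetical helper, not part of praisonai; the `APP_ENV` variable name is likewise an assumption.

```python
import os
from typing import Optional

def pick_compute(env: Optional[str] = None) -> Optional[str]:
    """Hypothetical helper (not part of praisonai): sandbox tools in
    production, run them on the host for fast local iteration."""
    env = env or os.environ.get("APP_ENV", "dev")
    # compute=None keeps tools on the host; "e2b" sandboxes them.
    return "e2b" if env == "prod" else None

print(pick_compute("prod"))  # e2b
print(pick_compute("dev"))   # None
```

Passing the result as `SandboxedAgent(compute=pick_compute(), config=config)` keeps production secure by default while leaving development fast.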
Backwards compatibility: LocalManagedAgent and SandboxedAgent are the same class:
# Both imports work identically
from praisonai import LocalManagedAgent, LocalManagedConfig
from praisonai import SandboxedAgent, SandboxedAgentConfig

# Same functionality
old_agent = LocalManagedAgent(config=LocalManagedConfig())
new_agent = SandboxedAgent(config=SandboxedAgentConfig())
Note the distinction between the two backends:
  • SandboxedAgent: Agent loop stays local, only tools may be sandboxed
  • Managed Runtime: Entire agent loop runs remotely (see Managed Runtime Protocol)

Related Pages

  • Managed Runtime Protocol: Remote agent runtime for full managed execution
  • Managed Agents: Core concepts for managed agent backends