Quick Start
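A minimal sketch of constructing and running an agent. The import path (`local_agent`) and the `run` method name are assumptions; the class and config names match the API reference later on this page:

```python
# Hypothetical import path; LocalAgent and LocalAgentConfig match
# the API reference later on this page
from local_agent import LocalAgent, LocalAgentConfig

# The agent loop runs in your local process; the model string is
# routed through litellm
agent = LocalAgent(config=LocalAgentConfig(model="gpt-4o"))

# run() is an assumed method name for a single agent turn
result = agent.run("List the Python files in this project.")
print(result)
```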
How It Works
| Component | Location | Purpose |
|---|---|---|
| Agent Loop | Local Process | Complete execution control |
| LLM | External API | Any provider via litellm routing |
| Tools | Local or Cloud | Configurable execution environment |
| Session State | Local Memory | Process-managed state |
Choosing an LLM
- OpenAI
- Gemini
- Ollama
- Anthropic
- Custom
Use OpenAI models with API key authentication.
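A minimal sketch, assuming litellm's standard `OPENAI_API_KEY` environment variable and a hypothetical import path:

```python
import os

from local_agent import LocalAgent, LocalAgentConfig  # hypothetical import path

# litellm picks up the key from the environment
os.environ["OPENAI_API_KEY"] = "sk-..."

# OpenAI models work with or without the provider prefix
agent = LocalAgent(config=LocalAgentConfig(model="gpt-4o"))  # or "openai/gpt-4o"
```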
Choosing a Compute Backend
- None (Local)
- Docker
- E2B
- Modal
- Flyio
- Daytona
Execute tools in a local subprocess (fastest, least secure).
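A sketch of the default local backend; per the migration table below, omitting `compute=` runs tools in a local subprocess (the tool name is illustrative):

```python
from local_agent import LocalAgent, LocalAgentConfig  # hypothetical import path

# No compute= argument: tool calls execute in a local subprocess
agent = LocalAgent(
    config=LocalAgentConfig(
        model="gpt-4o",
        tools=["bash"],  # illustrative tool name
    )
)

# Moving to a sandbox is a one-argument change, e.g. compute="docker" or compute="e2b"
```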
Compute Selection Guide
A summary of the guidance from Best Practices below:
| Backend | Best For |
|---|---|
| None (Local) | Development and trusted environments (fastest, least secure) |
| Docker | Moderate isolation with good performance |
| E2B | Maximum security and isolation |
| Modal | Maximum security and isolation; ML workloads |
| Flyio | Edge deployments |
Configuration Options
LocalAgent API Reference
Complete LocalAgent constructor options, as used throughout this page:
| Option | Type | Default | Description |
|---|---|---|---|
| config | LocalAgentConfig | Required | Agent configuration object (see below) |
| compute | str | None (local subprocess) | Sandboxing backend: "docker", "e2b", "modal", "flyio", or "daytona" |
LocalAgentConfig Reference
Configuration object parameters
| Option | Type | Default | Description |
|---|---|---|---|
| model | str | Required | LLM model (supports litellm prefixes) |
| system | str | "You are a helpful assistant." | System prompt |
| tools | List[str] | [] | Available tool names |
| packages | Dict | None | Package dependencies for compute |
| host_packages_ok | bool | False | Allow host package installation |
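Putting the options together in one config (the tool names and the shape of the `packages` dict are illustrative assumptions):

```python
config = LocalAgentConfig(
    model="ollama/llama3",             # litellm prefix selects the provider
    system="You are a code reviewer.",
    tools=["bash", "editor"],          # illustrative tool names
    packages={"pip": ["requests"]},    # dict shape is an assumption
    host_packages_ok=False,            # keep host package installation disabled
)
agent = LocalAgent(config=config)
```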
Common Patterns
Switching LLMs
Change LLM providers without touching other code:
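For example, only the litellm model string changes between providers:

```python
# Everything except the model string stays the same
openai_agent = LocalAgent(config=LocalAgentConfig(model="gpt-4o"))
gemini_agent = LocalAgent(config=LocalAgentConfig(model="gemini/gemini-2.0-flash"))
ollama_agent = LocalAgent(config=LocalAgentConfig(model="ollama/llama3"))
```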
Tool Execution
Configure tools for different execution environments:
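A sketch contrasting local and sandboxed tool execution (tool names are illustrative):

```python
# Trusted environment: tools run in a local subprocess (the default)
trusted = LocalAgent(
    config=LocalAgentConfig(model="gpt-4o", tools=["bash"])
)

# Untrusted code: same config, isolated via a compute backend
isolated = LocalAgent(
    compute="e2b",  # or "docker", "modal", "flyio", "daytona"
    config=LocalAgentConfig(model="gpt-4o", tools=["bash"]),
)
```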
Multi-turn Conversations
Maintain conversation state locally:
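Because session state is held in local process memory, consecutive calls on the same instance share conversation history; a sketch assuming a `run` method:

```python
agent = LocalAgent(config=LocalAgentConfig(model="gpt-4o"))

# Both turns hit the same in-process session state, so the second
# turn can reference the first (run() is an assumed method name)
agent.run("My name is Ada.")
reply = agent.run("What is my name?")
```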
Usage Tracking
Monitor local agent resource usage:
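A sketch only; the attribute names below are assumptions, and the Session Info page linked under Related documents the actual metadata and usage fields:

```python
agent = LocalAgent(config=LocalAgentConfig(model="gpt-4o"))
agent.run("Summarize the repository.")

# Illustrative attribute names; see Session Info for the real fields
print(agent.session.usage)
```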
Migrating from ManagedAgent
Update deprecated factory patterns to use the new canonical classes:
| Old | New |
|---|---|
| ManagedAgent(provider="openai", config=LocalManagedConfig(model="gpt-4o")) | LocalAgent(config=LocalAgentConfig(model="gpt-4o")) |
| ManagedAgent(provider="ollama", config=LocalManagedConfig(model="llama3")) | LocalAgent(config=LocalAgentConfig(model="ollama/llama3")) |
| ManagedAgent(provider="gemini", config=LocalManagedConfig(...)) | LocalAgent(config=LocalAgentConfig(model="gemini/gemini-2.0-flash")) |
| ManagedAgent(provider="e2b", config=LocalManagedConfig(...)) | LocalAgent(compute="e2b", config=LocalAgentConfig(...)) |
| ManagedAgent(provider="modal", config=LocalManagedConfig(...)) | LocalAgent(compute="modal", config=LocalAgentConfig(...)) |
| ManagedAgent(provider="local", config=LocalManagedConfig(...)) | LocalAgent(config=LocalAgentConfig(...)) |
Best Practices
Compute Backend Selection
Choose compute backends based on your trust and security requirements:
- Use local subprocess for development and trusted environments
- Use Docker for moderate isolation with good performance
- Use cloud providers (E2B, Modal) for maximum security and isolation
- Match compute choice to your specific use case (Modal for ML, Flyio for edge)
Model Selection with Litellm
Use litellm prefixes correctly for different providers:
- Always include the provider prefix for Gemini: `gemini/gemini-2.0-flash`
- Always include the provider prefix for Ollama: `ollama/llama3`
- OpenAI models can omit the prefix: `gpt-4o` or `openai/gpt-4o`
- Test model availability before production deployment
Preferred LocalAgent Usage
Use the new canonical LocalAgent class instead of the deprecated factory (see the sketch after this list):
- Avoid the `provider=` parameter entirely on LocalAgent constructors
- Use `config.model` to specify LLM models with appropriate litellm prefixes
- Use `compute=` to specify sandboxing backends separately from the LLM choice
- This provides cleaner separation of concerns and better maintainability
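A before/after sketch of the preferred style, drawn from the migration table above:

```python
# Deprecated: provider= conflated the LLM choice with the sandbox choice
# agent = ManagedAgent(provider="modal", config=LocalManagedConfig(model="gpt-4o"))

# Preferred: the model and the compute backend are configured independently
agent = LocalAgent(
    compute="modal",
    config=LocalAgentConfig(model="gpt-4o"),
)
```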
Environment Variables
Properly configure API keys and credentials (a fail-fast sketch follows this list):
- Set LLM provider keys (`OPENAI_API_KEY`, `GOOGLE_API_KEY`, etc.)
- Set compute provider keys (`E2B_API_KEY`, `MODAL_TOKEN`, etc.)
- Use environment variable management tools for production deployments
- Test authentication before deploying to avoid runtime failures
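One way to fail fast on missing credentials before the agent loop starts; the key names come from the list above, and which ones you need depends on your providers:

```python
import os

# Adjust to the LLM and compute providers you actually use
required = ["OPENAI_API_KEY", "E2B_API_KEY"]
missing = [key for key in required if not os.environ.get(key)]
if missing:
    raise RuntimeError(f"Missing credentials: {', '.join(missing)}")
```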
Related
- Hosted Agent: Run entire agent loops on Anthropic’s managed runtime
- Sandbox: Tool execution sandboxing options
- ManagedAgent Persistence: Database integration patterns
- Session Info: Session metadata and usage tracking

