Integration Models
PraisonAI recipes can be integrated into your applications using six distinct models. Each model has specific use cases, trade-offs, and implementation patterns.

Model Overview
| Model | Latency | Complexity | Best For |
|---|---|---|---|
| 1. Embedded SDK | Lowest | Low | Python apps, notebooks |
| 2. CLI Invocation | Low | Low | Scripts, CI/CD |
| 3. Local HTTP Sidecar | Medium | Medium | Microservices, polyglot |
| 4. Remote Managed Runner | Medium | High | Multi-tenant, cloud |
| 5. Event-Driven | Variable | High | Async workflows |
| 6. Plugin Mode | Low | Medium | IDE/CMS extensions |
Model 1 — Embedded Python SDK (In-Process)
When to Use
- Python application (backend, notebook, script)
- Need lowest latency (no network hop)
- Single-tenant or trusted environment
- Direct access to recipe outputs
How It Works
Pros
- Zero network latency
- Direct memory access to results
- Simplest integration
- Full Python ecosystem available
Cons
- Python-only
- Recipe runs in same process (resource sharing)
- No built-in multi-tenancy
Step-by-Step Tutorial
1. Install PraisonAI
2. Set API Keys
3. Run a Recipe
4. Stream Results
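The four steps above condense into a short sketch. The `run_recipe` entry point, its signature, and the event shape are assumptions for illustration (check the SDK reference for the real API); a stub stands in for the import so the control flow is runnable as-is:

```python
import os

# In a real app you would import the SDK here, e.g.:
#   from praisonai import run_recipe   # assumed name -- check the SDK docs
# This stub stands in for it so the sketch runs without the package.
def run_recipe(name, inputs=None, stream=False):
    """Stub for the SDK's recipe runner; yields events the way the real one might."""
    yield {"type": "chunk", "content": f"partial output for {name}"}
    yield {"type": "result", "content": "final output"}

# Step 2: the SDK reads provider keys from the environment.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")

# Steps 3-4: run a recipe and stream events as they arrive, in-process.
for event in run_recipe("summarize-url", inputs={"url": "https://example.com"}, stream=True):
    print(event["type"], event["content"])
```

Because everything runs in your process, results are plain Python objects with no serialization hop, which is what makes this the lowest-latency model.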
Troubleshooting
- ImportError: Ensure `pip install praisonai` completed
- Recipe not found: Run `praisonai recipe list` to see available recipes
- API key error: Verify `OPENAI_API_KEY` is set
Model 2 — CLI Invocation (Subprocess)
When to Use
- Shell scripts, CI/CD pipelines
- Language-agnostic invocation
- Quick prototyping
- Batch processing
How It Works
Pros
- Works from any language
- Simple JSON output parsing
- No SDK dependency in calling app
- Easy to debug
Cons
- Process spawn overhead
- Stdout/stderr parsing required
- No streaming (unless using `--stream`)
Step-by-Step Tutorial
1. Verify CLI Installation
2. List Recipes
3. Run Recipe with JSON Output
4. Parse Output in Your App
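Put together, the calling side can look like this in Python (any language with a subprocess API follows the same pattern). The `recipe run` subcommand and result schema are assumptions; only `praisonai recipe list` and the `--json` flag appear elsewhere on this page, so confirm the exact invocation against the CLI Reference:

```python
import json
import subprocess

def run_recipe_cli(recipe, extra_args=None):
    """Invoke the CLI in a subprocess and parse its JSON stdout.

    The "recipe run" subcommand below is illustrative, not verified.
    """
    cmd = ["praisonai", "recipe", "run", recipe, "--json"] + (extra_args or [])
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        # Non-zero exit code: surface stderr for debugging.
        raise RuntimeError(f"recipe failed: {proc.stderr.strip()}")
    return parse_output(proc.stdout)

def parse_output(raw):
    """Step 4: parse the JSON document the CLI prints on stdout."""
    return json.loads(raw)
```

Keeping the subprocess call and the output parsing in separate functions makes the parsing testable without spawning the CLI at all.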
Troubleshooting
- Command not found: Add `praisonai` to PATH or use the full path
- JSON parse error: Ensure the `--json` flag is used
- Exit code non-zero: Check stderr for error details
Model 3 — Local HTTP “Recipe Runner” Sidecar
When to Use
- Microservices architecture
- Non-Python services need recipe access
- Want HTTP API without cloud deployment
- Development/staging environments
How It Works
Pros
- Language-agnostic (HTTP)
- Supports streaming (SSE)
- Process isolation
- Easy to scale horizontally
Cons
- Network latency (localhost)
- Need to manage server lifecycle
- Port management
Step-by-Step Tutorial
1. Install Serve Dependencies
2. Start the Server
3. Check Health
4. List Recipes via HTTP
5. Run Recipe via HTTP
6. Use Endpoints CLI
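A small client for the sidecar, using only the standard library. The port and endpoint paths (`/health`, `/recipes`, `/recipes/<name>/run`) are assumptions for illustration; substitute whatever routes the server actually exposes:

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed default; match whatever --port you chose

def build_request(path, payload=None):
    """Build a GET request, or a JSON POST when a payload is given."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        BASE + path,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST" if data is not None else "GET",
    )

def call(path, payload=None):
    """Send the request to the sidecar and decode the JSON response."""
    with urllib.request.urlopen(build_request(path, payload)) as resp:
        return json.loads(resp.read())

# With the server running (steps 3-5):
#   call("/health")
#   call("/recipes")
#   call("/recipes/summarize/run", {"inputs": {"url": "https://example.com"}})
```

Because the contract is plain HTTP + JSON, the same three calls translate directly to curl, fetch, or any other language's HTTP client.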
Troubleshooting
- Connection refused: Ensure the server is running
- Port in use: Use `--port` to specify a different port
- Missing deps: Run `pip install praisonai[serve]`
Model 4 — Remote Managed Runner (Self-Hosted or Cloud)
When to Use
- Production multi-tenant deployments
- Need authentication/authorization
- Centralized recipe management
- Cloud-native architecture
How It Works
Pros
- Centralized management
- Built-in auth/audit
- Scalable infrastructure
- Multi-tenant support
Cons
- Network latency
- Infrastructure complexity
- Requires auth setup
Step-by-Step Tutorial
1. Start Server with Auth
2. Configure Client
3. Invoke with Auth
4. HTTP with Auth Header
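The client side differs from Model 3 mainly in the auth header. A `Bearer` token in `Authorization` is shown here as one common convention; this page only says "API key header", so the actual header name depends on how your server is configured:

```python
import json
import urllib.request

def build_authed_request(base, path, payload, api_key):
    """JSON POST with the API key attached; the header name is an assumption."""
    return urllib.request.Request(
        base + path,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Usage (hypothetical host and env var name):
#   import os
#   req = build_authed_request(
#       "https://runner.example.com", "/recipes/summarize/run",
#       {"inputs": {"url": "https://example.com"}},
#       os.environ["PRAISONAI_API_KEY"],
#   )
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read()))
```

A 401 response usually means the header is missing or the key is wrong; surfacing that distinctly in your client saves debugging time later.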
Troubleshooting
- 401 Unauthorized: Check API key header
- Connection timeout: Verify network/firewall
- TLS errors: Ensure valid certificates
Model 5 — Event-Driven Invocation (Queue/Stream)
When to Use
- Asynchronous processing
- High-volume batch jobs
- Decoupled architectures
- Long-running workflows
How It Works
Pros
- Fully async
- Handles backpressure
- Retry/dead-letter support
- Scales independently
Cons
- Infrastructure complexity
- Eventual consistency
- Debugging harder
Step-by-Step Tutorial
1. Define Worker Script
2. Publish Job
3. Consume Results
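The three steps map onto a classic publisher/worker/consumer loop. This sketch uses an in-process `queue.Queue` in place of a real broker (SQS, Kafka, Redis Streams, etc.) and stubs the recipe call, so only the pattern is shown:

```python
import queue
import threading

jobs = queue.Queue()     # stands in for your job queue/topic
results = queue.Queue()  # stands in for your results queue/topic

def run_recipe(name, inputs):
    """Stub for the actual recipe invocation (via SDK, CLI, or HTTP)."""
    return {"recipe": name, "output": f"processed {inputs}"}

def worker():
    """Step 1: consume jobs, run the recipe, publish results."""
    while True:
        job = jobs.get()
        if job is None:  # sentinel: shut down cleanly
            break
        try:
            results.put(run_recipe(job["recipe"], job["inputs"]))
        except Exception as exc:
            # Production systems retry with backoff, then dead-letter.
            results.put({"error": str(exc), "job": job})
        finally:
            jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# Step 2: publish a job.
jobs.put({"recipe": "summarize", "inputs": {"url": "https://example.com"}})

# Step 3: consume the result (in a real system, a separate consumer service).
print(results.get(timeout=5))
jobs.put(None)  # stop the worker
```

Because the worker only ever sees messages, it can be scaled out, retried, or replaced independently of the publishers, which is the point of this model.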
Model 6 — Plugin Mode (CMS/IDE/Chat Extensions)
When to Use
- IDE extensions (VS Code, JetBrains)
- CMS plugins (WordPress, Strapi)
- Chat integrations (Slack, Discord)
- Browser extensions
How It Works
Pros
- Native UX integration
- Leverages host app features
- User-friendly
- Context-aware
Cons
- Platform-specific
- Sandboxing limitations
- Update management
Step-by-Step Tutorial
1. Create Plugin Manifest
2. Implement Plugin Handler
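Concretely, a plugin boils down to two artifacts: a manifest the host platform reads, and a handler that maps host commands onto recipe runs. Both shapes below are illustrative (no real platform's schema), and the recipe call is stubbed; in practice the handler delegates to Model 2 or Model 3 under the hood:

```python
import json

# Step 1: a manifest describing the plugin to the host platform.
# The schema is illustrative, not any real platform's format.
MANIFEST = {
    "name": "praisonai-recipes",
    "version": "0.1.0",
    "commands": [{"id": "summarize", "title": "Summarize selection"}],
}

def run_recipe(name, inputs):
    """Stub: a real plugin would delegate to the CLI (Model 2) or sidecar (Model 3)."""
    return {"recipe": name, "output": f"summary of: {inputs['text'][:40]}"}

def handle_command(command_id, host_context):
    """Step 2: map a host-app command onto a recipe invocation."""
    if command_id == "summarize":
        return run_recipe("summarize", {"text": host_context.get("selection", "")})
    raise ValueError(f"unknown command: {command_id}")

# The host (IDE, CMS, chat platform) calls the handler with whatever
# context it exposes, e.g. the user's current selection:
print(json.dumps(handle_command("summarize", {"selection": "Long article text..."})))
```

Keeping recipe invocation behind `run_recipe` means the same handler works whether the plugin shells out to the CLI or talks to a local sidecar.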
Decision Guide
Use these questions to choose the right model:

- Python app that needs the lowest latency? Embedded SDK (Model 1)
- Shell scripts or CI/CD pipelines? CLI Invocation (Model 2)
- Non-Python services that need a local HTTP API? Local HTTP Sidecar (Model 3)
- Multi-tenant production with auth and central management? Remote Managed Runner (Model 4)
- Asynchronous, high-volume, or decoupled workflows? Event-Driven (Model 5)
- IDE, CMS, or chat surface? Plugin Mode (Model 6)

Next Steps
- Explore Use Cases for real-world patterns
- Review Personas for role-specific guidance
- Check the CLI Reference for command details

