# Integration Models Overview

PraisonAI recipes can be integrated into your applications through six distinct models. Each model has specific use cases, trade-offs, and implementation patterns.

## Quick Comparison
| Model | Latency | Best For |
|---|---|---|
| Embedded SDK | Lowest | Python apps, notebooks |
| CLI Invocation | Low | Scripts, CI/CD |
| Plugin Mode | Low | IDE/CMS extensions |
| Local HTTP Sidecar | Medium | Microservices, polyglot |
| Remote Managed Runner | Medium | Multi-tenant, cloud |
| Event-Driven | Variable | Async workflows |
## Decision Guide

Choose the integration model that best fits your constraints:
### Model 1: Embedded SDK
**Lowest latency.** Direct Python integration with zero network overhead. Best for Python applications and notebooks.
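A minimal sketch of the embedded pattern: the recipe executes in-process, so invoking it is an ordinary function call. The `run_recipe` name and return shape are illustrative stand-ins, not the real PraisonAI SDK API.

```python
# Embedded SDK pattern: the recipe runs in the same process, so a call is
# just a synchronous function call with no network hop or serialization.
# `run_recipe` is a hypothetical stand-in for the real SDK entry point.

def run_recipe(name: str, inputs: dict) -> dict:
    """Stand-in for an in-process SDK call; executes synchronously."""
    return {"recipe": name, "summary": f"processed {len(inputs)} input(s)"}

result = run_recipe("summarize", {"text": "quarterly report ..."})
print(result["summary"])
```

Because there is no process or network boundary, errors surface as normal Python exceptions and results are plain Python objects.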
### Model 2: CLI Invocation
**Language-agnostic.** Invoke recipes from any language via subprocess. Perfect for scripts and CI/CD pipelines.
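The subprocess pattern can be sketched as follows. The real command name and flags may differ; here `python -c` stands in for the recipe CLI so the example is self-contained, and the JSON-over-stdout convention is an assumption.

```python
import json
import subprocess
import sys

# CLI pattern: spawn the recipe runner as a subprocess and parse its stdout.
# A real invocation might look like ["praisonai", "run", ...] (assumed);
# `python -c` is used here as a stand-in so the sketch runs anywhere.
cmd = [sys.executable, "-c", "import json; print(json.dumps({'status': 'ok'}))"]
proc = subprocess.run(cmd, capture_output=True, text=True, check=True)

# Parse the structured result from stdout; check=True raised on failure.
result = json.loads(proc.stdout)
print(result["status"])
```

The same pattern works from any language with a process-spawning API, which is what makes this model a good fit for CI/CD steps.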
### Model 3: Local HTTP Sidecar
**Polyglot friendly.** HTTP API running locally. Ideal for microservices and non-Python applications.
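A sketch of the sidecar interaction, assuming a localhost HTTP endpoint. The `/recipes/run` path and JSON payload shape are assumptions, and the stub handler below stands in for the real sidecar process so the example is self-contained.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Sidecar pattern: the recipe engine listens on localhost; any language can
# POST to it. This stub handler stands in for the real sidecar process.
class StubSidecar(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps({"recipe": body["recipe"], "status": "done"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubSidecar)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: a plain HTTP POST, portable to any language with an HTTP client.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/recipes/run",
    data=json.dumps({"recipe": "summarize", "input": "hello"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    status = json.loads(resp.read())["status"]
server.shutdown()
print(status)
```

The client half is the part a non-Python service would reimplement; only the HTTP contract crosses the language boundary.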
### Model 4: Remote Managed Runner
**Production-ready.** Centralized, authenticated recipe execution. Built for multi-tenant cloud deployments.
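The key difference from the sidecar is the authenticated, per-tenant request. The endpoint path, bearer-token scheme, and tenant field below are assumptions for illustration; the stub server stands in for the managed service.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Remote runner pattern: a centralized endpoint that authenticates callers.
# The stub checks a bearer token, standing in for the managed service.
class StubRunner(BaseHTTPRequestHandler):
    def do_POST(self):
        authed = self.headers.get("Authorization") == "Bearer test-token"
        reply = json.dumps({"status": "accepted" if authed else "denied"}).encode()
        self.send_response(200 if authed else 401)
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubRunner)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: same HTTP shape as the sidecar, plus auth and tenant context.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/v1/recipes/run",
    data=json.dumps({"recipe": "summarize", "tenant": "acme"}).encode(),
    headers={
        "Authorization": "Bearer test-token",  # hypothetical auth scheme
        "Content-Type": "application/json",
    },
)
with urllib.request.urlopen(req) as resp:
    status = json.loads(resp.read())["status"]
server.shutdown()
print(status)
```

Centralizing auth and tenancy in the runner is what lets many applications and tenants share one deployment safely.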
### Model 5: Event-Driven
**Async at scale.** Queue-based invocation for high-volume batch processing and decoupled architectures.
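The queue-based flow can be sketched with the standard library: producers enqueue jobs and a worker drains them asynchronously. `queue.Queue` stands in for a real message broker, and the job payload shape is an assumption.

```python
import queue
import threading

# Event-driven pattern: producers enqueue recipe jobs; a worker consumes
# them asynchronously. queue.Queue stands in for a real broker here.
jobs: queue.Queue = queue.Queue()
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:  # sentinel: no more jobs, stop the worker
            break
        results.append(f"ran {job['recipe']}")  # stand-in for recipe execution
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# Producer side: enqueue and return immediately; the caller never blocks
# on recipe execution, which is why streaming output is not available.
for name in ("summarize", "classify", "extract"):
    jobs.put({"recipe": name})
jobs.put(None)
t.join()
print(len(results))
```

This decoupling is also why the feature matrix marks this model as non-streaming: the producer has returned long before the recipe runs.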
### Model 6: Plugin Mode
**Native UX.** Embed recipes into IDEs, CMS platforms, and chat applications.
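A sketch of the plugin shape: the host application exposes a registration hook and the plugin wires a recipe into it as a command. The `Host` class, hook names, and `praisonai.summarize` command id are all illustrative assumptions, not a real extension API.

```python
# Plugin pattern: the host (IDE, CMS, chat app) owns the UX and exposes an
# extension point; the plugin registers recipes as native commands.

class Host:
    """Hypothetical stand-in for a host application's extension point."""

    def __init__(self):
        self.commands = {}

    def register_command(self, name, handler):
        self.commands[name] = handler

    def invoke(self, name, payload):
        return self.commands[name](payload)

def summarize_command(payload):
    # Stand-in for calling the recipe; a real plugin would delegate to the
    # SDK or sidecar behind this handler.
    return f"summary of: {payload['text']}"

host = Host()
host.register_command("praisonai.summarize", summarize_command)
output = host.invoke("praisonai.summarize", {"text": "long document"})
print(output)
```

The handler body is where a real plugin would delegate to one of the other models (typically the embedded SDK or the sidecar), so plugin mode composes with them rather than replacing them.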
## Feature Matrix
| Feature | SDK | CLI | Sidecar | Remote | Event | Plugin |
|---|---|---|---|---|---|---|
| Streaming | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ |
| Multi-tenant | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ |
| Auth built-in | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
| Async native | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |
| Process isolation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Hot reload | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ |

