

Integration Models

PraisonAI recipes can be integrated into your applications using six distinct models. Each model has specific use cases, trade-offs, and implementation patterns.

Model Overview

Model                      Latency    Complexity  Best For
1. Embedded SDK            Lowest     Low         Python apps, notebooks
2. CLI Invocation          Low        Low         Scripts, CI/CD
3. Local HTTP Sidecar      Medium     Medium      Microservices, polyglot
4. Remote Managed Runner   Medium     High        Multi-tenant, cloud
5. Event-Driven            Variable   High        Async workflows
6. Plugin Mode             Low        Medium      IDE/CMS extensions

Model 1 — Embedded Python SDK (In-Process)

When to Use

  • Python application (backend, notebook, script)
  • Need lowest latency (no network hop)
  • Single-tenant or trusted environment
  • Direct access to recipe outputs

How It Works

Your application imports the PraisonAI SDK and runs recipes inside the same Python process, so results come back as native Python objects with no network hop.

Pros

  • Zero network latency
  • Direct memory access to results
  • Simplest integration
  • Full Python ecosystem available

Cons

  • Python-only
  • Recipe runs in the same process (shares CPU/memory with your app)
  • No built-in multi-tenancy

Step-by-Step Tutorial

1. Install PraisonAI

pip install praisonai

2. Set API Keys

export OPENAI_API_KEY=your-key

3. Run a Recipe

from praisonai import recipe

# List available recipes
recipes = recipe.list_recipes()
print(f"Found {len(recipes)} recipes")

# Run a recipe
result = recipe.run(
    "my-recipe",
    input={"query": "Summarize this document"},
    options={"timeout_sec": 60}
)

if result.ok:
    print(f"Success: {result.output}")
else:
    print(f"Error: {result.error}")

4. Stream Results

for event in recipe.run_stream("my-recipe", input={"query": "Hello"}):
    print(f"[{event.event_type}] {event.data}")

Troubleshooting

  • ImportError: Ensure pip install praisonai completed
  • Recipe not found: Run praisonai recipe list to see available recipes
  • API key error: Verify OPENAI_API_KEY is set
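
If a run fails, you may want to retry before surfacing the error. The following is only a sketch built from the recipe.run call and result fields shown above; the retry count and backoff are arbitrary choices, not part of the SDK.

import time
from praisonai import recipe

def run_with_retry(name, payload, attempts=3, timeout_sec=60):
    """Retry a recipe run a few times before giving up (illustrative helper)."""
    last_error = None
    for attempt in range(1, attempts + 1):
        result = recipe.run(name, input=payload, options={"timeout_sec": timeout_sec})
        if result.ok:
            return result
        last_error = result.error
        time.sleep(2 ** attempt)  # simple exponential backoff between attempts
    raise RuntimeError(f"Recipe failed after {attempts} attempts: {last_error}")

print(run_with_retry("my-recipe", {"query": "Summarize this document"}).output)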

Model 2 — CLI Invocation (Subprocess)

When to Use

  • Shell scripts, CI/CD pipelines
  • Language-agnostic invocation
  • Quick prototyping
  • Batch processing

How It Works

Your application spawns the praisonai CLI as a subprocess and parses the JSON it writes to stdout.

Pros

  • Works from any language
  • Simple JSON output parsing
  • No SDK dependency in calling app
  • Easy to debug

Cons

  • Process spawn overhead
  • Stdout/stderr parsing required
  • No streaming (unless using --stream)

Step-by-Step Tutorial

1. Verify CLI Installation

praisonai --help

2. List Recipes

praisonai recipe list --json

3. Run Recipe with JSON Output

praisonai recipe run my-recipe \
  --input '{"query": "Hello"}' \
  --json

4. Parse Output in Your App

import subprocess
import json

result = subprocess.run(
    ["praisonai", "recipe", "run", "my-recipe",
     "--input", '{"query": "Hello"}', "--json"],
    capture_output=True,
    text=True
)

data = json.loads(result.stdout)
print(f"Run ID: {data['run_id']}")
print(f"Output: {data['output']}")

Troubleshooting

  • Command not found: Add praisonai to PATH or use full path
  • JSON parse error: Ensure --json flag is used
  • Exit code non-zero: Check stderr for error details
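
The troubleshooting checks above can also be done in code. Here is a sketch extending the step 4 example to verify the exit code and surface stderr before parsing stdout (same CLI arguments as above):

import json
import subprocess

proc = subprocess.run(
    ["praisonai", "recipe", "run", "my-recipe",
     "--input", '{"query": "Hello"}', "--json"],
    capture_output=True,
    text=True,
)

# Non-zero exit code: report stderr instead of trying to parse stdout.
if proc.returncode != 0:
    raise RuntimeError(f"praisonai exited with {proc.returncode}: {proc.stderr.strip()}")

try:
    data = json.loads(proc.stdout)
except json.JSONDecodeError as exc:
    # Usually means the --json flag was missing or extra text reached stdout.
    raise RuntimeError(f"Could not parse CLI output as JSON: {exc}") from exc

print(data["output"])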

Model 3 — Local HTTP “Recipe Runner” Sidecar

When to Use

  • Microservices architecture
  • Non-Python services need recipe access
  • Want HTTP API without cloud deployment
  • Development/staging environments

How It Works

A PraisonAI recipe server runs as a local sidecar process (started with praisonai serve recipe) and exposes recipes over HTTP on localhost, so any language that can make HTTP requests can list and run them.

Pros

  • Language-agnostic (HTTP)
  • Supports streaming (SSE)
  • Process isolation
  • Easy to scale horizontally

Cons

  • Network latency (localhost)
  • Need to manage server lifecycle
  • Port management

Step-by-Step Tutorial

1. Install Serve Dependencies

pip install praisonai[serve]

2. Start the Server

praisonai serve recipe --port 8765

3. Check Health

curl http://localhost:8765/health

4. List Recipes via HTTP

curl http://localhost:8765/v1/recipes

5. Run Recipe via HTTP

curl -X POST http://localhost:8765/v1/recipes/run \
  -H "Content-Type: application/json" \
  -d '{"recipe": "my-recipe", "input": {"query": "Hello"}}'

6. Use Endpoints CLI

# Health check
praisonai endpoints health

# List endpoints
praisonai endpoints list --format json

# Invoke endpoint
praisonai endpoints invoke my-recipe \
  --input-json '{"query": "Hello"}' \
  --json
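
To call the sidecar from application code instead of curl or the CLI, here is a minimal Python client sketch against the endpoints shown above (assumes the third-party requests package and the default port 8765):

import requests

BASE_URL = "http://localhost:8765"

# Health check before sending work (same endpoint as step 3).
requests.get(f"{BASE_URL}/health", timeout=5).raise_for_status()

# Run a recipe via the endpoint used in step 5.
response = requests.post(
    f"{BASE_URL}/v1/recipes/run",
    json={"recipe": "my-recipe", "input": {"query": "Hello"}},
    timeout=60,
)
response.raise_for_status()
print(response.json())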

Troubleshooting

  • Connection refused: Ensure server is running
  • Port in use: Use --port to specify different port
  • Missing deps: Run pip install praisonai[serve]

Model 4 — Remote Managed Runner (Self-Hosted or Cloud)

When to Use

  • Production multi-tenant deployments
  • Need authentication/authorization
  • Centralized recipe management
  • Cloud-native architecture

How It Works

A PraisonAI recipe server runs on shared or cloud infrastructure with authentication enabled; clients invoke recipes over the network using an API key.

Pros

  • Centralized management
  • Built-in auth/audit
  • Scalable infrastructure
  • Multi-tenant support

Cons

  • Network latency
  • Infrastructure complexity
  • Requires auth setup

Step-by-Step Tutorial

1. Start Server with Auth

# Set API key
export PRAISONAI_API_KEY=your-secret-key

# Start with auth enabled
praisonai serve recipe \
  --host 0.0.0.0 \
  --port 8765 \
  --auth api-key

2. Configure Client

export PRAISONAI_ENDPOINTS_URL=https://api.example.com
export PRAISONAI_ENDPOINTS_API_KEY=your-secret-key

3. Invoke with Auth

praisonai endpoints invoke my-recipe \
  --input-json '{"query": "Hello"}' \
  --api-key your-secret-key \
  --url https://api.example.com

4. HTTP with Auth Header

curl -X POST https://api.example.com/v1/recipes/run \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-secret-key" \
  -d '{"recipe": "my-recipe", "input": {"query": "Hello"}}'

Troubleshooting

  • 401 Unauthorized: Check API key header
  • Connection timeout: Verify network/firewall
  • TLS errors: Ensure valid certificates

Model 5 — Event-Driven Invocation (Queue/Stream)

When to Use

  • Asynchronous processing
  • High-volume batch jobs
  • Decoupled architectures
  • Long-running workflows

How It Works

Producers publish recipe jobs to a message queue or stream (Redis pub/sub in the tutorial below); a worker consumes each job, runs the recipe, and publishes the result to a results channel.

Pros

  • Fully async
  • Handles backpressure
  • Retry/dead-letter support
  • Scales independently

Cons

  • Infrastructure complexity
  • Eventual consistency
  • Debugging harder

Step-by-Step Tutorial

1. Define Worker Script

# worker.py
import redis
import json
from praisonai import recipe

r = redis.Redis()
pubsub = r.pubsub()
pubsub.subscribe('recipe-jobs')

for message in pubsub.listen():
    if message['type'] == 'message':
        job = json.loads(message['data'])
        result = recipe.run(
            job['recipe'],
            input=job['input']
        )
        r.publish('recipe-results', json.dumps({
            'job_id': job['job_id'],
            'result': result.to_dict()
        }))

2. Publish Job

import redis
import json
import uuid

r = redis.Redis()
job_id = str(uuid.uuid4())

r.publish('recipe-jobs', json.dumps({
    'job_id': job_id,
    'recipe': 'my-recipe',
    'input': {'query': 'Process this'}
}))

3. Consume Results

import redis
import json

r = redis.Redis()
pubsub = r.pubsub()
pubsub.subscribe('recipe-results')

for message in pubsub.listen():
    if message['type'] == 'message':
        result = json.loads(message['data'])
        print(f"Job {result['job_id']}: {result['result']}")

Model 6 — Plugin Mode (CMS/IDE/Chat Extensions)

When to Use

  • IDE extensions (VS Code, JetBrains)
  • CMS plugins (WordPress, Strapi)
  • Chat integrations (Slack, Discord)
  • Browser extensions

How It Works

The plugin runs inside the host application (IDE, CMS, or chat client) and forwards recipe requests to a PraisonAI recipe runner over HTTP.

Pros

  • Native UX integration
  • Leverages host app features
  • User-friendly
  • Context-aware

Cons

  • Platform-specific
  • Sandboxing limitations
  • Update management

Step-by-Step Tutorial

1. Create Plugin Manifest

{
  "name": "praisonai-plugin",
  "version": "1.0.0",
  "recipes": ["code-review", "doc-generator"],
  "endpoints": {
    "base_url": "http://localhost:8765"
  }
}

2. Implement Plugin Handler

// plugin.js
// `config` is the parsed plugin manifest from step 1 (provides endpoints.base_url).
async function invokeRecipe(recipeName, input) {
  const response = await fetch(
    `${config.endpoints.base_url}/v1/recipes/run`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ recipe: recipeName, input })
    }
  );
  return response.json();
}

// Register a command with the host app (registerCommand, getSelectedText, and showOutput are host-provided APIs).
registerCommand('praisonai.runRecipe', async () => {
  const result = await invokeRecipe('code-review', {
    code: getSelectedText()
  });
  showOutput(result.output);
});

Decision Guide

Use these guidelines to choose the right model:

  • Python app or notebook that needs the lowest latency: Embedded SDK
  • Shell scripts, CI/CD, or language-agnostic batch runs: CLI Invocation
  • Non-Python microservices that need a local HTTP API without cloud deployment: Local HTTP Sidecar
  • Multi-tenant production with authentication and centralized management: Remote Managed Runner
  • Asynchronous, decoupled, or long-running workflows: Event-Driven
  • IDE, CMS, or chat integrations: Plugin Mode

Next Steps