
Deploy PraisonAI agents for always-on operation with automatic process supervision.

Quick Start

1. Choose Deployment Method

Choose between Docker Compose (recommended) and System Services:
from praisonaiagents import Agent

# Your agent for 24/7 deployment
agent = Agent(
    name="Production Agent",
    instructions="Handle production tasks continuously",
)

agent.start("Deploy for production")

2. Configure Environment

Set up environment variables for production:
# Required API key
export OPENAI_API_KEY="sk-..."

# Optional performance settings
export PRAISONAI_MAX_WORKERS=4
export PRAISONAI_LOG_LEVEL=INFO
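The variables above can be collected at startup with a small helper that fails fast when the required key is missing. `load_settings` is an illustrative sketch, not part of the PraisonAI API:

```python
import os

def load_settings(env=os.environ):
    """Collect the production settings above, failing fast on a missing key."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY must be set before deployment")
    return {
        "api_key": key,
        # Optional tuning knobs fall back to sensible defaults
        "max_workers": int(env.get("PRAISONAI_MAX_WORKERS", "4")),
        "log_level": env.get("PRAISONAI_LOG_LEVEL", "INFO"),
    }
```

Failing at startup is cheaper than discovering a missing key mid-request in a 24/7 deployment.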

Deployment Options

Docker Compose

Recommended for most users. Includes automatic restarts and health checks.

System Services

Native OS integration with systemd, launchd, or Windows Service.

Docker Compose Production

Quick Setup (5 minutes)

1. Install Docker

# Linux/Ubuntu
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# macOS with Homebrew
brew install --cask docker

# Windows: Download from docker.com

2. Create deployment directory

mkdir praisonai-deploy && cd praisonai-deploy

3. Create environment file

cat > .env << EOF
# Required: Your OpenAI API key
OPENAI_API_KEY=sk-proj-example...

# Optional: Additional providers
# ANTHROPIC_API_KEY=ant-api-03-...
# GOOGLE_API_KEY=AIza...

# Performance settings
PRAISONAI_MAX_WORKERS=4
PRAISONAI_LOG_LEVEL=INFO
EOF
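A `.env` file like the one above is just plain `KEY=VALUE` lines. docker-compose has its own parser, but a minimal reader (purely illustrative) shows the format's rules:

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        result[key.strip()] = value.strip()
    return result
```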

4. Create docker-compose.yml

version: '3.8'

services:
  # PraisonAI Claw Dashboard (Recommended)
  claw:
    image: mervinpraison/praisonai:claw
    container_name: praisonai-claw
    ports:
      - "8082:8082"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
      - GOOGLE_API_KEY=${GOOGLE_API_KEY:-}
    volumes:
      - claw_data:/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8082/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    networks:
      - praisonai

  # PraisonAI API Service (Optional)
  api:
    image: mervinpraison/praisonai:api
    container_name: praisonai-api
    ports:
      - "8080:8080"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - PRAISONAI_MAX_WORKERS=${PRAISONAI_MAX_WORKERS}
    volumes:
      - praisonai_data:/root/.praison
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - praisonai

volumes:
  claw_data:
    driver: local
  praisonai_data:
    driver: local

networks:
  praisonai:
    driver: bridge

5. Deploy and run 24/7

# Start services
docker-compose up -d

# Check status
docker-compose ps

# View logs
docker-compose logs -f claw
Success! The PraisonAI Claw dashboard is now running at http://localhost:8082 (and the optional API service at http://localhost:8080).

Best Practices

Never hardcode API keys in docker-compose.yml. Always use .env files and ensure they’re in .gitignore.
Configure health checks for automatic restarts. The /health endpoint returns service status.
Monitor logs and service status regularly. Use docker-compose logs -f or journalctl -u praisonai -f.
Schedule weekly updates with crontab to pull latest images and restart services automatically.
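The weekly-update practice above can be wired up with a crontab entry along these lines (the deployment path is an assumption; point it at the directory containing your docker-compose.yml):

```
# Pull new images and restart every Sunday at 03:00
0 3 * * 0  cd /home/youruser/praisonai-deploy && docker-compose pull && docker-compose up -d
```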

System Services

For native OS integration without Docker:

Linux (systemd)

1. Install PraisonAI

# Create dedicated user
sudo useradd -r -s /bin/false praisonai
sudo mkdir -p /opt/praisonai /var/log/praisonai
sudo chown praisonai:praisonai /opt/praisonai /var/log/praisonai

# Install in virtual environment
sudo -u praisonai python3 -m venv /opt/praisonai/venv
sudo -u praisonai /opt/praisonai/venv/bin/pip install "praisonai[claw]"

2. Create service configuration

# Environment file
sudo tee /opt/praisonai/.env << EOF
OPENAI_API_KEY=sk-proj-example...
PRAISONAI_LOG_LEVEL=INFO
PRAISONAI_HOST=127.0.0.1
PRAISONAI_PORT=8082
EOF

sudo chown praisonai:praisonai /opt/praisonai/.env
sudo chmod 600 /opt/praisonai/.env

3. Create systemd service

sudo tee /etc/systemd/system/praisonai.service << EOF
[Unit]
Description=PraisonAI Claw Dashboard
After=network.target
Requires=network.target

[Service]
Type=exec
User=praisonai
Group=praisonai
WorkingDirectory=/opt/praisonai
EnvironmentFile=/opt/praisonai/.env
ExecStart=/opt/praisonai/venv/bin/praisonai claw --host \${PRAISONAI_HOST} --port \${PRAISONAI_PORT}
ExecReload=/bin/kill -HUP \$MAINPID
Restart=always
RestartSec=10
StartLimitIntervalSec=0

# Security settings
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/praisonai /var/log/praisonai

# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=praisonai

[Install]
WantedBy=multi-user.target
EOF

4. Enable and start service

# Reload systemd and enable service
sudo systemctl daemon-reload
sudo systemctl enable praisonai
sudo systemctl start praisonai

# Check status
sudo systemctl status praisonai

# View logs
sudo journalctl -u praisonai -f

macOS (launchd)

1. Install PraisonAI

# Install via pip
pip3 install "praisonai[claw]"

# Create directories
mkdir -p ~/Library/LaunchAgents
mkdir -p ~/.praisonai

2. Create environment file

cat > ~/.praisonai/env << EOF
export OPENAI_API_KEY="sk-proj-example..."
export PRAISONAI_LOG_LEVEL="INFO"
EOF

chmod 600 ~/.praisonai/env

3. Create launch agent

<?xml version="1.0" encoding="UTF-8"?>
<!-- ~/Library/LaunchAgents/ai.praison.claw.plist -->
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>ai.praison.claw</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>-c</string>
        <string>source ~/.praisonai/env && praisonai claw --host 127.0.0.1 --port 8082</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>WorkingDirectory</key>
    <!-- launchd does not expand $(whoami); substitute your actual username -->
    <string>/Users/YOUR_USERNAME/.praisonai</string>
    <key>StandardOutPath</key>
    <string>/Users/YOUR_USERNAME/.praisonai/stdout.log</string>
    <key>StandardErrorPath</key>
    <string>/Users/YOUR_USERNAME/.praisonai/stderr.log</string>
</dict>
</plist>
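Because launchd performs no shell expansion inside a plist, the absolute home path must be literal. One way to get it right is to generate the file with Python's `plistlib` (a convenience sketch, not an official PraisonAI tool):

```python
import plistlib
from pathlib import Path

home = Path.home()
agent = {
    "Label": "ai.praison.claw",
    "ProgramArguments": [
        "/bin/bash", "-c",
        "source ~/.praisonai/env && praisonai claw --host 127.0.0.1 --port 8082",
    ],
    "RunAtLoad": True,
    "KeepAlive": True,
    # Absolute paths resolved at generation time, so no shell expansion is needed
    "WorkingDirectory": str(home / ".praisonai"),
    "StandardOutPath": str(home / ".praisonai" / "stdout.log"),
    "StandardErrorPath": str(home / ".praisonai" / "stderr.log"),
}

plist_path = home / "Library" / "LaunchAgents" / "ai.praison.claw.plist"
plist_path.parent.mkdir(parents=True, exist_ok=True)
plist_path.write_bytes(plistlib.dumps(agent))
```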

4. Load and start service

# Load the service
launchctl load ~/Library/LaunchAgents/ai.praison.claw.plist

# Start the service
launchctl start ai.praison.claw

# Check status
launchctl list | grep ai.praison.claw

# View logs
tail -f ~/.praisonai/stdout.log

Windows Service

1. Install dependencies

# Install Python and PraisonAI
pip install "praisonai[claw]" pywin32

# Create directories
mkdir C:\PraisonAI
mkdir C:\PraisonAI\logs

2. Create service script

# C:\PraisonAI\praisonai_service.py
import win32serviceutil
import win32service
import win32event
import subprocess
import os
import sys

class PraisonAIService(win32serviceutil.ServiceFramework):
    _svc_name_ = "PraisonAI"
    _svc_display_name_ = "PraisonAI Claw Dashboard"
    _svc_description_ = "PraisonAI AI agent dashboard service"
    
    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
        self.process = None
    
    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        if self.process:
            self.process.terminate()
            try:
                self.process.wait(timeout=30)  # give the child time to exit cleanly
            except subprocess.TimeoutExpired:
                self.process.kill()
        win32event.SetEvent(self.hWaitStop)
    
    def SvcDoRun(self):
        # Set environment variables (avoid hardcoding real secrets here;
        # load them from a protected config file instead)
        os.environ['OPENAI_API_KEY'] = 'sk-proj-example...'
        os.environ['PRAISONAI_LOG_LEVEL'] = 'INFO'
        
        # Start PraisonAI
        self.process = subprocess.Popen([
            sys.executable, '-m', 'praisonai', 'claw',
            '--host', '127.0.0.1',
            '--port', '8082'
        ], cwd='C:\\PraisonAI')
        
        # Wait for stop signal
        win32event.WaitForSingleObject(self.hWaitStop, win32event.INFINITE)

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(PraisonAIService)

3. Install and start service

# Install service (run as Administrator)
python C:\PraisonAI\praisonai_service.py install

# Start service
python C:\PraisonAI\praisonai_service.py start

# Check status
sc query PraisonAI

# Set to start automatically (note: sc requires the space after "start=")
sc config PraisonAI start= auto

Verification & Management

Health Check

# Check if service is running
curl http://localhost:8082/health

# Expected response:
# {"status": "healthy", "version": "x.x.x"}
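For deployments outside Docker, the same endpoint can be polled from a short watchdog script. `parse_health` and `check` below are local helpers that assume the JSON shape shown above:

```python
import json
import urllib.request

def parse_health(body: bytes) -> bool:
    """True when the /health payload reports status == "healthy"."""
    try:
        return json.loads(body).get("status") == "healthy"
    except (ValueError, AttributeError):
        return False

def check(url="http://localhost:8082/health", timeout=5.0) -> bool:
    """Poll the health endpoint; returns False on any network error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200 and parse_health(resp.read())
    except OSError:
        return False
```

Calling `check()` from cron or a monitoring agent gives an alerting hook that works even when Docker's built-in healthcheck is not available (e.g. the systemd or launchd setups below).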

Service Management Commands

# Status
docker-compose ps

# Stop
docker-compose stop

# Restart
docker-compose restart

# Update
docker-compose pull && docker-compose up -d

# Logs
docker-compose logs -f


Bot Integration

Connect to Slack, Discord, Telegram, and WhatsApp

Advanced Deployment

Scaling, monitoring, and advanced configurations
