Package and run agents as Docker containers for portable, reproducible deployments.

CLI

pip install praisonaiagents
export OPENAI_API_KEY="your-key"

python -m praisonai --init "helpful assistant"

Python

app.py:
from praisonaiagents import Agent

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    llm="gpt-4o-mini"
)
# Bind to 0.0.0.0 so the API is reachable from outside the container
agent.launch(path="/ask", port=8080, host="0.0.0.0")
Expected Output:
🚀 Agent 'Assistant' available at http://0.0.0.0:8080
✅ FastAPI server started at http://0.0.0.0:8080
📚 API documentation available at http://0.0.0.0:8080/docs
🔌 Available endpoints: /ask
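
You can also test the endpoint from Python; a minimal sketch, assuming the /ask route accepts a JSON body with a query field (as in the Verify step below):
client.py (optional test client):
import requests

# POST a question to the running agent; assumes {"query": ...} is the accepted body shape
response = requests.post(
    "http://localhost:8080/ask",
    json={"query": "Hello"},
    timeout=60,
)
print(response.status_code, response.json())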

Dockerfile

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

EXPOSE 8080

CMD ["python", "app.py"]
requirements.txt:
praisonaiagents[api]
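
A .dockerignore keeps secrets and build artifacts out of the build context; a suggested starting point (adjust to your project layout):
.dockerignore:
__pycache__/
*.pyc
.env
.git
.venv/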

Build and Run

# Build image
docker build -t praisonai-agent .

# Run container
docker run -p 8080:8080 -e OPENAI_API_KEY=$OPENAI_API_KEY praisonai-agent
Expected Output:
🚀 Agent 'Assistant' available at http://0.0.0.0:8080
✅ FastAPI server started at http://0.0.0.0:8080
📚 API documentation available at http://0.0.0.0:8080/docs
🔌 Available endpoints: /ask
Verify:
curl http://localhost:8080/health
curl -X POST http://localhost:8080/ask \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello"}'

agents.yaml

agents.yaml:
framework: praisonai
topic: helpful assistant
roles:
  assistant:
    role: Assistant
    goal: Help users with their questions
    backstory: You are a helpful assistant
    tasks:
      help_task:
        description: Answer user questions
        expected_output: Helpful response
app.py (YAML-based):
import subprocess

# Hand off to the PraisonAI CLI, which serves the agents defined in agents.yaml
subprocess.run(["python", "-m", "praisonai", "--serve", "--port", "8080", "--host", "0.0.0.0"])
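
For the YAML-based variant, the image also needs agents.yaml and the praisonai CLI package; a sketch of the adjusted Dockerfile and requirements, assuming praisonai is installed alongside praisonaiagents[api]:
Dockerfile (YAML-based):
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the YAML agent definition together with the entrypoint script
COPY agents.yaml app.py ./

EXPOSE 8080

CMD ["python", "app.py"]
requirements.txt (YAML-based):
praisonai
praisonaiagents[api]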

Docker Compose

docker-compose.yml:
version: '3.8'
services:
  agent:
    build: .
    ports:
      - "8080:8080"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    restart: unless-stopped
# Start in detached mode
docker-compose up -d
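
Compose can also track container health; a sketch that probes the /health endpoint with Python (no curl required in the slim image), assuming the endpoint shown in the Verify step:
docker-compose.yml (with health check):
version: '3.8'
services:
  agent:
    build: .
    ports:
      - "8080:8080"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8080/health')"]
      interval: 30s
      timeout: 10s
      retries: 3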

Multi-Agent Docker

app.py:
from praisonaiagents import Agent, Agents

researcher = Agent(name="Researcher", instructions="Research topics", llm="gpt-4o-mini")
writer = Agent(name="Writer", instructions="Write content", llm="gpt-4o-mini")

agents = Agents(agents=[researcher, writer])
agents.launch(path="/content", port=8080, host="0.0.0.0")
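
Build and run it with the same Dockerfile, then verify. The request body below is an assumption based on the single-agent /ask example; confirm the exact schema at http://localhost:8080/docs:
curl -X POST http://localhost:8080/content \
  -H "Content-Type: application/json" \
  -d '{"query": "Write a short article about AI"}'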

MCP Server Docker

app.py:
from praisonaiagents import Agent

agent = Agent(instructions="Create tweets", llm="gpt-4o-mini")
# Bind to 0.0.0.0 so the MCP server is reachable from outside the container
agent.launch(port=8080, protocol="mcp", host="0.0.0.0")
requirements.txt:
praisonaiagents[mcp]

Environment Variables

Variable            Required  Description
OPENAI_API_KEY      Yes*      OpenAI API key
ANTHROPIC_API_KEY   No        Anthropic API key
GROQ_API_KEY        No        Groq API key

*Required if using OpenAI models.
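
If you keep keys in a local .env file, pass them at run time with --env-file instead of individual -e flags:
docker run --env-file .env -p 8080:8080 praisonai-agent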

Health Check

Add health check to Dockerfile:
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
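
Note: python:3.11-slim does not include curl, so either install it in the image or use a Python-based probe; a sketch of the latter:
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8080/health')" || exit 1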

Troubleshooting

Issue            Fix
Port in use      Change the host port: -p 9000:8080
Missing API key  Pass it via -e OPENAI_API_KEY=...
Container exits  Check logs: docker logs <container>
Build fails      Ensure requirements.txt is correct

Push to Registry

Push the image to Docker Hub or a private registry:
# Tag image
docker tag praisonai-agent:latest your-registry/praisonai-agent:latest

# Push
docker push your-registry/praisonai-agent:latest