The serve command starts an HTTP API server for agents defined in a YAML file, enabling programmatic access to your agents via REST endpoints.

Quick Start

praisonai serve agents.yaml

Usage

Basic Server

# Start server with default settings (port 8005, host 127.0.0.1)
praisonai serve agents.yaml
Expected Output:
📄 Loading agents from: agents.yaml
  ✓ Loaded: Researcher
  ✓ Loaded: Writer
  ✓ Loaded: Editor

🚀 Starting PraisonAI API server...
   Host: 127.0.0.1
   Port: 8005
   Agents: 3
🚀 Multi-Agent HTTP API available at http://127.0.0.1:8005/agents
📊 Available agents for this endpoint (3): Researcher, Writer, Editor
🔗 Per-agent endpoints: /agents/researcher, /agents/writer, /agents/editor
✅ FastAPI server started at http://127.0.0.1:8005
📚 API documentation available at http://127.0.0.1:8005/docs

Custom Port and Host

# Custom port
praisonai serve agents.yaml --port 9000

# Custom host (allow external connections)
praisonai serve agents.yaml --host 0.0.0.0

# Both custom
praisonai serve agents.yaml --port 8080 --host 0.0.0.0

Alternative Flag Style

# Using the --serve flag instead of the serve command
praisonai agents.yaml --serve

# With options
praisonai agents.yaml --serve --port 8005

API Endpoints

When the server starts, it automatically creates these endpoints:
| Endpoint | Method | Description |
|---|---|---|
| `/agents` | POST | Run ALL agents sequentially |
| `/agents/{name}` | POST | Run a specific agent |
| `/agents/list` | GET | List all available agents |
| `/health` | GET | Health check |
| `/docs` | GET | Swagger API documentation |
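The endpoints above can be wrapped in a tiny client. This is a stdlib-only sketch: the class name and helper methods are illustrative, not part of PraisonAI, and it assumes the default serve address.

```python
import json
import urllib.request


class PraisonServeClient:
    """Minimal client sketch for the serve API endpoints listed above."""

    def __init__(self, base_url="http://127.0.0.1:8005"):
        self.base_url = base_url.rstrip("/")

    def url(self, path):
        # Join the base address with an endpoint path like "/agents"
        return f"{self.base_url}{path}"

    def run_all(self, query):
        # POST /agents -> runs every agent sequentially
        return self._post("/agents", {"query": query})

    def run_agent(self, name, query):
        # POST /agents/{name} -> runs a single agent
        return self._post(f"/agents/{name}", {"query": query})

    def _post(self, path, payload):
        req = urllib.request.Request(
            self.url(path),
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
```

With the server running, `PraisonServeClient().run_agent("researcher", "What are the latest AI trends?")` mirrors the curl examples below.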

Run All Agents

curl -X POST http://127.0.0.1:8005/agents \
  -H "Content-Type: application/json" \
  -d '{"query": "Research AI trends and write a summary"}'
Response:
{
  "query": "Research AI trends and write a summary",
  "results": [
    {"agent": "Researcher", "response": "...research findings..."},
    {"agent": "Writer", "response": "...written summary..."},
    {"agent": "Editor", "response": "...edited content..."}
  ],
  "final_response": "...final edited content..."
}
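Consuming the aggregated response is straightforward; this snippet uses the sample values shown above (abbreviated) to show how to index per-agent outputs and pull the final result.

```python
# Aggregated response shape documented above (values abbreviated).
response = {
    "query": "Research AI trends and write a summary",
    "results": [
        {"agent": "Researcher", "response": "...research findings..."},
        {"agent": "Writer", "response": "...written summary..."},
        {"agent": "Editor", "response": "...edited content..."},
    ],
    "final_response": "...final edited content...",
}

# Per-agent outputs keyed by agent name, plus the pipeline's final output.
by_agent = {r["agent"]: r["response"] for r in response["results"]}
final = response["final_response"]
```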

Run Specific Agent

# Run only the researcher agent
curl -X POST http://127.0.0.1:8005/agents/researcher \
  -H "Content-Type: application/json" \
  -d '{"query": "What are the latest AI trends?"}'
Response:
{
  "agent": "Researcher",
  "query": "What are the latest AI trends?",
  "response": "...research findings..."
}

List Available Agents

curl http://127.0.0.1:8005/agents/list
Response:
{
  "agents": [
    {"name": "Researcher", "id": "researcher"},
    {"name": "Writer", "id": "writer"},
    {"name": "Editor", "id": "editor"}
  ]
}

Example agents.yaml

name: Content Creation Pipeline
description: Research, write, and edit content

agents:
  researcher:
    name: Researcher
    role: Research Specialist
    goal: Find accurate and relevant information
    backstory: Expert at finding and synthesizing information
    llm: gpt-4o-mini

  writer:
    name: Writer
    role: Content Writer
    goal: Create engaging content from research
    backstory: Skilled writer who transforms research into readable content
    llm: gpt-4o-mini

  editor:
    name: Editor
    role: Content Editor
    goal: Polish and improve written content
    backstory: Meticulous editor ensuring quality and clarity
    llm: gpt-4o-mini
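Judging by the startup log, each agent's endpoint slug appears to be the lowercased agent name. A small sketch of that assumed convention, with the YAML's agents mirrored as a plain dict so no YAML parser is needed:

```python
# The agents mapping above, mirrored as a plain dict (PyYAML not required here).
config = {
    "agents": {
        "researcher": {"name": "Researcher"},
        "writer": {"name": "Writer"},
        "editor": {"name": "Editor"},
    }
}

def agent_endpoint(name: str) -> str:
    # Assumed convention from the startup log: slug = lowercased agent name.
    return f"/agents/{name.lower()}"

endpoints = [agent_endpoint(cfg["name"]) for cfg in config["agents"].values()]
print(endpoints)
# → ['/agents/researcher', '/agents/writer', '/agents/editor']
```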

Integration with n8n

The serve command works seamlessly with n8n workflows:
# Terminal 1: Start the API server
praisonai serve agents.yaml --port 8005

# Terminal 2: Create n8n workflow
praisonai agents.yaml --n8n
The n8n workflow will call individual agent endpoints, allowing you to:
  • Visualize agent execution flow
  • Add conditional logic between agents
  • Integrate with other n8n nodes

Use Cases

Microservices

Expose agents as REST APIs for microservice architectures

n8n Integration

Connect agents to n8n workflows for automation

Web Applications

Backend API for web or mobile applications

Testing

Test agents via HTTP requests during development

Python SDK Equivalent

The serve command is roughly equivalent to this Python SDK usage:
from praisonaiagents import Agent, PraisonAIAgents
import yaml

# Load agents from YAML
with open('agents.yaml', 'r') as f:
    config = yaml.safe_load(f)

agents = []
for agent_id, cfg in config['agents'].items():
    agent = Agent(
        name=cfg.get('name', agent_id),
        role=cfg.get('role', ''),
        goal=cfg.get('goal', ''),
        backstory=cfg.get('backstory', ''),
        llm=cfg.get('llm', 'gpt-4o-mini')
    )
    agents.append(agent)

# Start server
praison = PraisonAIAgents(agents=agents)
praison.launch(port=8005, host='127.0.0.1')

Command Options

| Option | Default | Description |
|---|---|---|
| `--port` | 8005 | Server port |
| `--host` | 127.0.0.1 | Server host |
| `--serve` | - | Flag to start server (alternative to the serve command) |