The praisonai serve command launches various PraisonAI server types with unified discovery support.

Server Types

| Command | Description |
|---|---|
| `praisonai serve agents` | Launch agents as an HTTP API |
| `praisonai serve recipe` | Launch recipe runner server |
| `praisonai serve mcp` | Launch MCP server |
| `praisonai serve tools` | Launch tools as MCP server |
| `praisonai serve a2a` | Launch A2A protocol server |
| `praisonai serve a2u` | Launch A2U event stream server |
| `praisonai serve unified` | Launch unified server with all providers |

Quick Start

# Agents server
praisonai serve agents --file agents.yaml --port 8000

# Unified server (all providers)
praisonai serve unified --port 8765

# Legacy syntax (still supported)
praisonai serve agents.yaml

Usage

Basic Server

# Start server with default settings (port 8005, host 127.0.0.1)
praisonai serve agents.yaml
Expected Output:
📄 Loading agents from: agents.yaml
  ✓ Loaded: Researcher
  ✓ Loaded: Writer
  ✓ Loaded: Editor

🚀 Starting PraisonAI API server...
   Host: 127.0.0.1
   Port: 8005
   Agents: 3
🚀 Multi-Agent HTTP API available at http://127.0.0.1:8005/agents
📊 Available agents for this endpoint (3): Researcher, Writer, Editor
🔗 Per-agent endpoints: /agents/researcher, /agents/writer, /agents/editor
✅ FastAPI server started at http://127.0.0.1:8005
📚 API documentation available at http://127.0.0.1:8005/docs
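The startup log suggests that per-agent endpoint ids are derived from agent names ("Researcher" becomes /agents/researcher). A hypothetical client-side helper, assuming ids are simply the lowercased names (verify against /agents/list for your deployment):

```python
def agent_endpoint(name: str, base: str = "") -> str:
    """Derive a per-agent endpoint path from an agent name.

    Assumption: ids are lowercased names with spaces hyphenated,
    as the startup log suggests. Confirm via GET /agents/list.
    """
    return f"{base}/agents/{name.strip().lower().replace(' ', '-')}"

# Example: agent_endpoint("Researcher", "http://127.0.0.1:8005")
```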

Custom Port and Host

# Custom port
praisonai serve agents.yaml --port 9000

# Custom host (allow external connections)
praisonai serve agents.yaml --host 0.0.0.0

# Both custom
praisonai serve agents.yaml --port 8080 --host 0.0.0.0

Alternative Flag Style

# Using --serve flag instead of serve command
praisonai agents.yaml --serve

# With options
praisonai agents.yaml --serve --port 8005

API Endpoints

When the server starts, it automatically creates these endpoints:
| Endpoint | Method | Description |
|---|---|---|
| `/agents` | POST | Run ALL agents sequentially |
| `/agents/{name}` | POST | Run a specific agent |
| `/agents/list` | GET | List all available agents |
| `/health` | GET | Health check |
| `/docs` | GET | Swagger API documentation |

Run All Agents

curl -X POST http://127.0.0.1:8005/agents \
  -H "Content-Type: application/json" \
  -d '{"query": "Research AI trends and write a summary"}'
Response:
{
  "query": "Research AI trends and write a summary",
  "results": [
    {"agent": "Researcher", "response": "...research findings..."},
    {"agent": "Writer", "response": "...written summary..."},
    {"agent": "Editor", "response": "...edited content..."}
  ],
  "final_response": "...final edited content..."
}

Run Specific Agent

# Run only the researcher agent
curl -X POST http://127.0.0.1:8005/agents/researcher \
  -H "Content-Type: application/json" \
  -d '{"query": "What are the latest AI trends?"}'
Response:
{
  "agent": "Researcher",
  "query": "What are the latest AI trends?",
  "response": "...research findings..."
}

List Available Agents

curl http://127.0.0.1:8005/agents/list
Response:
{
  "agents": [
    {"name": "Researcher", "id": "researcher"},
    {"name": "Writer", "id": "writer"},
    {"name": "Editor", "id": "editor"}
  ]
}

Example agents.yaml

name: Content Creation Pipeline
description: Research, write, and edit content

agents:
  researcher:
    name: Researcher
    role: Research Specialist
    goal: Find accurate and relevant information
    backstory: Expert at finding and synthesizing information
    llm: gpt-4o-mini

  writer:
    name: Writer
    role: Content Writer
    goal: Create engaging content from research
    backstory: Skilled writer who transforms research into readable content
    llm: gpt-4o-mini

  editor:
    name: Editor
    role: Content Editor
    goal: Polish and improve written content
    backstory: Meticulous editor ensuring quality and clarity
    llm: gpt-4o-mini

Integration with n8n

The serve command works seamlessly with n8n workflows:
# Terminal 1: Start the API server
praisonai serve agents.yaml --port 8005

# Terminal 2: Create n8n workflow
praisonai agents.yaml --n8n
The n8n workflow will call individual agent endpoints, allowing you to:
  • Visualize agent execution flow
  • Add conditional logic between agents
  • Integrate with other n8n nodes

Use Cases

Microservices

Expose agents as REST APIs for microservice architectures

n8n Integration

Connect agents to n8n workflows for automation

Web Applications

Backend API for web or mobile applications

Testing

Test agents via HTTP requests during development

Python SDK Equivalent

The serve command is roughly equivalent to:
from praisonaiagents import Agent, Agents
import yaml

# Load agents from YAML
with open('agents.yaml', 'r') as f:
    config = yaml.safe_load(f)

agents = []
for agent_id, cfg in config['agents'].items():
    agent = Agent(
        name=cfg.get('name', agent_id),
        role=cfg.get('role', ''),
        goal=cfg.get('goal', ''),
        backstory=cfg.get('backstory', ''),
        llm=cfg.get('llm', 'gpt-4o-mini')
    )
    agents.append(agent)

# Start server
praison = Agents(agents=agents)
praison.launch(port=8005, host='127.0.0.1')

Command Options

Global Options

| Option | Default | Description |
|---|---|---|
| `--port` | `8765` | Server port |
| `--host` | `127.0.0.1` | Server host |
| `--auth` | `none` | Auth type (`none`, `api-key`, `jwt`) |
| `--api-key` | - | API key for authentication |

Agents Server Options

| Option | Description |
|---|---|
| `--file` | YAML file with agent definitions |
| `--stream` | Enable SSE streaming |
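With --stream enabled the server emits Server-Sent Events. The exact event payloads are not documented here, so treat each `data:` line as opaque text or JSON; a minimal parser for the standard text/event-stream framing might look like:

```python
def parse_sse(raw: str) -> list[str]:
    """Split a text/event-stream body into its `data:` payloads.

    Events are separated by blank lines; consecutive data fields
    within one event are joined with newlines, per the SSE format.
    """
    events, data = [], []
    for line in raw.splitlines():
        if line.startswith("data:"):
            data.append(line[5:].lstrip())
        elif not line and data:
            events.append("\n".join(data))
            data = []
    if data:  # stream ended without a trailing blank line
        events.append("\n".join(data))
    return events
```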

MCP Server Options

| Option | Description |
|---|---|
| `--transport` | Transport type (`http`, `sse`, `stdio`) |
| `--tools` | Comma-separated tool names |

Recipe Server Options

| Option | Description |
|---|---|
| `--config` | Server configuration file |
| `--preload` | Preload recipes on startup |

Discovery Endpoint

All servers expose a unified discovery endpoint at /__praisonai__/discovery:
curl http://localhost:8765/__praisonai__/discovery
Response:
{
  "schema_version": "1.0.0",
  "server_name": "praisonai-unified",
  "providers": [
    {"type": "agents-api", "name": "Agents API"},
    {"type": "mcp", "name": "MCP Server"}
  ],
  "endpoints": [
    {"name": "agents", "provider_type": "agents-api"},
    {"name": "mcp/tools", "provider_type": "mcp"}
  ]
}
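A client can use the discovery payload to route requests to the right provider. A sketch that groups the documented `endpoints` field by `provider_type` (field names taken from the sample response above):

```python
def endpoints_by_provider(discovery: dict) -> dict[str, list[str]]:
    """Group endpoint names by provider_type, per the discovery schema."""
    grouped: dict[str, list[str]] = {}
    for ep in discovery.get("endpoints", []):
        grouped.setdefault(ep["provider_type"], []).append(ep["name"])
    return grouped
```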

Server-Specific Commands

A2A Server

# Start A2A server
praisonai serve a2a --port 8082

# Test agent card
curl http://localhost:8082/.well-known/agent.json

# Send A2A message
curl -X POST http://localhost:8082/a2a \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"message/send","id":"1","params":{"message":{"role":"user","parts":[{"type":"text","text":"Hello!"}]}}}'
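The A2A request body is plain JSON-RPC 2.0, so constructing it programmatically avoids shell-quoting mistakes. This mirrors the curl payload above (helper name is illustrative, not an SDK function):

```python
import json

def a2a_message(text: str, msg_id: str = "1") -> str:
    """Build the JSON-RPC 2.0 `message/send` body shown in the curl example."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "message/send",
        "id": msg_id,
        "params": {
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            }
        },
    })
```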

A2U Server

# Start A2U event stream server
praisonai serve a2u --port 8083

# Get info
curl http://localhost:8083/a2u/info

# Subscribe to events (SSE)
curl -N http://localhost:8083/a2u/events/events

MCP Server

# HTTP transport
praisonai serve mcp --transport http --port 8080

# SSE transport
praisonai serve mcp --transport sse --port 8080

# List tools
curl http://localhost:8080/mcp/tools

Tools MCP Server

# Start tools as MCP server
praisonai serve tools --port 8081

# SSE endpoint for Claude Desktop
curl http://localhost:8081/sse