The `praisonai serve` command launches the various PraisonAI server types, all with unified discovery support.

## Server Types

| Command | Protocol | Port | Description |
|---|---|---|---|
| `praisonai serve agents` | HTTP | 8000 | Agents as HTTP REST API |
| `praisonai serve gateway` | WebSocket | 8765 | Multi-agent real-time coordination |
| `praisonai serve mcp` | STDIO/SSE | 8080 | MCP server for Claude/Cursor |
| `praisonai serve acp` | STDIO | - | Agent Client Protocol for IDEs |
| `praisonai serve lsp` | STDIO | - | Language Server Protocol |
| `praisonai serve ui` | HTTP | 8082 | Chainlit web interface |
| `praisonai serve rag` | HTTP | 9000 | RAG query server |
| `praisonai serve registry` | HTTP | 7777 | Package registry server |
| `praisonai serve docs` | HTTP | 3000 | Documentation preview |
| `praisonai serve scheduler` | Background | - | Job scheduler daemon |
| `praisonai serve recipe` | HTTP | 8765 | Recipe runner server |
| `praisonai serve a2a` | JSON-RPC | 8001 | Agent-to-Agent protocol |
| `praisonai serve a2u` | SSE | 8002 | Agent-to-User event stream |
| `praisonai serve unified` | HTTP/SSE | 8765 | All providers combined |

## Bot Servers (Messaging Platforms)

| Command | Protocol | Description |
|---|---|---|
| `praisonai bot telegram` | Telegram API | Connect agent to Telegram |
| `praisonai bot discord` | Discord API | Connect agent to Discord |
| `praisonai bot slack` | Slack API | Connect agent to Slack |

## Quick Start

```bash
# Agents server
praisonai serve agents --file agents.yaml --port 8000

# Unified server (all providers)
praisonai serve unified --port 8765

# Legacy syntax (still supported)
praisonai serve agents.yaml
```

## Usage

### Basic Server

```bash
# Start server with default settings (port 8005, host 127.0.0.1)
praisonai serve agents.yaml
```

Expected output:

```text
📄 Loading agents from: agents.yaml
  ✓ Loaded: Researcher
  ✓ Loaded: Writer
  ✓ Loaded: Editor

🚀 Starting PraisonAI API server...
   Host: 127.0.0.1
   Port: 8005
   Agents: 3
🚀 Multi-Agent HTTP API available at http://127.0.0.1:8005/agents
📊 Available agents for this endpoint (3): Researcher, Writer, Editor
🔗 Per-agent endpoints: /agents/researcher, /agents/writer, /agents/editor
✅ FastAPI server started at http://127.0.0.1:8005
📚 API documentation available at http://127.0.0.1:8005/docs
```

### Custom Port and Host

```bash
# Custom port
praisonai serve agents.yaml --port 9000

# Custom host (allow external connections)
praisonai serve agents.yaml --host 0.0.0.0

# Both custom
praisonai serve agents.yaml --port 8080 --host 0.0.0.0
```

### Alternative Flag Style

```bash
# Using the --serve flag instead of the serve command
praisonai agents.yaml --serve

# With options
praisonai agents.yaml --serve --port 8005
```

## API Endpoints

When the server starts, it automatically creates these endpoints:

| Endpoint | Method | Description |
|---|---|---|
| `/agents` | POST | Run ALL agents sequentially |
| `/agents/{name}` | POST | Run a specific agent |
| `/agents/list` | GET | List all available agents |
| `/health` | GET | Health check |
| `/docs` | GET | Swagger API documentation |

### Run All Agents

```bash
curl -X POST http://127.0.0.1:8005/agents \
  -H "Content-Type: application/json" \
  -d '{"query": "Research AI trends and write a summary"}'
```

Response:

```json
{
  "query": "Research AI trends and write a summary",
  "results": [
    {"agent": "Researcher", "response": "...research findings..."},
    {"agent": "Writer", "response": "...written summary..."},
    {"agent": "Editor", "response": "...edited content..."}
  ],
  "final_response": "...final edited content..."
}
```
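A response in this shape is easy to unpack with the standard library alone; the sample payload below mirrors the one above:

```python
import json

# Sample response body, matching the shape returned by POST /agents
body = '''{
  "query": "Research AI trends and write a summary",
  "results": [
    {"agent": "Researcher", "response": "...research findings..."},
    {"agent": "Writer", "response": "...written summary..."},
    {"agent": "Editor", "response": "...edited content..."}
  ],
  "final_response": "...final edited content..."
}'''

data = json.loads(body)

# Per-agent outputs, keyed by agent name in execution order
steps = {r["agent"]: r["response"] for r in data["results"]}
print(list(steps))            # ['Researcher', 'Writer', 'Editor']
print(data["final_response"])
```

The `results` list preserves the sequential execution order, while `final_response` carries the output of the last agent in the chain.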

### Run Specific Agent

```bash
# Run only the researcher agent
curl -X POST http://127.0.0.1:8005/agents/researcher \
  -H "Content-Type: application/json" \
  -d '{"query": "What are the latest AI trends?"}'
```

Response:

```json
{
  "agent": "Researcher",
  "query": "What are the latest AI trends?",
  "response": "...research findings..."
}
```

### List Available Agents

```bash
curl http://127.0.0.1:8005/agents/list
```

Response:

```json
{
  "agents": [
    {"name": "Researcher", "id": "researcher"},
    {"name": "Writer", "id": "writer"},
    {"name": "Editor", "id": "editor"}
  ]
}
```
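The same calls can be made from Python with only the standard library. This is a sketch, not part of the PraisonAI SDK; `BASE` is an assumption matching the default legacy-syntax address shown above:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:8005"  # assumed server address (legacy default port)

def agent_url(agent_id: str) -> str:
    """Build the per-agent endpoint URL, e.g. /agents/researcher."""
    return f"{BASE}/agents/{agent_id}"

def run_agent(agent_id: str, query: str) -> dict:
    """POST a query to one agent and return the decoded JSON response."""
    req = urllib.request.Request(
        agent_url(agent_id),
        data=json.dumps({"query": query}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running server):
# print(run_agent("researcher", "What are the latest AI trends?")["response"])
```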

## Example agents.yaml

```yaml
name: Content Creation Pipeline
description: Research, write, and edit content

agents:
  researcher:
    name: Researcher
    role: Research Specialist
    goal: Find accurate and relevant information
    backstory: Expert at finding and synthesizing information
    llm: gpt-4o-mini

  writer:
    name: Writer
    role: Content Writer
    goal: Create engaging content from research
    backstory: Skilled writer who transforms research into readable content
    llm: gpt-4o-mini

  editor:
    name: Editor
    role: Content Editor
    goal: Polish and improve written content
    backstory: Meticulous editor ensuring quality and clarity
    llm: gpt-4o-mini
```

## Integration with n8n

The serve command works seamlessly with n8n workflows:

```bash
# Terminal 1: Start the API server
praisonai serve agents.yaml --port 8005

# Terminal 2: Create the n8n workflow
praisonai agents.yaml --n8n
```

The n8n workflow calls the individual agent endpoints, allowing you to:

- Visualize agent execution flow
- Add conditional logic between agents
- Integrate with other n8n nodes

## Use Cases

- **Microservices**: Expose agents as REST APIs in a microservice architecture
- **n8n Integration**: Connect agents to n8n workflows for automation
- **Web Applications**: Backend API for web or mobile applications
- **Testing**: Test agents via HTTP requests during development

## Python SDK Equivalent

The serve command is equivalent to:

```python
from praisonaiagents import Agent, AgentTeam
import yaml

# Load the agent definitions from YAML
with open('agents.yaml', 'r') as f:
    config = yaml.safe_load(f)

agents = []
for agent_id, cfg in config['agents'].items():
    agent = Agent(
        name=cfg.get('name', agent_id),
        role=cfg.get('role', ''),
        goal=cfg.get('goal', ''),
        llm=cfg.get('llm', 'gpt-4o-mini')
    )
    agents.append(agent)

# Start the HTTP server
praison = AgentTeam(agents=agents)
praison.launch(port=8005, host='127.0.0.1')
```

## Command Options

### Global Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Server host to bind to |
| `--port` | varies | Server port |

### Agents Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8000 | Port to bind to |
| `--file` | agents.yaml | Agents YAML file |
| `--reload` | false | Enable hot reload |
| `--api-key` | - | API key for authentication |

### Gateway Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8765 | Port to bind to |
| `--agents` | - | Agents YAML file |

### MCP Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8080 | Port to bind to |
| `--transport` | stdio | Transport: stdio, sse, http-stream |
| `--name` | - | Server name from config |

### ACP Server Options

| Option | Default | Description |
|---|---|---|
| `--workspace` | . | Project workspace path |
| `--agent` | default | Agent name or config file |
| `--model` | - | LLM model to use |
| `--debug` | false | Enable debug logging |

### LSP Server Options

| Option | Default | Description |
|---|---|---|
| `--language` | python | Language server type |

### UI Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8082 | Port to bind to |
| `--type` | agents | UI type: agents, chat, code, realtime |

### RAG Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 9000 | Port to bind to |
| `--collection` | default | Collection name |

### Registry Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 7777 | Port to bind to |
| `--token` | - | Authentication token |
| `--read-only` | false | Read-only mode |

### Docs Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 3000 | Port to bind to |
| `--path` | . | Documentation path |

### Scheduler Options

| Option | Default | Description |
|---|---|---|
| `--config` | - | Scheduler config file |
| `--daemon` | false | Run as daemon |

### Recipe Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8765 | Port to bind to |
| `--config` | - | Config file path |
| `--reload` | false | Enable hot reload |

### A2A Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8001 | Port to bind to |
| `--file` | agents.yaml | Agents YAML file |

### A2U Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8002 | Port to bind to |

### Unified Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8765 | Port to bind to |
| `--file` | agents.yaml | Agents YAML file |
| `--reload` | false | Enable hot reload |

### Bot Server Options (Telegram, Discord, Slack)

| Option | Default | Description |
|---|---|---|
| `--token` | - | Bot API token (or use env var) |
| `--agent-file` | - | Agent configuration file |

Environment variables:

- `TELEGRAM_BOT_TOKEN` - Telegram bot token
- `DISCORD_BOT_TOKEN` - Discord bot token
- `SLACK_BOT_TOKEN` - Slack bot token

## Discovery Endpoint

All servers expose a unified discovery endpoint at `/__praisonai__/discovery`:

```bash
curl http://localhost:8765/__praisonai__/discovery
```

Response:

```json
{
  "schema_version": "1.0.0",
  "server_name": "praisonai-unified",
  "providers": [
    {"type": "agents-api", "name": "Agents API"},
    {"type": "mcp", "name": "MCP Server"}
  ],
  "endpoints": [
    {"name": "agents", "provider_type": "agents-api"},
    {"name": "mcp/tools", "provider_type": "mcp"}
  ]
}
```
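A client can use this payload to learn which providers a server hosts before calling anything else. The sketch below parses the sample response above with the standard library:

```python
import json

# Sample discovery document, matching the response shown above
doc = json.loads('''{
  "schema_version": "1.0.0",
  "server_name": "praisonai-unified",
  "providers": [
    {"type": "agents-api", "name": "Agents API"},
    {"type": "mcp", "name": "MCP Server"}
  ],
  "endpoints": [
    {"name": "agents", "provider_type": "agents-api"},
    {"name": "mcp/tools", "provider_type": "mcp"}
  ]
}''')

# Group endpoint names by the provider that serves them
by_provider = {}
for ep in doc["endpoints"]:
    by_provider.setdefault(ep["provider_type"], []).append(ep["name"])

print(by_provider)  # {'agents-api': ['agents'], 'mcp': ['mcp/tools']}
```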

## Server-Specific Commands

### A2A Server

```bash
# Start A2A server
praisonai serve a2a --port 8082

# Test the agent card
curl http://localhost:8082/.well-known/agent.json

# Send an A2A message
curl -X POST http://localhost:8082/a2a \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"message/send","id":"1","params":{"message":{"role":"user","parts":[{"type":"text","text":"Hello!"}]}}}'
```
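Building that JSON-RPC envelope by hand is error-prone; a minimal Python helper (a sketch for illustration, not part of the PraisonAI SDK) could look like this:

```python
import json

def a2a_message(text: str, msg_id: str = "1") -> str:
    """Build a message/send JSON-RPC envelope like the curl example above."""
    envelope = {
        "jsonrpc": "2.0",
        "method": "message/send",
        "id": msg_id,
        "params": {
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            }
        },
    }
    return json.dumps(envelope)

body = a2a_message("Hello!")
# POST `body` to http://localhost:8082/a2a with Content-Type: application/json
```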

### A2U Server

```bash
# Start A2U event stream server
praisonai serve a2u --port 8083

# Get info
curl http://localhost:8083/a2u/info

# Subscribe to events (SSE)
curl -N http://localhost:8083/a2u/events/events
```
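SSE delivers events as `data:` lines separated by blank lines. A minimal parser for such a stream can be sketched as follows; the event payloads here are hypothetical, since the A2U event schema is not shown above:

```python
import json

def parse_sse(stream: str):
    """Yield the JSON payload of each `data:` frame in an SSE stream."""
    for frame in stream.split("\n\n"):
        for line in frame.splitlines():
            if line.startswith("data:"):
                yield json.loads(line[len("data:"):].strip())

# Example stream with two hypothetical events
sample = 'data: {"event": "start"}\n\ndata: {"event": "done"}\n\n'
events = list(parse_sse(sample))
print(events)  # [{'event': 'start'}, {'event': 'done'}]
```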

### MCP Server

```bash
# HTTP transport
praisonai serve mcp --transport http --port 8080

# SSE transport
praisonai serve mcp --transport sse --port 8080

# List tools
curl http://localhost:8080/mcp/tools
```

### Tools MCP Server

```bash
# Start tools as an MCP server
praisonai serve tools --port 8081

# SSE endpoint for Claude Desktop
curl http://localhost:8081/sse
```