## Documentation Index

Fetch the complete documentation index at: https://docs.praison.ai/llms.txt. Use this file to discover all available pages before exploring further.

The `praisonai serve` command launches various PraisonAI server types with unified discovery support.
## Server Types

| Command | Protocol | Port | Description |
|---|---|---|---|
| `praisonai serve agents` | HTTP | 8000 | Agents as HTTP REST API |
| `praisonai serve gateway` | WebSocket | 8765 | Multi-agent real-time coordination |
| `praisonai serve mcp` | STDIO/SSE | 8080 | MCP server for Claude/Cursor |
| `praisonai serve acp` | STDIO | - | Agent Client Protocol for IDEs |
| `praisonai serve lsp` | STDIO | - | Language Server Protocol |
| `praisonai serve ui` | HTTP | 8082 | Chainlit web interface |
| `praisonai serve rag` | HTTP | 9000 | RAG query server |
| `praisonai serve registry` | HTTP | 7777 | Package registry server |
| `praisonai serve docs` | HTTP | 3000 | Documentation preview |
| `praisonai serve scheduler` | Background | - | Job scheduler daemon |
| `praisonai serve recipe` | HTTP | 8765 | Recipe runner server |
| `praisonai serve a2a` | JSON-RPC | 8001 | Agent-to-Agent protocol |
| `praisonai serve a2u` | SSE | 8002 | Agent-to-User event stream |
| `praisonai serve unified` | HTTP/SSE | 8765 | All providers combined |
### Bot Commands

| Command | Protocol | Description |
|---|---|---|
| `praisonai bot telegram` | Telegram API | Connect agent to Telegram |
| `praisonai bot discord` | Discord API | Connect agent to Discord |
| `praisonai bot slack` | Slack API | Connect agent to Slack |
## Quick Start

```bash
# Agents server
praisonai serve agents --file agents.yaml --port 8000

# Unified server (all providers)
praisonai serve unified --port 8765

# Legacy syntax (still supported)
praisonai serve agents.yaml
```
## Usage

### Basic Server

```bash
# Start server with default settings (port 8005, host 127.0.0.1)
praisonai serve agents.yaml
```
Expected output:

```
📄 Loading agents from: agents.yaml
✓ Loaded: Researcher
✓ Loaded: Writer
✓ Loaded: Editor
🚀 Starting PraisonAI API server...
   Host: 127.0.0.1
   Port: 8005
   Agents: 3
🚀 Multi-Agent HTTP API available at http://127.0.0.1:8005/agents
📊 Available agents for this endpoint (3): Researcher, Writer, Editor
🔗 Per-agent endpoints: /agents/researcher, /agents/writer, /agents/editor
✅ FastAPI server started at http://127.0.0.1:8005
📚 API documentation available at http://127.0.0.1:8005/docs
```
### Custom Port and Host

```bash
# Custom port
praisonai serve agents.yaml --port 9000

# Custom host (allow external connections)
praisonai serve agents.yaml --host 0.0.0.0

# Both custom
praisonai serve agents.yaml --port 8080 --host 0.0.0.0
```
### Alternative Flag Style

```bash
# Using the --serve flag instead of the serve command
praisonai agents.yaml --serve

# With options
praisonai agents.yaml --serve --port 8005
```
## API Endpoints

When the server starts, it automatically creates these endpoints:

| Endpoint | Method | Description |
|---|---|---|
| `/agents` | POST | Run all agents sequentially |
| `/agents/{name}` | POST | Run a specific agent |
| `/agents/list` | GET | List all available agents |
| `/health` | GET | Health check |
| `/docs` | GET | Swagger API documentation |
Run All Agents
curl -X POST http://127.0.0.1:8005/agents \
-H "Content-Type: application/json" \
-d '{"query": "Research AI trends and write a summary"}'
Response:

```json
{
  "query": "Research AI trends and write a summary",
  "results": [
    {"agent": "Researcher", "response": "...research findings..."},
    {"agent": "Writer", "response": "...written summary..."},
    {"agent": "Editor", "response": "...edited content..."}
  ],
  "final_response": "...final edited content..."
}
```
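The same request can be made from Python. Below is a minimal client sketch using only the standard library; the `run_agents` and `final_response` helper names are illustrative (not part of the SDK), and the base URL assumes the default host/port from the examples above:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8005"  # default host/port from the examples above

def run_agents(query, base_url=BASE_URL):
    """POST /agents with a query and return the decoded JSON reply."""
    req = urllib.request.Request(
        base_url + "/agents",
        data=json.dumps({"query": query}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def final_response(reply):
    """Extract the final edited content from the response shape shown above."""
    return reply["final_response"]
```

With the server running, `final_response(run_agents("Research AI trends"))` returns the pipeline's final output.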
### Run Specific Agent

```bash
# Run only the researcher agent
curl -X POST http://127.0.0.1:8005/agents/researcher \
  -H "Content-Type: application/json" \
  -d '{"query": "What are the latest AI trends?"}'
```
Response:

```json
{
  "agent": "Researcher",
  "query": "What are the latest AI trends?",
  "response": "...research findings..."
}
```
### List Available Agents

```bash
curl http://127.0.0.1:8005/agents/list
```
Response:

```json
{
  "agents": [
    {"name": "Researcher", "id": "researcher"},
    {"name": "Writer", "id": "writer"},
    {"name": "Editor", "id": "editor"}
  ]
}
```
## Example agents.yaml

```yaml
name: Content Creation Pipeline
description: Research, write, and edit content
agents:
  researcher:
    name: Researcher
    role: Research Specialist
    goal: Find accurate and relevant information
    backstory: Expert at finding and synthesizing information
    llm: gpt-4o-mini
  writer:
    name: Writer
    role: Content Writer
    goal: Create engaging content from research
    backstory: Skilled writer who transforms research into readable content
    llm: gpt-4o-mini
  editor:
    name: Editor
    role: Content Editor
    goal: Polish and improve written content
    backstory: Meticulous editor ensuring quality and clarity
    llm: gpt-4o-mini
```
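The per-agent endpoints in the startup output appear to be derived from the agent names. A small sketch of that mapping follows; the slug rule (lowercase the display name, spaces to hyphens) is an inference from the log lines above, not documented behavior, and the agents mapping is inlined so the sketch stays dependency-free (in practice you would `yaml.safe_load` the file):

```python
# Inline mirror of the agents: mapping from the YAML above.
config = {
    "agents": {
        "researcher": {"name": "Researcher"},
        "writer": {"name": "Writer"},
        "editor": {"name": "Editor"},
    }
}

def agent_endpoint(name):
    # Assumed slug rule: lowercase the display name, spaces become hyphens.
    return "/agents/" + name.strip().lower().replace(" ", "-")

endpoints = [agent_endpoint(a["name"]) for a in config["agents"].values()]
# endpoints == ["/agents/researcher", "/agents/writer", "/agents/editor"]
```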
## Integration with n8n

The `serve` command works seamlessly with n8n workflows:

```bash
# Terminal 1: Start the API server
praisonai serve agents.yaml --port 8005

# Terminal 2: Create n8n workflow
praisonai agents.yaml --n8n
```
The n8n workflow will call individual agent endpoints, allowing you to:
- Visualize agent execution flow
- Add conditional logic between agents
- Integrate with other n8n nodes
## Use Cases

- **Microservices**: Expose agents as REST APIs for microservice architectures
- **n8n Integration**: Connect agents to n8n workflows for automation
- **Web Applications**: Backend API for web or mobile applications
- **Testing**: Test agents via HTTP requests during development
## Python SDK Equivalent

The `serve` command is equivalent to:

```python
from praisonaiagents import Agent, AgentTeam
import yaml

# Load agents from YAML
with open('agents.yaml', 'r') as f:
    config = yaml.safe_load(f)

agents = []
for agent_id, cfg in config['agents'].items():
    agent = Agent(
        name=cfg.get('name', agent_id),
        role=cfg.get('role', ''),
        goal=cfg.get('goal', ''),
        llm=cfg.get('llm', 'gpt-4o-mini')
    )
    agents.append(agent)

# Start server
praison = AgentTeam(agents=agents)
praison.launch(port=8005, host='127.0.0.1')
```
## Command Options

### Global Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Server host to bind to |
| `--port` | varies | Server port |
### Agents Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8000 | Port to bind to |
| `--file` | agents.yaml | Agents YAML file |
| `--reload` | false | Enable hot reload |
| `--api-key` | - | API key for authentication |
### Gateway Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8765 | Port to bind to |
| `--agents` | - | Agents YAML file |
### MCP Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8080 | Port to bind to |
| `--transport` | stdio | Transport: stdio, sse, http-stream |
| `--name` | - | Server name from config |
### ACP Server Options

| Option | Default | Description |
|---|---|---|
| `--workspace` | . | Project workspace path |
| `--agent` | default | Agent name or config file |
| `--model` | - | LLM model to use |
| `--debug` | false | Enable debug logging |
### LSP Server Options

| Option | Default | Description |
|---|---|---|
| `--language` | python | Language server type |
### UI Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8082 | Port to bind to |
| `--type` | agents | UI type: agents, chat, code, realtime |
### RAG Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 9000 | Port to bind to |
| `--collection` | default | Collection name |
### Registry Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 7777 | Port to bind to |
| `--token` | - | Authentication token |
| `--read-only` | false | Read-only mode |
### Docs Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 3000 | Port to bind to |
| `--path` | . | Documentation path |
### Scheduler Options

| Option | Default | Description |
|---|---|---|
| `--config` | - | Scheduler config file |
| `--daemon` | false | Run as daemon |
### Recipe Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8765 | Port to bind to |
| `--config` | - | Config file path |
| `--reload` | false | Enable hot reload |
### A2A Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8001 | Port to bind to |
| `--file` | agents.yaml | Agents YAML file |
### A2U Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8002 | Port to bind to |
### Unified Server Options

| Option | Default | Description |
|---|---|---|
| `--host` | 127.0.0.1 | Host to bind to |
| `--port` | 8765 | Port to bind to |
| `--file` | agents.yaml | Agents YAML file |
| `--reload` | false | Enable hot reload |
### Bot Server Options (Telegram, Discord, Slack)

| Option | Default | Description |
|---|---|---|
| `--token` | - | Bot API token (or use env var) |
| `--agent-file` | - | Agent configuration file |
Environment variables:

- `TELEGRAM_BOT_TOKEN` - Telegram bot token
- `DISCORD_BOT_TOKEN` - Discord bot token
- `SLACK_BOT_TOKEN` - Slack bot token
## Discovery Endpoint

All servers expose a unified discovery endpoint at `/__praisonai__/discovery`:

```bash
curl http://localhost:8765/__praisonai__/discovery
```
Response:

```json
{
  "schema_version": "1.0.0",
  "server_name": "praisonai-unified",
  "providers": [
    {"type": "agents-api", "name": "Agents API"},
    {"type": "mcp", "name": "MCP Server"}
  ],
  "endpoints": [
    {"name": "agents", "provider_type": "agents-api"},
    {"name": "mcp/tools", "provider_type": "mcp"}
  ]
}
```
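The discovery document makes servers self-describing, so a client can learn which providers are available before choosing endpoints. A minimal stdlib-only sketch (the `fetch_discovery` and `provider_types` helper names are assumptions for illustration, not SDK functions):

```python
import json
import urllib.request

def fetch_discovery(base_url="http://localhost:8765"):
    """GET the unified discovery document from a running server."""
    url = base_url + "/__praisonai__/discovery"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

def provider_types(doc):
    """List the provider types a server advertises, per the shape above."""
    return [p["type"] for p in doc.get("providers", [])]
```

Against the unified server from the example, `provider_types(fetch_discovery())` would include entries such as `"agents-api"` and `"mcp"`.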
## Server-Specific Commands

### A2A Server

```bash
# Start A2A server
praisonai serve a2a --port 8082

# Test agent card
curl http://localhost:8082/.well-known/agent.json

# Send A2A message
curl -X POST http://localhost:8082/a2a \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"message/send","id":"1","params":{"message":{"role":"user","parts":[{"type":"text","text":"Hello!"}]}}}'
```
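The JSON-RPC envelope in that curl call can be built programmatically. A small sketch that mirrors the payload above (the `a2a_envelope` helper and its auto-incrementing request id are illustrative conveniences, not part of any PraisonAI API):

```python
import itertools

_ids = itertools.count(1)  # simple monotonically increasing request ids

def a2a_envelope(text):
    """Build the JSON-RPC 2.0 message/send envelope from the curl example."""
    return {
        "jsonrpc": "2.0",
        "method": "message/send",
        "id": str(next(_ids)),
        "params": {
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            }
        },
    }
```

Serialize the result with `json.dumps` and POST it to `/a2a` with a `Content-Type: application/json` header, exactly as the curl command does.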
### A2U Server

```bash
# Start A2U event stream server
praisonai serve a2u --port 8083

# Get info
curl http://localhost:8083/a2u/info

# Subscribe to events (SSE)
curl -N http://localhost:8083/a2u/events/events
```
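The A2U stream uses Server-Sent Events, where each event's payload arrives in `data:` lines terminated by a blank line. A minimal parser sketch for such a stream (illustration only; it ignores `id:`, `retry:`, and comment fields, and a real client would use an SSE library with reconnection):

```python
def iter_sse_data(lines):
    """Yield the data payload of each event from an iterable of SSE lines."""
    buf = []
    for line in lines:
        line = line.rstrip("\r\n")
        if line.startswith("data:"):
            buf.append(line[5:].lstrip())   # accumulate multi-line data fields
        elif line == "" and buf:
            yield "\n".join(buf)            # blank line ends the event
            buf = []
    if buf:
        yield "\n".join(buf)                # flush a trailing unterminated event
```

Feeding it the line iterator of an open HTTP response to `/a2u/events/events` yields one decoded payload per event.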
### MCP Server

```bash
# HTTP transport
praisonai serve mcp --transport http --port 8080

# SSE transport
praisonai serve mcp --transport sse --port 8080

# List tools
curl http://localhost:8080/mcp/tools

# Start tools as MCP server
praisonai serve tools --port 8081

# SSE endpoint for Claude Desktop
curl http://localhost:8081/sse
```