The --n8n flag exports your PraisonAI workflow to n8n format and optionally auto-imports it into your n8n instance.

Quick Start

praisonai agents.yaml --n8n

Usage

Basic Export

# Export workflow to n8n JSON and open in browser
praisonai agents.yaml --n8n
Expected Output:
✅ Workflow converted successfully!
📄 JSON saved to: agents_n8n.json
🌐 Opening: http://localhost:5678/workflow/new

Auto-Import with API Key

# Set n8n API key for automatic import
export N8N_API_KEY="your-api-key"

# Export and auto-import
praisonai agents.yaml --n8n
Expected Output:
✅ Workflow converted successfully!
📄 JSON saved to: agents_n8n.json
🚀 Workflow created in n8n!
✅ Workflow activated!

🔗 Webhook URL (to trigger workflow):
   POST http://localhost:5678/webhook/your-workflow-name
🌐 Opening: http://localhost:5678/workflow/abc123

Custom n8n URL

# Use custom n8n instance
praisonai agents.yaml --n8n --n8n-url http://n8n.example.com:5678

Custom API URL (Cloud/Tunnel)

When n8n is in the cloud and PraisonAI runs locally, use --api-url to specify a tunnel or cloud URL:
# With Cloudflare Tunnel
praisonai agents.yaml --n8n --api-url https://praisonai.yourdomain.com

# With ngrok
praisonai agents.yaml --n8n --api-url https://abc123.ngrok-free.app

# With cloud deployment
praisonai agents.yaml --n8n --api-url https://praisonai-api.railway.app

Generated Workflow Structure

The n8n workflow includes:
┌─────────────┐     ┌────────────┐     ┌────────────┐     ┌────────────┐
│   Webhook   │────▶│ Researcher │────▶│   Writer   │────▶│   Editor   │
│   Trigger   │     │            │     │            │     │            │
└─────────────┘     └────────────┘     └────────────┘     └────────────┘
                          │                  │                  │
                          ▼                  ▼                  ▼
                    /agents/researcher  /agents/writer   /agents/editor
Each agent becomes an HTTP Request node that calls the corresponding PraisonAI API endpoint.
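The exported file follows n8n's standard workflow schema: a `nodes` array plus a `connections` map that wires one node's output into the next node's input. The exact JSON PraisonAI emits may differ in detail; the following is a hand-written Python sketch of that shape (node names, parameters, and positions here are illustrative, not the verified export):

```python
import json

# Illustrative sketch of the agents_n8n.json shape. The webhook trigger
# node feeds the first agent's HTTP Request node; further agents chain on
# in the same way via the "connections" map.
workflow = {
    "name": "My Workflow",
    "nodes": [
        {
            "name": "Webhook Trigger",
            "type": "n8n-nodes-base.webhook",
            "parameters": {"httpMethod": "POST", "path": "my-workflow"},
            "position": [0, 0],
        },
        {
            "name": "Researcher",
            "type": "n8n-nodes-base.httpRequest",
            "parameters": {
                "method": "POST",
                "url": "http://127.0.0.1:8005/agents/researcher",
            },
            "position": [220, 0],
        },
    ],
    # n8n expresses edges as source-node -> list of target nodes.
    "connections": {
        "Webhook Trigger": {
            "main": [[{"node": "Researcher", "type": "main", "index": 0}]]
        }
    },
}

print([node["name"] for node in workflow["nodes"]])
```

Inspecting the real `agents_n8n.json` with `json.load` is a quick way to see exactly which parameters your installed version generates.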

Complete Workflow

Step 1: Start the API Server

# Terminal 1
praisonai serve agents.yaml --port 8005

Step 2: Create n8n Workflow

# Terminal 2
export N8N_API_KEY="your-api-key"
praisonai agents.yaml --n8n

Step 3: Trigger the Workflow

# Via webhook
curl -X POST "http://localhost:5678/webhook/your-workflow-name" \
  -H "Content-Type: application/json" \
  -d '{"query": "Research AI trends and write a blog post"}'

Getting n8n API Key

  1. Open n8n UI (http://localhost:5678)
  2. Go to Settings → API
  3. Click Create API Key
  4. Copy the key and set it:
export N8N_API_KEY="your-api-key"

Example agents.yaml

name: Create Movie Script About Cat in Mars
description: Research, design narrative, and write script

agents:
  researcher:
    name: Researcher
    role: Research Specialist
    goal: Research about cats and Mars for the movie
    backstory: Expert researcher with knowledge of space and animals
    llm: gpt-4o-mini

  narrative_designer:
    name: Narrative Designer
    role: Story Designer
    goal: Design the narrative structure
    backstory: Creative storyteller who crafts compelling narratives
    llm: gpt-4o-mini

  scriptwriter:
    name: Scriptwriter
    role: Script Writer
    goal: Write the final movie script
    backstory: Professional screenwriter with Hollywood experience
    llm: gpt-4o-mini

n8n Workflow Features

Webhook Trigger

The workflow uses a webhook trigger for programmatic execution:
  • Path: Auto-generated from workflow name
  • Method: POST
  • Response Mode: Returns final agent output
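The exact path-generation rules belong to the exporter, but a typical slugification of the workflow name (an assumption here, not PraisonAI's verified logic) would look like:

```python
import re

def webhook_path(workflow_name: str) -> str:
    """Hypothetical sketch: lowercase the name and collapse runs of
    non-alphanumeric characters into single hyphens, as URL-safe
    webhook paths usually are."""
    slug = re.sub(r"[^a-z0-9]+", "-", workflow_name.lower())
    return slug.strip("-")

print(webhook_path("Create Movie Script About Cat in Mars"))
```

Under that assumption, the trigger URL would be `http://localhost:5678/webhook/<slug>`; check the webhook node in the n8n UI for the actual path.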

Per-Agent HTTP Nodes

Each agent gets its own HTTP Request node:
Node                 Endpoint                      Purpose
Researcher           /agents/researcher            First agent, receives webhook input
Narrative Designer   /agents/narrative_designer    Receives researcher output
Scriptwriter         /agents/scriptwriter          Receives designer output, returns final output

Data Flow

// Webhook input
{"query": "Create a movie about a cat on Mars"}

// Passed to Researcher
{"query": "Create a movie about a cat on Mars"}

// Researcher output → Narrative Designer input
{"query": "Research findings about cats and Mars..."}

// Narrative Designer output → Scriptwriter input
{"query": "Narrative structure: Act 1..."}

// Final output returned to webhook caller
{"response": "FADE IN: EXT. MARS SURFACE..."}
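To make the hand-off explicit, here is a minimal Python simulation of that chaining, with stub functions standing in for the real HTTP calls to the PraisonAI API (the stubs and their return strings are illustrative only):

```python
def run_chain(agents, webhook_input):
    """Thread the webhook payload through each agent in order:
    every agent's output becomes the next agent's "query" input."""
    query = webhook_input["query"]
    for agent in agents:
        query = agent(query)   # in n8n, this is an HTTP Request node
    return {"response": query}  # final output returned to the caller

# Stubs standing in for POST /agents/<name> calls.
researcher = lambda q: f"Research findings for: {q}"
designer = lambda q: f"Narrative structure based on: {q}"
scriptwriter = lambda q: f"FADE IN: script derived from: {q}"

result = run_chain(
    [researcher, designer, scriptwriter],
    {"query": "Create a movie about a cat on Mars"},
)
print(result["response"])
```

Each stage only ever sees a single `query` string, which is why every intermediate payload in the trace above has the same one-key shape.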

Use Cases

Visual Workflow

See agent execution flow in n8n’s visual editor

Conditional Logic

Add IF nodes between agents for branching

Integration

Connect to other n8n nodes (Slack, Email, etc.)

Scheduling

Use n8n’s scheduler to run workflows periodically

Advanced: Manual Import

If auto-import fails, manually import the generated JSON:
  1. Run praisonai agents.yaml --n8n
  2. Open n8n UI
  3. Click Add Workflow → Import from File
  4. Select agents_n8n.json
  5. Click Import

Troubleshooting

Connection Refused

Error: ECONNREFUSED 127.0.0.1:8005
Solution: Start the PraisonAI server first:
praisonai serve agents.yaml --port 8005

API Key Invalid

Error: 401 Unauthorized
Solution: Verify your n8n API key:
curl -H "X-N8N-API-KEY: $N8N_API_KEY" http://localhost:5678/api/v1/workflows

Workflow Not Activating

Solution: Manually activate in n8n UI or check webhook settings.

Command Options

Option       Default                  Description
--n8n        -                        Enable n8n export
--n8n-url    http://localhost:5678    n8n instance URL
--api-url    http://127.0.0.1:8005    PraisonAI API URL (for tunnel/cloud)

Environment Variables

Variable      Description
N8N_API_KEY   n8n API key for auto-import

Cloud/Tunnel Setup

When n8n runs in the cloud but PraisonAI runs locally, you need to expose your local API.

Option 1: Cloudflare Tunnel

Free, stable URLs, unlimited bandwidth.
# Install cloudflared
brew install cloudflared  # macOS
# or: https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/

# Authenticate
cloudflared tunnel login

# Create tunnel
cloudflared tunnel create praisonai

# Create config (~/.cloudflared/config.yml)
cat > ~/.cloudflared/config.yml << EOF
tunnel: <TUNNEL_ID>
credentials-file: ~/.cloudflared/<TUNNEL_ID>.json
ingress:
  - hostname: praisonai.yourdomain.com
    service: http://localhost:8005
  - service: http_status:404
EOF

# Run tunnel
cloudflared tunnel run praisonai
Then use:
praisonai agents.yaml --n8n --api-url https://praisonai.yourdomain.com

Option 2: ngrok (Quick Testing)

Easy setup, URL changes on restart (free tier).
# Install
brew install ngrok

# Auth (one-time)
ngrok config add-authtoken <YOUR_TOKEN>

# Start tunnel
ngrok http 8005
# Output: https://abc123.ngrok-free.app
Then use:
praisonai agents.yaml --n8n --api-url https://abc123.ngrok-free.app

Option 3: Deploy to Cloud

Deploy the PraisonAI API to Railway, Render, or Fly.io. Example Dockerfile:
FROM python:3.11-slim
WORKDIR /app
COPY agents.yaml .
RUN pip install praisonai praisonaiagents
EXPOSE 8005
CMD ["praisonai", "serve", "agents.yaml", "--host", "0.0.0.0", "--port", "8005"]
Deploy to Railway:
railway up
Then use:
praisonai agents.yaml --n8n --api-url https://your-app.railway.app