
Server Adapters

Deploy AI agents as HTTP servers using popular Node.js frameworks.

Quick Start

import { Agent, createHttpHandler } from 'praisonai-ts';
import http from 'http';

const agent = new Agent({
  name: 'APIAgent',
  instructions: 'You are a helpful API assistant.',
});

const handler = createHttpHandler({ agent });

const server = http.createServer(handler);
server.listen(3000, () => {
  console.log('Server running on http://localhost:3000');
});

Express

import express from 'express';
import { Agent, createExpressHandler } from 'praisonai-ts';

const app = express();
app.use(express.json());

const agent = new Agent({
  name: 'ExpressAgent',
  instructions: 'You are a helpful assistant.',
});

const handler = createExpressHandler({ agent });

app.post('/api/chat', handler);
app.post('/api/chat/stream', handler); // Streaming endpoint

app.listen(3000);

Hono

import { Hono } from 'hono';
import { Agent, createHonoHandler } from 'praisonai-ts';

const app = new Hono();

const agent = new Agent({
  name: 'HonoAgent',
  instructions: 'You are a helpful assistant.',
});

const handler = createHonoHandler({ agent });

app.post('/api/chat', handler);

export default app;
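
The exported app runs as-is on runtimes such as Bun or Cloudflare Workers. To serve it on Node.js, one option is the @hono/node-server package (an assumption about your deployment target, not something praisonai-ts requires):

import { serve } from '@hono/node-server';

// Start a Node HTTP server for the exported Hono app (assumes a Node runtime)
serve({ fetch: app.fetch, port: 3000 });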

Fastify

import Fastify from 'fastify';
import { Agent, createFastifyHandler } from 'praisonai-ts';

const fastify = Fastify();

const agent = new Agent({
  name: 'FastifyAgent',
  instructions: 'You are a helpful assistant.',
});

const handler = createFastifyHandler({ agent });

fastify.post('/api/chat', handler);

fastify.listen({ port: 3000 });

Nest.js

import { Controller, Post, Body, Res } from '@nestjs/common';
import { Response } from 'express';
import { Agent, createNestHandler } from 'praisonai-ts';

@Controller('api')
export class ChatController {
  private agent = new Agent({
    name: 'NestAgent',
    instructions: 'You are a helpful assistant.',
  });

  @Post('chat')
  async chat(@Body() body: { message: string }, @Res() res: Response) {
    const handler = createNestHandler({ agent: this.agent });
    return handler(body, res);
  }
}

Configuration Options

const handler = createExpressHandler({
  agent: agent,
  
  // Streaming
  streaming: true,           // Enable streaming responses
  
  // CORS
  cors: {
    origin: '*',
    methods: ['POST'],
  },
  
  // Rate limiting
  rateLimit: {
    windowMs: 60000,         // 1 minute
    max: 100,                // 100 requests per window
  },
  
  // Tracing
  tracing: true,             // Enable request tracing
  
  // Custom response format
  formatResponse: (result) => ({
    success: true,
    data: result,
  }),
});

Request Format

{
  "message": "Hello, how are you?",
  "sessionId": "optional-session-id",
  "metadata": {
    "userId": "user-123"
  }
}
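
As a quick check, you can send this body from a client with fetch. The sketch below assumes the Express endpoint and port from the earlier example:

const res = await fetch('http://localhost:3000/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'Hello, how are you?',
    sessionId: 'optional-session-id',
    metadata: { userId: 'user-123' },
  }),
});

// Non-streaming responses come back as a single JSON object
const data = await res.json();
console.log(data.response);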

Response Format

Non-Streaming

{
  "response": "I'm doing well, thank you!",
  "sessionId": "session-123",
  "usage": {
    "promptTokens": 10,
    "completionTokens": 8,
    "totalTokens": 18
  }
}

Streaming

Server-Sent Events (SSE) format, with each event terminated by a blank line:

data: {"type":"text","content":"I'm"}

data: {"type":"text","content":" doing"}

data: {"type":"text","content":" well"}

data: {"type":"done","usage":{"totalTokens":18}}

With Tools

const agent = new Agent({
  name: 'ToolAgent',
  instructions: 'You help with calculations.',
  tools: [
    {
      name: 'calculate',
      description: 'Perform calculations',
      execute: async ({ expression }) => eval(expression),
    },
  ],
});

const handler = createExpressHandler({
  agent,
  streaming: true,
});

Authentication Middleware

import express from 'express';

const app = express();

// Auth middleware: validateToken is your own token-verification function
app.use('/api', (req, res, next) => {
  const token = req.headers.authorization?.split(' ')[1];
  if (!token || !validateToken(token)) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  next();
});

// `handler` is created with createExpressHandler({ agent }) as shown above
app.post('/api/chat', handler);
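
validateToken is not provided by praisonai-ts. One possible implementation, assuming JWT-based auth with the jsonwebtoken package, might look like this:

import jwt from 'jsonwebtoken';

// Hypothetical helper: returns true if the bearer token is a JWT signed with our secret
function validateToken(token: string): boolean {
  try {
    jwt.verify(token, process.env.JWT_SECRET!);
    return true;
  } catch {
    return false;
  }
}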

Environment Variables

Variable         Required  Description
OPENAI_API_KEY   Yes       OpenAI API key used by the agent
PORT             No        Server port (default: 3000)
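
For example, the port can be read from the environment with a fallback to the default, using the Express app from the example above:

const port = Number(process.env.PORT ?? 3000);

app.listen(port, () => {
  console.log(`Server running on http://localhost:${port}`);
});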

Best Practices

  1. Enable CORS - Configure for your frontend domains
  2. Add authentication - Protect your endpoints
  3. Rate limit - Prevent abuse
  4. Use streaming - Better UX for long responses
  5. Log requests - Track usage and errors (a combined middleware sketch follows this list)
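
The sketch below combines CORS, rate limiting, and request logging with common third-party Express middleware. The cors, express-rate-limit, and morgan packages are assumptions, not part of praisonai-ts; the adapter's built-in cors and rateLimit options shown under Configuration Options are an alternative.

import express from 'express';
import cors from 'cors';
import rateLimit from 'express-rate-limit';
import morgan from 'morgan';

const app = express();
app.use(express.json());

// 1. CORS restricted to your frontend origin
app.use(cors({ origin: 'https://app.example.com', methods: ['POST'] }));

// 3. Rate limiting: 100 requests per minute per IP
app.use(rateLimit({ windowMs: 60_000, max: 100 }));

// 5. Request logging
app.use(morgan('combined'));

// `handler` is created with createExpressHandler({ agent }) as in the Express example
app.post('/api/chat', handler);
app.listen(3000);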