PraisonAI agents can be easily deployed as RESTful APIs, allowing you to integrate them into various applications and services. This guide covers how to deploy both single and multiple agents as APIs.

Quick Start

1. Install Dependencies

Make sure you have the required packages installed:

pip install "praisonaiagents[api]"

2. Set API Key

export OPENAI_API_KEY="your_api_key"

3. Deploy a Simple Agent API

Create a file named simple-api.py with the following code:

from praisonaiagents import Agent

agent = Agent(instructions="""You are a helpful assistant.""", llm="gpt-4o-mini")
agent.launch(path="/ask", port=3030)

4. Run the API Server

python simple-api.py

Your API will be available at http://localhost:3030/ask

Making API Requests

Once your agent is deployed, you can make POST requests to interact with it:

curl -X POST http://localhost:3030/ask \
  -H "Content-Type: application/json" \
  -d '{"message": "What is artificial intelligence?"}'

The response will be in JSON format:

{
  "response": "Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It encompasses various technologies including machine learning, natural language processing, computer vision, and robotics. AI systems can perform tasks that typically require human intelligence such as understanding natural language, recognizing patterns, solving problems, and making decisions."
}
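The same request can be made from application code. Below is a minimal Python client sketch using only the standard library; the URL and JSON shape match the curl example above, while the `build_payload` and `ask` helper names are our own:

```python
import json
import urllib.request

API_URL = "http://localhost:3030/ask"  # matches simple-api.py above

def build_payload(message: str) -> bytes:
    """Encode the JSON body the /ask endpoint expects."""
    return json.dumps({"message": message}).encode("utf-8")

def ask(message: str, url: str = API_URL) -> str:
    """POST a message to the agent and return its text response."""
    req = urllib.request.Request(
        url,
        data=build_payload(message),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["response"]

if __name__ == "__main__":
    print(ask("What is artificial intelligence?"))
```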

Multi-Agent API Deployment

You can deploy multiple agents on the same server, each with its own endpoint:

from praisonaiagents import Agent

weather_agent = Agent(
    instructions="""You are a weather agent that can provide weather information for a given city.""",
    llm="gpt-4o-mini"
)

stock_agent = Agent(
    instructions="""You are a stock market agent that can provide information about stock prices and market trends.""",
    llm="gpt-4o-mini"
)

travel_agent = Agent(
    instructions="""You are a travel agent that can provide recommendations for destinations, hotels, and activities.""",
    llm="gpt-4o-mini"
)

weather_agent.launch(path="/weather", port=3030)
stock_agent.launch(path="/stock", port=3030)
travel_agent.launch(path="/travel", port=3030) 

With this setup, you can access:

  • Weather agent at http://localhost:3030/weather
  • Stock agent at http://localhost:3030/stock
  • Travel agent at http://localhost:3030/travel
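All three endpoints can be exercised from a single script. A sketch using the standard library (the sample questions and the `endpoint_url` and `query` helper names are illustrative, not part of PraisonAI):

```python
import json
import urllib.request

BASE = "http://localhost:3030"

def endpoint_url(base: str, path: str) -> str:
    """Join the server base URL and an agent's endpoint path."""
    return f"{base}/{path.lstrip('/')}"

def query(path: str, message: str) -> str:
    """POST a message to one agent endpoint and return its response text."""
    req = urllib.request.Request(
        endpoint_url(BASE, path),
        data=json.dumps({"message": message}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    for path, question in [
        ("/weather", "What's the weather in Paris?"),
        ("/stock", "How did the market do today?"),
        ("/travel", "Suggest a weekend destination."),
    ]:
        print(f"{path}: {query(path, question)}")
```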

Production Deployment Options

For production environments, consider the following deployment options:

Docker Deployment

1. Create a Dockerfile

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 3030

CMD ["python", "api.py"]

2. Create requirements.txt

praisonaiagents[api]>=0.0.79

3. Build and Run the Docker Container

docker build -t praisonai-api .
docker run -p 3030:3030 -e OPENAI_API_KEY=your_api_key praisonai-api
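
If you prefer Docker Compose, the build-and-run step can be captured in a compose file. A sketch (the service name is our choice; the API key is read from your shell environment):

```yaml
services:
  praisonai-api:
    build: .
    ports:
      - "3030:3030"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    restart: unless-stopped
```

Then start it with `docker compose up -d`.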

Cloud Deployment

Deploying to AWS

1. Create an EC2 Instance

Launch an EC2 instance with Ubuntu or Amazon Linux. The commands below assume Ubuntu; on Amazon Linux, use dnf instead of apt.

2. Install Dependencies

sudo apt update
sudo apt install -y python3-pip
pip install "praisonaiagents[api]"

3. Configure the Security Group

Make sure to open port 3030 in your security group settings.

4. Run with systemd

Create a systemd service file for automatic startup and management.
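
A minimal unit file sketch (the service name, user, and paths are assumptions; adjust them to your instance):

```ini
# /etc/systemd/system/praisonai-api.service
[Unit]
Description=PraisonAI agent API
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/app
Environment=OPENAI_API_KEY=your_api_key
ExecStart=/usr/bin/python3 /home/ubuntu/app/api.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable and start it with `sudo systemctl enable --now praisonai-api`.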

Deploying to Google Cloud Run

1. Build the Docker Image

docker build -t gcr.io/your-project/praisonai-api .
2. Push to Container Registry

docker push gcr.io/your-project/praisonai-api

3. Deploy to Cloud Run

gcloud run deploy praisonai-api \
  --image gcr.io/your-project/praisonai-api \
  --platform managed \
  --allow-unauthenticated \
  --set-env-vars="OPENAI_API_KEY=your_api_key"

API Configuration Options

When launching your agent as an API, you can customize various parameters:

agent.launch(
    path="/custom-endpoint",  # API endpoint path
    port=8080,                # Port number
    host="0.0.0.0",           # Host address (0.0.0.0 for external access)
    debug=True,               # Enable debug mode
    cors_origins=["*"],       # CORS configuration
    api_key="your-api-key"    # Optional API key for authentication
)

Securing Your API

For production deployments, consider implementing:

  1. API Key Authentication: Require API keys for all requests
  2. Rate Limiting: Limit the number of requests per client
  3. HTTPS: Use SSL/TLS certificates for encrypted communication
  4. Input Validation: Validate all input data before processing
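
As an illustration of point 2, a per-client token bucket is one common way to implement rate limiting. A minimal sketch (not part of PraisonAI; in practice you would call `allow()` before forwarding each request, typically in a reverse proxy or middleware layer):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows short bursts up to `capacity`,
    then refills at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; return False when rate-limited."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```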

Monitoring and Scaling

For production environments, consider:

  1. Load Balancing: Distribute traffic across multiple instances
  2. Auto-Scaling: Automatically adjust resources based on demand
  3. Logging: Implement comprehensive logging for debugging
  4. Monitoring: Set up alerts for errors and performance issues
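
For point 3, Python's standard logging module is a reasonable starting point. A minimal setup sketch (the logger name and format are our choices):

```python
import logging

# Timestamped, leveled log lines for the API process.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger("praisonai-api")
logger.info("API server starting on port 3030")
```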

Features

RESTful API

Deploy your agents as standard RESTful APIs for easy integration.

Multi-Agent Support

Deploy multiple agents with different endpoints on the same server.

Customizable

Configure ports, paths, CORS settings, and more.

Production-Ready

Easily deploy to Docker, AWS, Google Cloud, or other cloud platforms.
