Overview

PraisonAI uses LiteLLM under the hood, which supports 100+ LLM providers. To use any supported model, pass it in the format provider/model-name.

LiteLLM Provider Format

| Provider | Format | Example |
| --- | --- | --- |
| OpenAI | gpt-* or openai/* | gpt-4o, openai/gpt-4o |
| Anthropic | claude-* | claude-sonnet-4-5 |
| Google | gemini/* | gemini/gemini-2.5-flash |
| Azure | azure/* | azure/gpt-4 |
| AWS Bedrock | bedrock/* | bedrock/anthropic.claude-3-5-sonnet |
| Vertex AI | vertex_ai/* | vertex_ai/gemini-pro |
| Hugging Face | huggingface/* | huggingface/meta-llama/Llama-2-7b |
| Together AI | together_ai/* | together_ai/togethercomputer/llama-2-70b |
| Replicate | replicate/* | replicate/meta/llama-2-70b |
| Anyscale | anyscale/* | anyscale/meta-llama/Llama-2-70b |
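
The strings in the table drop straight into the llm argument of an Agent. A minimal sketch, assuming the GEMINI_API_KEY and ANTHROPIC_API_KEY environment variables are already set:

# export GEMINI_API_KEY=your-key
# export ANTHROPIC_API_KEY=your-key
from praisonaiagents import Agent

# Google Gemini via the gemini/ prefix
gemini_agent = Agent(
    instructions="You are a helpful assistant",
    llm="gemini/gemini-2.5-flash"
)

# Anthropic Claude via the bare claude-* model name
claude_agent = Agent(
    instructions="You are a helpful assistant",
    llm="claude-sonnet-4-5"
)

gemini_agent.start("Hello, how can you help me?")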

Python (Generic Pattern)

# Set the appropriate API key for your provider
# export PROVIDER_API_KEY=your-api-key
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant",
    llm="provider/model-name"  # Replace with your provider/model
)
agent.start("Hello, how can you help me?")

OpenAI-Compatible Endpoints

# For any OpenAI-compatible API
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant",
    llm={
        "model": "your-model-name",
        "api_base": "https://your-api-endpoint.com/v1",
        "api_key": "your-api-key"
    }
)
agent.start("What can you do?")

LM Studio (Local)

# LM Studio runs on localhost:1234 by default
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant",
    llm={
        "model": "local-model",
        "api_base": "http://localhost:1234/v1",
        "api_key": "not-needed"
    }
)
agent.start("Explain AI")

vLLM Server

# vLLM OpenAI-compatible server
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant",
    llm={
        "model": "meta-llama/Llama-2-7b-hf",
        "api_base": "http://localhost:8000/v1",
        "api_key": "not-needed"
    }
)
agent.start("What is machine learning?")

CLI

# Generic pattern
python -m praisonai "Your prompt" --llm provider/model-name

# With custom endpoint
export OPENAI_API_BASE=http://localhost:1234/v1
export OPENAI_API_KEY=not-needed
python -m praisonai "Your prompt" --llm local-model

# Run agents.yaml
python -m praisonai

YAML

framework: praisonai
topic: Custom model usage
agents:
  assistant:
    role: General Assistant
    goal: Help with various tasks
    instructions: You are a helpful assistant
    llm:
      model: provider/model-name  # Replace with your provider/model
    tasks:
      help_task:
        description: Assist with the user's request
        expected_output: Helpful response

Custom Endpoint YAML

framework: praisonai
topic: Local model usage
agents:
  assistant:
    role: Local Assistant
    goal: Help with tasks using local model
    instructions: You are a helpful assistant
    llm:
      model: local-model
      api_base: http://localhost:1234/v1
      api_key: not-needed
    tasks:
      help_task:
        description: Assist with the user's request
        expected_output: Helpful response

Environment Variables

Common environment variables for different providers:
# OpenAI
export OPENAI_API_KEY=your-key

# Anthropic
export ANTHROPIC_API_KEY=your-key

# Google
export GEMINI_API_KEY=your-key

# Azure
export AZURE_API_KEY=your-key
export AZURE_API_BASE=https://your-resource.openai.azure.com

# AWS Bedrock
export AWS_ACCESS_KEY_ID=your-key
export AWS_SECRET_ACCESS_KEY=your-secret
export AWS_REGION=us-east-1

# Custom OpenAI-compatible
export OPENAI_API_BASE=http://your-endpoint/v1
export OPENAI_API_KEY=your-key
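
If you prefer to keep configuration in Python rather than your shell, the same variables can be set with os.environ before constructing an agent; a small sketch, equivalent to the export commands above:

import os
from praisonaiagents import Agent

# Equivalent to `export ANTHROPIC_API_KEY=your-key` for the current process
os.environ["ANTHROPIC_API_KEY"] = "your-key"

agent = Agent(
    instructions="You are a helpful assistant",
    llm="claude-sonnet-4-5"
)
agent.start("Hello, how can you help me?")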

Resources