> ## Documentation Index
> Fetch the complete documentation index at: https://docs.praison.ai/llms.txt
> Use this file to discover all available pages before exploring further.

# Other Models

> Use any LiteLLM-supported model with PraisonAI Agents

## Overview

PraisonAI uses LiteLLM under the hood, supporting 100+ LLM providers. Use the format `provider/model-name` for any supported model.

## LiteLLM Provider Format

| Provider     | Format                | Example                                    |
| ------------ | --------------------- | ------------------------------------------ |
| OpenAI       | `gpt-*` or `openai/*` | `gpt-4o`, `openai/gpt-4o`                  |
| Anthropic    | `claude-*`            | `claude-sonnet-4-5`                        |
| Google       | `gemini/*`            | `gemini/gemini-2.5-flash`                  |
| Azure        | `azure/*`             | `azure/gpt-4`                              |
| AWS Bedrock  | `bedrock/*`           | `bedrock/anthropic.claude-3-5-sonnet`      |
| Vertex AI    | `vertex_ai/*`         | `vertex_ai/gemini-pro`                     |
| Hugging Face | `huggingface/*`       | `huggingface/meta-llama/Llama-2-7b`        |
| Together AI  | `together_ai/*`       | `together_ai/togethercomputer/llama-2-70b` |
| Replicate    | `replicate/*`         | `replicate/meta/llama-2-70b`               |
| Anyscale     | `anyscale/*`          | `anyscale/meta-llama/Llama-2-70b`          |

## Python (Generic Pattern)

```python
# Set the appropriate API key for your provider
# export PROVIDER_API_KEY=your-api-key
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant",
    llm="provider/model-name"  # Replace with your provider/model
)
agent.start("Hello, how can you help me?")
```

### OpenAI-Compatible Endpoints

```python
# For any OpenAI-compatible API
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant",
    llm={
        "model": "your-model-name",
        "api_base": "https://your-api-endpoint.com/v1",
        "api_key": "your-api-key"
    }
)
agent.start("What can you do?")
```

### LM Studio (Local)

```python
# LM Studio runs on localhost:1234 by default
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant",
    llm={
        "model": "local-model",
        "api_base": "http://localhost:1234/v1",
        "api_key": "not-needed"
    }
)
agent.start("Explain AI")
```

### vLLM Server

```python
# vLLM OpenAI-compatible server
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant",
    llm={
        "model": "meta-llama/Llama-2-7b-hf",
        "api_base": "http://localhost:8000/v1",
        "api_key": "not-needed"
    }
)
agent.start("What is machine learning?")
```

## CLI

```bash
# Generic pattern
python -m praisonai "Your prompt" --llm provider/model-name

# With custom endpoint
export OPENAI_API_BASE=http://localhost:1234/v1
export OPENAI_API_KEY=not-needed
python -m praisonai "Your prompt" --llm local-model

# Run agents.yaml
python -m praisonai
```

## YAML

```yaml
framework: praisonai
topic: Custom model usage
agents:
  assistant:
    role: General Assistant
    goal: Help with various tasks
    instructions: You are a helpful assistant
    llm:
      model: provider/model-name  # Replace with your provider/model
    tasks:
      help_task:
        description: Assist with the user's request
        expected_output: Helpful response
```

### Custom Endpoint YAML

```yaml
framework: praisonai
topic: Local model usage
agents:
  assistant:
    role: Local Assistant
    goal: Help with tasks using local model
    instructions: You are a helpful assistant
    llm:
      model: local-model
      api_base: http://localhost:1234/v1
      api_key: not-needed
    tasks:
      help_task:
        description: Assist with the user's request
        expected_output: Helpful response
```

## Environment Variables

Common environment variables for different providers:

```bash
# OpenAI
export OPENAI_API_KEY=your-key

# Anthropic
export ANTHROPIC_API_KEY=your-key

# Google
export GEMINI_API_KEY=your-key

# Azure
export AZURE_API_KEY=your-key
export AZURE_API_BASE=https://your-resource.openai.azure.com

# AWS Bedrock
export AWS_ACCESS_KEY_ID=your-key
export AWS_SECRET_ACCESS_KEY=your-secret
export AWS_REGION=us-east-1

# Custom OpenAI-compatible
export OPENAI_API_BASE=http://your-endpoint/v1
export OPENAI_API_KEY=your-key
```
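Because keys are read from the environment at call time, a missing key only surfaces when the agent makes its first request. A small preflight check can catch this earlier; the mapping below is a hand-written, illustrative subset (the exact variable each provider expects is documented by LiteLLM, and this helper is not part of PraisonAI):

```python
import os

# Illustrative subset of provider -> expected environment variable.
# See the LiteLLM provider docs for the authoritative mapping.
REQUIRED_KEY = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "azure": "AZURE_API_KEY",
}

def has_key(provider: str) -> bool:
    """Return True if the provider's API key is set and non-empty."""
    var = REQUIRED_KEY.get(provider)
    return var is not None and bool(os.environ.get(var))

os.environ["GEMINI_API_KEY"] = "your-key"  # placeholder for demonstration
print(has_key("gemini"))  # True
print(has_key("azure"))   # False unless AZURE_API_KEY is also set
```

Running such a check before constructing the agent turns a mid-run authentication error into an immediate, explicit failure.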

## Resources

* [LiteLLM Providers](https://docs.litellm.ai/docs/providers) - Full list of supported providers
* [LiteLLM Models](https://models.litellm.ai/) - Model discovery and pricing
