> **Documentation Index:** fetch the complete documentation index at https://docs.praison.ai/llms.txt and use it to discover all available pages before exploring further.
# Agent LLM Providers

PraisonAI TypeScript supports 60+ AI providers through AI SDK v6. Switch between providers by changing the `llm` parameter; every provider uses the same unified API.
## Supported Providers (60+)

### Core Providers

| Provider | Model Examples | Modalities | Env Variable |
|---|---|---|---|
| OpenAI | gpt-4o, gpt-4o-mini, gpt-5 | Chat, Embeddings, Image, Audio | OPENAI_API_KEY |
| Anthropic | claude-sonnet-4, claude-3-5-sonnet | Chat, Image | ANTHROPIC_API_KEY |
| Google | gemini-2.0-flash, gemini-1.5-pro | Chat, Embeddings, Image, Audio | GOOGLE_API_KEY |
| Google Vertex | gemini-pro, palm-2 | Chat, Embeddings, Image | GOOGLE_APPLICATION_CREDENTIALS |
| Azure OpenAI | gpt-4, gpt-35-turbo | Chat, Embeddings, Image | AZURE_API_KEY |
| Amazon Bedrock | claude-3, titan | Chat, Embeddings | AWS_ACCESS_KEY_ID |
### Inference Providers

| Provider | Model Examples | Modalities | Env Variable |
|---|---|---|---|
| xAI | grok-4, grok-3-fast | Chat, Image | XAI_API_KEY |
| Groq | llama-3.3-70b, mixtral-8x7b | Chat | GROQ_API_KEY |
| Fireworks | llama-v3, mixtral | Chat, Embeddings | FIREWORKS_API_KEY |
| Together.ai | llama-3, mistral-7b | Chat, Embeddings | TOGETHER_API_KEY |
| DeepInfra | llama-3, mistral | Chat, Embeddings | DEEPINFRA_API_KEY |
| Replicate | llama, stable-diffusion | Chat, Image | REPLICATE_API_TOKEN |
| Baseten | custom models | Chat | BASETEN_API_KEY |
| Hugging Face | various | Chat, Embeddings | HUGGINGFACE_API_KEY |
### Model Providers

| Provider | Model Examples | Modalities | Env Variable |
|---|---|---|---|
| Mistral | mistral-large, mistral-medium | Chat, Embeddings | MISTRAL_API_KEY |
| Cohere | command-r, command-r-plus | Chat, Embeddings | COHERE_API_KEY |
| DeepSeek | deepseek-chat, deepseek-reasoner | Chat | DEEPSEEK_API_KEY |
| Cerebras | llama3.1-8b, llama3.3-70b | Chat | CEREBRAS_API_KEY |
| Perplexity | pplx-7b, pplx-70b | Chat | PERPLEXITY_API_KEY |
### Image Generation

| Provider | Model Examples | Modalities | Env Variable |
|---|---|---|---|
| Fal | flux, stable-diffusion | Image | FAL_KEY |
| Black Forest Labs | FLUX.1 | Image | BFL_API_KEY |
| Luma | dream-machine | Image, Video | LUMA_API_KEY |
### Audio/Speech Providers

| Provider | Model Examples | Modalities | Env Variable |
|---|---|---|---|
| ElevenLabs | eleven_multilingual_v2 | Speech | ELEVENLABS_API_KEY |
| AssemblyAI | transcription | Audio | ASSEMBLYAI_API_KEY |
| Deepgram | nova-2 | Audio, Speech | DEEPGRAM_API_KEY |
| Gladia | transcription | Audio | GLADIA_API_KEY |
| LMNT | speech | Speech | LMNT_API_KEY |
| Hume | emotion | Audio | HUME_API_KEY |
| Rev.ai | transcription | Audio | REVAI_API_KEY |
### Gateway/Proxy Providers

| Provider | Description | Env Variable |
|---|---|---|
| AI Gateway | Unified gateway | AI_GATEWAY_API_KEY |
| OpenRouter | Multi-provider routing | OPENROUTER_API_KEY |
| Portkey | AI gateway | PORTKEY_API_KEY |
| Helicone | Observability proxy | HELICONE_API_KEY |
| Cloudflare Workers AI | Edge inference | CLOUDFLARE_API_TOKEN |
### Local/Self-hosted

| Provider | Description | Env Variable |
|---|---|---|
| Ollama | Local models | OLLAMA_BASE_URL |
| LM Studio | Local inference | LM_STUDIO_BASE_URL |
| NVIDIA NIM | Enterprise local | NVIDIA_API_KEY |
| OpenAI Compatible | Any OpenAI-compatible API | OPENAI_COMPATIBLE_API_KEY |
### Regional/Specialized

| Provider | Description | Env Variable |
|---|---|---|
| Qwen (Alibaba) | Chinese LLM | DASHSCOPE_API_KEY |
| Zhipu AI | GLM models | ZHIPU_API_KEY |
| MiniMax | Chinese provider | MINIMAX_API_KEY |
| Spark (iFlytek) | Chinese provider | SPARK_API_KEY |
| SambaNova | Enterprise | SAMBANOVA_API_KEY |
### Embedding Specialists

| Provider | Description | Env Variable |
|---|---|---|
| Voyage AI | High-quality embeddings | VOYAGE_API_KEY |
| Jina AI | Embeddings & search | JINA_API_KEY |
| Mixedbread | Embeddings | MIXEDBREAD_API_KEY |
### Memory/Agent Providers

| Provider | Description | Env Variable |
|---|---|---|
| Mem0 | Memory layer | MEM0_API_KEY |
| Letta | Agent memory | LETTA_API_KEY |
### Enterprise/Cloud

| Provider | Description | Env Variable |
|---|---|---|
| Azure AI | Azure services | AZURE_API_KEY |
| SAP AI Core | SAP integration | SAP_AI_CORE_KEY |
| Heroku | Heroku AI | HEROKU_API_KEY |
| Anthropic Vertex | Claude via Vertex | GOOGLE_APPLICATION_CREDENTIALS |
## Agent with Different Models

```typescript
import { Agent } from 'praisonai';

// OpenAI (default)
const openaiAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'gpt-4o-mini'
});

// Anthropic Claude
const claudeAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'anthropic/claude-3-5-sonnet'
});

// Google Gemini
const geminiAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'google/gemini-2.0-flash'
});

// All work the same way
await openaiAgent.chat('Hello!');
await claudeAgent.chat('Hello!');
await geminiAgent.chat('Hello!');
```
## Multi-Agent with Mixed Providers

Use different models for different agent roles:

```typescript
import { Agent, AgentTeam } from 'praisonai';

// Fast model for quick tasks
const triageAgent = new Agent({
  name: 'Triage',
  instructions: 'Quickly categorize incoming requests.',
  llm: 'gpt-4o-mini' // Fast and cheap
});

// Powerful model for complex reasoning
const analysisAgent = new Agent({
  name: 'Analyst',
  instructions: 'Perform deep analysis of complex problems.',
  llm: 'anthropic/claude-sonnet-4' // Strong reasoning
});

// Creative model for content
const writerAgent = new Agent({
  name: 'Writer',
  instructions: 'Write engaging content.',
  llm: 'gpt-4o' // Good for creative tasks
});

const agents = new AgentTeam([triageAgent, analysisAgent, writerAgent]);
await agents.start();
```
## Agent Model Selection by Task

```typescript
import { Agent } from 'praisonai';

function createAgentForTask(taskType: string) {
  const modelMap: Record<string, string> = {
    quick: 'gpt-4o-mini',
    reasoning: 'anthropic/claude-sonnet-4',
    creative: 'gpt-4o',
    code: 'anthropic/claude-3-5-sonnet',
    multimodal: 'google/gemini-2.0-flash'
  };
  return new Agent({
    instructions: `You handle ${taskType} tasks.`,
    llm: modelMap[taskType] || 'gpt-4o-mini'
  });
}

const codeAgent = createAgentForTask('code');
await codeAgent.chat('Write a function to sort an array');
```
## Agent with Streaming

```typescript
import { Agent } from 'praisonai';

const agent = new Agent({
  instructions: 'You tell stories.',
  llm: 'gpt-4o',
  stream: true // Enable streaming
});

// The response streams to the console as it is generated
await agent.chat('Tell me a short story about a robot');
```
## Environment-Based Model Selection

```typescript
import { Agent } from 'praisonai';

// Model from an environment variable
const agent = new Agent({
  instructions: 'You are helpful.',
  llm: process.env.PRAISONAI_MODEL || 'gpt-4o-mini'
});

// Or use different models per environment
const model = process.env.NODE_ENV === 'production'
  ? 'gpt-4o'       // Better quality in prod
  : 'gpt-4o-mini'; // Cheaper in dev

const prodAgent = new Agent({
  instructions: 'You are helpful.',
  llm: model
});
```
## Model String Formats

| Format | Example |
|---|---|
| Model only | gpt-4o-mini |
| Provider/Model | openai/gpt-4o |
| Anthropic | anthropic/claude-3-5-sonnet |
| Google | google/gemini-2.0-flash |
| xAI | xai/grok-3 |
| Groq | groq/llama-3.3-70b-versatile |
| Mistral | mistral/mistral-large-latest |
| DeepSeek | deepseek/deepseek-chat |
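As the table shows, a bare model name defaults to OpenAI, while a `provider/model` string selects the provider explicitly. The sketch below illustrates how such a string could be split; `parseModelString` is a hypothetical helper for illustration, not a PraisonAI export:

```typescript
// Illustrative sketch only: how a "provider/model" string might be parsed.
interface ResolvedModel {
  provider: string;
  model: string;
}

function parseModelString(llm: string): ResolvedModel {
  const slash = llm.indexOf('/');
  if (slash === -1) {
    // No prefix: assume the default provider (OpenAI)
    return { provider: 'openai', model: llm };
  }
  // Split on the first slash only, so model names may contain slashes
  return {
    provider: llm.slice(0, slash),
    model: llm.slice(slash + 1)
  };
}

// parseModelString('gpt-4o-mini') → { provider: 'openai', model: 'gpt-4o-mini' }
// parseModelString('xai/grok-3')  → { provider: 'xai', model: 'grok-3' }
```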
## Provider Aliases

Use short aliases for convenience:

| Alias | Provider |
|---|---|
| oai | openai |
| claude | anthropic |
| gemini | google |
| grok | xai |
| vertex | google-vertex |
| aws, bedrock | amazon-bedrock |
| together | togetherai |
| flux, bfl | black-forest-labs |
| local, ollama | ollama |
| nim, nvidia | nvidia-nim |
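Conceptually, alias resolution is a lookup that maps a short name to its canonical provider before the model string is dispatched. The sketch below is illustrative only; the real mapping lives inside the library:

```typescript
// Illustrative sketch of alias resolution; not PraisonAI's actual implementation.
const PROVIDER_ALIASES: Record<string, string> = {
  oai: 'openai',
  claude: 'anthropic',
  gemini: 'google',
  grok: 'xai',
  vertex: 'google-vertex',
  aws: 'amazon-bedrock',
  bedrock: 'amazon-bedrock',
  together: 'togetherai',
  flux: 'black-forest-labs',
  bfl: 'black-forest-labs',
  local: 'ollama',
  nim: 'nvidia-nim',
  nvidia: 'nvidia-nim'
};

function canonicalProvider(name: string): string {
  // Canonical names pass through unchanged
  return PROVIDER_ALIASES[name] ?? name;
}

// canonicalProvider('claude') → 'anthropic'
// canonicalProvider('openai') → 'openai' (unchanged)
```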
## OpenAI-Compatible Providers

Use any OpenAI-compatible API:

```typescript
import { Agent } from 'praisonai';

// Local LM Studio
const lmStudioAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'openai-compatible/local-model',
  llmConfig: {
    baseUrl: 'http://localhost:1234/v1',
    apiKey: 'not-needed'
  }
});

// Custom OpenAI-compatible endpoint
const customAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'openai-compatible/my-model',
  llmConfig: {
    baseUrl: process.env.CUSTOM_API_BASE,
    apiKey: process.env.CUSTOM_API_KEY
  }
});
```
## Local Providers (Ollama, LM Studio)

```typescript
import { Agent } from 'praisonai';

// Ollama (local)
const ollamaAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'ollama/llama3.2',
  llmConfig: {
    baseUrl: 'http://localhost:11434'
  }
});

// LM Studio
const lmStudioAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'lm-studio/local-model',
  llmConfig: {
    baseUrl: 'http://localhost:1234/v1'
  }
});

// NVIDIA NIM
const nimAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'nvidia-nim/llama-3.1-8b-instruct'
});
```
## Agent with Custom Provider Config

For advanced use cases:

```typescript
import { Agent, createProvider } from 'praisonai';

// Create a custom provider instance with retry and timeout options
const customProvider = createProvider('openai/gpt-4o', {
  maxRetries: 3,
  timeout: 60000
});

// In most cases a simple model string is sufficient;
// reach for createProvider only when you need the extra options above
const agent = new Agent({
  instructions: 'You are helpful.',
  llm: 'gpt-4o'
});
await agent.chat('Hello!');
```
## Custom Provider Extension

Register your own provider:

```typescript
import { Agent, registerProvider, BaseProvider } from 'praisonai';

class MyCustomProvider extends BaseProvider {
  async generateText(options) {
    // Your implementation
    return { text: 'response', usage: { totalTokens: 10 } };
  }
}

// Register globally
registerProvider('my-provider', MyCustomProvider);

// Use in an Agent
const agent = new Agent({
  instructions: 'You are helpful.',
  llm: 'my-provider/my-model'
});
```
## Environment Variables

```bash
# Core providers
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_API_KEY=AIza...

# Inference providers
export XAI_API_KEY=xai-...
export GROQ_API_KEY=gsk_...
export TOGETHER_API_KEY=...
export FIREWORKS_API_KEY=...

# Local providers
export OLLAMA_BASE_URL=http://localhost:11434
export LM_STUDIO_BASE_URL=http://localhost:1234/v1

# Set the default model
export PRAISONAI_MODEL=openai/gpt-4o-mini
```