Documentation Index

Fetch the complete documentation index at: https://docs.praison.ai/llms.txt

Use this file to discover all available pages before exploring further.

Failover

Failover automatically switches to backup providers when the primary fails.

Quick Start

1. Create Failover-Ready Agent

use praisonai::Agent;

// Primary agent with fast model
let primary = Agent::new()
    .name("Primary Assistant")
    .model("gpt-4o")
    .build()?;

// Backup agent with different provider
let backup = Agent::new()
    .name("Backup Assistant")
    .model("claude-3-sonnet")
    .build()?;

// Implement failover at the application level.
// The agents are passed in as parameters so the function
// does not depend on outer local bindings.
async fn chat_with_failover(primary: &Agent, backup: &Agent, query: &str) -> Result<String> {
    match primary.chat(query).await {
        Ok(response) => Ok(response),
        // Primary failed: retry the same query against the backup provider
        Err(_) => backup.chat(query).await,
    }
}
2. Local Fallback

use praisonai::Agent;

// Cloud-first, local fallback
let cloud = Agent::new()
    .name("Cloud Assistant")
    .model("gpt-4o")
    .build()?;

let local = Agent::new()
    .name("Local Assistant")
    .model("ollama/llama3")
    .base_url("http://localhost:11434")
    .build()?;

// Try cloud, fall back to local if unavailable
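The cloud-first pattern can be sketched as a small combinator: try the cloud call, and on any error retry against the local model. The helper below is a self-contained illustration, not part of the praisonai API; the closures stand in for `Agent::chat` calls.

```rust
// Sketch of a cloud-first call with a local fallback.
// `cloud_first` is a hypothetical helper; the closures simulate providers.
fn cloud_first(
    cloud: impl Fn(&str) -> Result<String, String>,
    local: impl Fn(&str) -> Result<String, String>,
    query: &str,
) -> Result<String, String> {
    // Try the cloud provider; on any error, fall back to the local model
    cloud(query).or_else(|_| local(query))
}

fn main() {
    // Simulate a cloud outage so the local model answers
    let cloud = |_q: &str| -> Result<String, String> { Err("service unavailable".into()) };
    let local = |q: &str| -> Result<String, String> { Ok(format!("local answer to: {q}")) };
    println!("{:?}", cloud_first(cloud, local, "hello"));
}
```

`Result::or_else` keeps the success value untouched and only runs the fallback closure on the error path, which is exactly the failover semantics described above.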

When Failover Triggers

| Trigger | Action |
| --- | --- |
| API timeout | Try next provider |
| Rate limit | Try next provider |
| Service down | Try next provider |
| All fail | Return error |
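The trigger table amounts to a loop over an ordered provider chain: any error moves to the next entry, and the last error is returned once every provider has failed. The sketch below models this under stated assumptions; `Provider`, the chain names, and the string error type are illustrative, not praisonai types.

```rust
// A provider is anything callable that may fail; string errors stand in
// for timeouts, rate limits, and outages.
type Provider<'a> = &'a dyn Fn(&str) -> Result<String, String>;

// Walk the chain in order; return the first success,
// or the last error once all providers have failed.
fn chat_with_fallback(chain: &[(&str, Provider<'_>)], query: &str) -> Result<String, String> {
    let mut last_err = String::from("no providers configured");
    for (name, call) in chain {
        match call(query) {
            Ok(response) => return Ok(response),
            Err(e) => last_err = format!("{name}: {e}"),
        }
    }
    Err(last_err)
}

fn main() {
    // Simulate a rate-limited primary and a healthy backup
    let primary = |_q: &str| -> Result<String, String> { Err("429 rate limited".into()) };
    let backup = |q: &str| -> Result<String, String> { Ok(format!("backup answered: {q}")) };
    let chain: &[(&str, Provider<'_>)] = &[("gpt-4o", &primary), ("claude-3-sonnet", &backup)];
    println!("{:?}", chat_with_fallback(chain, "hello"));
}
```

Keeping the provider name in the error makes the "all fail" case diagnosable: the returned message tells you which provider failed last and why.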

Best Practices

- Put your best provider first and the cheapest backup last.
- Add Ollama as a last resort for offline resilience.
