Choose the AI model that powers your agent - OpenAI, Anthropic, local models, and more.

Quick Start

1. Use Default Model

use praisonai::Agent;

// Uses gpt-4o-mini by default
let agent = Agent::new()
    .name("Assistant")
    .instructions("You are helpful")
    .build()?;
2. Choose a Model

use praisonai::Agent;

let agent = Agent::new()
    .name("Assistant")
    .model("gpt-4o")  // or "claude-3-opus", "ollama/llama3"
    .build()?;
3. Custom API Endpoint

use praisonai::Agent;

let agent = Agent::new()
    .name("Assistant")
    .model("llama3")
    .base_url("http://localhost:11434/v1")
    .build()?;

How It Works

The model string selects the provider: plain OpenAI names (gpt-4o, gpt-4o-mini) go to the default OpenAI endpoint, prefixed names such as ollama/llama3 route to that provider, and base_url overrides the endpoint entirely (for example, a local Ollama server).
Choosing a Model

| Model | Best For | Speed |
|---|---|---|
| gpt-4o-mini | Fast responses, cost effective | ⚡ Fast |
| gpt-4o | High quality, complex tasks | 🔵 Medium |
| claude-3-opus | Long documents, analysis | 🔵 Medium |
| ollama/llama3 | Privacy, offline use | 🟢 Local |
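Since the model is just a string, one common pattern is to let deployments switch models without a code change. A minimal sketch in plain Rust, assuming a hypothetical `PRAISONAI_MODEL` environment variable (not part of the documented API) with the library default as fallback:

```rust
use std::env;

// Pick the model name from an environment variable, falling back to the
// library default ("gpt-4o-mini"). `PRAISONAI_MODEL` is a name chosen for
// this example only.
fn pick_model(env_val: Option<&str>) -> String {
    env_val
        .map(str::to_string)
        .unwrap_or_else(|| "gpt-4o-mini".to_string())
}

fn main() {
    let model = pick_model(env::var("PRAISONAI_MODEL").ok().as_deref());
    println!("using model: {model}");
}
```

The resulting string can then be passed to `.model(...)` as in the quick-start examples above.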

Configuration

| Option | Type | Default | Description |
|---|---|---|---|
| model | String | gpt-4o-mini | Model name |
| api_key | String | From ENV | API key |
| base_url | String | OpenAI default | API endpoint |
| temperature | f32 | 0.7 | Randomness (0-1) |
| max_tokens | u32 | None | Max response length |
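Putting several options together, a configuration sketch; this assumes builder methods named after the options in the table (`temperature`, `max_tokens`), which are not shown in the quick-start examples, so check the crate docs for the exact signatures:

```rust
use praisonai::Agent;

// Sketch only: `temperature` and `max_tokens` builder methods are assumed
// to mirror the option names above.
let agent = Agent::new()
    .name("Assistant")
    .model("gpt-4o")
    .temperature(0.2)   // lower = more deterministic output
    .max_tokens(512)    // cap response length
    .build()?;
```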

Best Practices

- Set the OPENAI_API_KEY environment variable instead of hardcoding keys.
- Start with gpt-4o-mini: it is fast and cheap; upgrade to gpt-4o only when you need the extra quality.
- Ollama runs models locally, so no data leaves your machine.
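For example, exporting the key in your shell keeps it out of source control (the `sk-...` value is a placeholder):

```shell
# The SDK reads the key from the environment at build time.
export OPENAI_API_KEY="sk-..."   # placeholder, substitute your real key
```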