
Ollama Provider

Run models locally with Ollama.

Environment Variables

export OLLAMA_BASE_URL=http://localhost:11434
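The provider is assumed to read this variable to locate the Ollama server. As a minimal sketch, you can also supply the same default in-process before creating any agents (whether praisonai reads the variable lazily is an assumption; the URL and port come from the export above):

// Sketch: fall back to the default local Ollama endpoint when the
// variable is unset. Assumes the provider reads OLLAMA_BASE_URL
// at agent-creation time rather than module load.
process.env.OLLAMA_BASE_URL ??= 'http://localhost:11434';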

Supported Modalities

Modality     Supported
Text/Chat    Yes
Embeddings   Yes
Tools        Yes

Prerequisites

  1. Install Ollama: https://ollama.ai
  2. Pull a model: ollama pull llama3.2
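To confirm both steps succeeded, you can query Ollama's local REST API, which lists pulled models at GET /api/tags. A minimal sketch (the endpoint and default port are Ollama's; the global fetch call assumes Node 18+):

// Query the local Ollama server for the models that have been pulled.
// GET /api/tags returns { models: [{ name, ... }, ...] }.
const base = process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434';
const res = await fetch(`${base}/api/tags`);
if (!res.ok) throw new Error(`Ollama server not reachable at ${base}`);
const { models } = await res.json();
console.log(models.map((m: { name: string }) => m.name)); // expect 'llama3.2:latest'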

Quick Start

import { Agent } from 'praisonai';

// Point the agent at a locally pulled Ollama model.
const agent = new Agent({
  name: 'LocalAgent',
  instructions: 'You are a helpful assistant.',
  llm: 'ollama/llama3.2' // provider/model: any pulled Ollama model works
});

const response = await agent.chat('Hello!');
console.log(response);

Available Models

Model        Description
llama3.2     Llama 3.2
llama3.1     Llama 3.1
mistral      Mistral 7B
codellama    Code Llama
phi3         Phi-3
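Any model in the table can be used by swapping the suffix of the llm string. A short sketch reusing the Quick Start pattern, assuming Mistral has been pulled with ollama pull mistral:

import { Agent } from 'praisonai';

// Same setup as the Quick Start, pointed at Mistral 7B instead.
const mistralAgent = new Agent({
  name: 'MistralAgent',
  instructions: 'You are a concise assistant.',
  llm: 'ollama/mistral'
});

console.log(await mistralAgent.chat('Summarize what Ollama does in one sentence.'));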