
Overview

Ollama allows you to run embedding models locally on your machine with no API costs.

Quick Start

```python
from praisonaiagents import embedding

result = embedding(
    input="Hello world",
    model="ollama/nomic-embed-text"
)
print(f"Dimensions: {len(result.embeddings[0])}")
```

CLI Usage

```shell
praisonai embed "Hello world" --model ollama/nomic-embed-text
```

Setup

  1. Install Ollama: https://ollama.ai
  2. Pull an embedding model:

```shell
ollama pull nomic-embed-text
```

Available Models

| Model | Dimensions | Size |
|---|---|---|
| ollama/nomic-embed-text | 768 | 274 MB |
| ollama/mxbai-embed-large | 1024 | 669 MB |
| ollama/all-minilm | 384 | 45 MB |
| ollama/snowflake-arctic-embed | 1024 | 669 MB |
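The dimension column matters when you configure downstream storage: a vector store index must be created with the same vector size the model produces. A minimal lookup sketch (the figures come from the table above; the helper itself is hypothetical, not part of praisonaiagents):

```python
# Embedding dimensions per model, as listed in the table above.
MODEL_DIMENSIONS = {
    "ollama/nomic-embed-text": 768,
    "ollama/mxbai-embed-large": 1024,
    "ollama/all-minilm": 384,
    "ollama/snowflake-arctic-embed": 1024,
}

def dimension_for(model: str) -> int:
    """Return the vector size to use when creating a vector store index."""
    return MODEL_DIMENSIONS[model]

print(dimension_for("ollama/all-minilm"))  # 384
```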

Custom API Base

```python
from praisonaiagents import embedding

result = embedding(
    input="Hello world",
    model="ollama/nomic-embed-text",
    api_base="http://localhost:11434"
)
```
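If your Ollama server runs on a non-default host or port, you can resolve the base URL from the environment instead of hard-coding it. A small sketch (`OLLAMA_HOST` is the environment variable the Ollama server itself honors; passing the resolved value as `api_base` is an assumption about your deployment, not a praisonaiagents requirement):

```python
import os

# Resolve the Ollama base URL from the environment,
# falling back to Ollama's default port.
api_base = os.getenv("OLLAMA_HOST", "http://localhost:11434")
print(api_base)
```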

Batch Embeddings

```python
from praisonaiagents import embedding

texts = ["Document 1", "Document 2", "Document 3"]
result = embedding(
    input=texts,
    model="ollama/nomic-embed-text"
)
print(f"Generated {len(result.embeddings)} embeddings")
```
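Batch embeddings are typically used for semantic search: embed your documents once, embed the query, and rank documents by cosine similarity. A self-contained sketch of the ranking step (the toy vectors stand in for entries of `result.embeddings`; no praisonaiagents calls are made here):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 2-D vectors standing in for real embeddings.
docs = {"doc1": [1.0, 0.0], "doc2": [0.7, 0.7]}
query = [1.0, 0.0]

# Pick the document whose embedding is most similar to the query's.
best = max(docs, key=lambda name: cosine_similarity(docs[name], query))
print(best)  # doc1
```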