## Documentation Index

Fetch the complete documentation index at: https://docs.praison.ai/llms.txt

Use this file to discover all available pages before exploring further.
## Overview

Ollama lets you run embedding models locally on your own machine, with no API costs.
## Quick Start

```python
from praisonaiagents import embedding

result = embedding(
    input="Hello world",
    model="ollama/nomic-embed-text"
)

print(f"Dimensions: {len(result.embeddings[0])}")
```
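Once you have embedding vectors, a common next step is comparing them with cosine similarity. A minimal pure-Python sketch (the vectors below are toy stand-ins for real `result.embeddings` output, which would be 768-dimensional for nomic-embed-text):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors; in practice pass result.embeddings[0] and result.embeddings[1].
v1 = [0.1, 0.2, 0.3]
v2 = [0.2, 0.1, 0.3]
print(cosine_similarity(v1, v2))
```

Values close to 1.0 indicate semantically similar inputs; values near 0 indicate unrelated ones.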
## CLI Usage

```bash
praisonai embed "Hello world" --model ollama/nomic-embed-text
```
## Setup

- Install Ollama: https://ollama.ai
- Pull an embedding model:

```bash
ollama pull nomic-embed-text
```
## Available Models

| Model | Dimensions | Size |
|---|---|---|
| ollama/nomic-embed-text | 768 | 274MB |
| ollama/mxbai-embed-large | 1024 | 669MB |
| ollama/all-minilm | 384 | 45MB |
| ollama/snowflake-arctic-embed | 1024 | 669MB |
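The table shows the usual trade-off: more dimensions generally means better retrieval quality but a larger download and more storage per vector. A small helper (hypothetical, not part of praisonaiagents) that encodes the table above and picks the smallest model meeting a dimension requirement:

```python
# Metadata copied from the table above.
MODELS = {
    "ollama/nomic-embed-text": {"dims": 768, "size_mb": 274},
    "ollama/mxbai-embed-large": {"dims": 1024, "size_mb": 669},
    "ollama/all-minilm": {"dims": 384, "size_mb": 45},
    "ollama/snowflake-arctic-embed": {"dims": 1024, "size_mb": 669},
}

def smallest_model(min_dims):
    # Return the smallest-download model with at least min_dims dimensions,
    # or None if no model qualifies.
    candidates = [(m["size_mb"], name) for name, m in MODELS.items()
                  if m["dims"] >= min_dims]
    return min(candidates)[1] if candidates else None

print(smallest_model(512))  # -> ollama/nomic-embed-text
```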
## Custom API Base

```python
from praisonaiagents import embedding

result = embedding(
    input="Hello world",
    model="ollama/nomic-embed-text",
    api_base="http://localhost:11434"
)
```
## Batch Embeddings

```python
from praisonaiagents import embedding

texts = ["Document 1", "Document 2", "Document 3"]

result = embedding(
    input=texts,
    model="ollama/nomic-embed-text"
)

print(f"Generated {len(result.embeddings)} embeddings")
```
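A typical use of batch embeddings is ranking documents against a query by similarity. A self-contained sketch with toy vectors standing in for real `result.embeddings` output (`rank` is a hypothetical helper, not a praisonaiagents function):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def rank(query_vec, doc_vecs):
    # Return document indices sorted by descending similarity to the query.
    scores = [(cosine(query_vec, v), i) for i, v in enumerate(doc_vecs)]
    return [i for _, i in sorted(scores, reverse=True)]

# Toy vectors; in practice embed the query and documents with the same model.
docs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(rank([1.0, 0.1], docs))  # -> [0, 2, 1]: most similar index first
```

Because Ollama runs locally, embedding a query per search costs nothing beyond local compute.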