# Documentation Index

Fetch the complete documentation index at: https://docs.praison.ai/llms.txt
Use this file to discover all available pages before exploring further.
# Adapters Module

The Adapters module provides concrete implementations of knowledge base components, including readers, vector stores, retrievers, and rerankers.
## Import

```python
from praisonai.adapters import (
    # Readers
    AutoReader, TextReader, MarkItDownReader, DirectoryReader,
    # Vector Stores
    ChromaVectorStore,
    # Retrievers
    BasicRetriever, FusionRetriever,
    # Rerankers
    LLMReranker
)
```
## Quick Example

```python
from praisonai.adapters import AutoReader, ChromaVectorStore, BasicRetriever

# Load documents
reader = AutoReader()
docs = reader.load("./documents/")

# Store in vector database
store = ChromaVectorStore(namespace="my_docs")
store.add(
    texts=[d.content for d in docs],
    embeddings=get_embeddings([d.content for d in docs]),
    metadatas=[d.metadata for d in docs]
)

# Retrieve
retriever = BasicRetriever(
    vector_store=store,
    embedding_fn=get_embedding
)
results = retriever.retrieve("search query", top_k=5)
```
## Features
- Readers: Load documents from files, directories, URLs, and glob patterns
- Vector Stores: Store and query document embeddings (ChromaDB, Pinecone)
- Retrievers: Find relevant documents (Basic, Fusion, Recursive, AutoMerge)
- Rerankers: Improve result relevance (LLM, CrossEncoder, Cohere)
## Module Structure

```
praisonai/adapters/
├── __init__.py        # Lazy loading exports
├── readers.py         # Document readers
├── vector_stores.py   # Vector store adapters
├── retrievers.py      # Retrieval strategies
└── rerankers.py       # Reranking implementations
```
## Available Components

### Readers

| Class | Description |
|---|---|
| AutoReader | Automatic source detection and routing |
| TextReader | Plain text files (.txt, .log) |
| MarkItDownReader | Rich documents (PDF, DOCX, etc.) |
| DirectoryReader | Recursive directory loading |
| GlobReader | Glob pattern matching |
| URLReader | Web page content |
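To make the routing idea concrete, here is a hypothetical sketch of extension-based dispatch in the spirit of AutoReader. The `ROUTES` table, the URL/directory checks, and the fallback choice are illustrative assumptions, not the library's actual selection logic.

```python
from pathlib import Path

# Illustrative mapping from file extension to reader class name
ROUTES = {
    ".txt": "TextReader",
    ".log": "TextReader",
    ".pdf": "MarkItDownReader",
    ".docx": "MarkItDownReader",
}

def pick_reader(source: str) -> str:
    """Pick a reader name for a source string (toy version of auto-routing)."""
    if source.startswith(("http://", "https://")):
        return "URLReader"
    path = Path(source)
    if path.suffix == "":
        # Extension-less sources are treated as directories here
        return "DirectoryReader"
    return ROUTES.get(path.suffix.lower(), "MarkItDownReader")
```

In practice you rarely need this logic yourself; it is what lets a single `AutoReader().load(...)` call handle files, directories, and URLs uniformly.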
### Vector Stores

| Class | Description | Requirements |
|---|---|---|
| ChromaVectorStore | Local persistent storage | chromadb |
| PineconeVectorStore | Cloud vector database | pinecone |
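Both adapters expose the same add/query shape. The toy in-memory version below illustrates that contract using cosine similarity; it is a sketch only, and the real adapters persist data and delegate similarity search to their backends.

```python
import math

class InMemoryVectorStore:
    """Toy store illustrating the add/query contract (illustrative only)."""

    def __init__(self):
        self._rows = []  # list of (text, embedding, metadata)

    def add(self, texts, embeddings, metadatas=None):
        metadatas = metadatas or [{} for _ in texts]
        self._rows.extend(zip(texts, embeddings, metadatas))

    def query(self, embedding, top_k=5):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0

        # Rank stored rows by similarity to the query embedding
        scored = sorted(self._rows, key=lambda r: cosine(embedding, r[1]), reverse=True)
        return scored[:top_k]
```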
### Retrievers

| Class | Description |
|---|---|
| BasicRetriever | Simple vector similarity |
| FusionRetriever | Multi-query with RRF |
| RecursiveRetriever | Depth-limited expansion |
| AutoMergeRetriever | Adjacent chunk merging |
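FusionRetriever merges the rankings produced by multiple generated queries, and Reciprocal Rank Fusion (RRF) is the standard way to do that. A minimal sketch of the fusion step (the constant `k=60` is the value from the original RRF paper; praisonai's actual default may differ):

```python
def rrf_merge(rankings, k=60):
    """Fuse several ranked lists of document ids into one ranking.

    Each document's fused score is the sum of 1 / (k + rank) over every
    list it appears in, so items ranked highly by several queries rise
    to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```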
### Rerankers

| Class | Description | Requirements |
|---|---|---|
| LLMReranker | LLM-based scoring | OpenAI/Anthropic API |
| CrossEncoderReranker | Neural reranking | sentence-transformers |
| CohereReranker | Cohere Rerank API | cohere |
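All three rerankers share the same basic contract: score every candidate against the query, then keep the top-k. The sketch below substitutes a toy token-overlap scorer for the LLM or cross-encoder; `overlap_score` is an illustrative stand-in, not part of praisonai.

```python
def overlap_score(query: str, text: str) -> float:
    """Toy relevance score: fraction of query tokens present in the text."""
    q_tokens = set(query.lower().split())
    t_tokens = set(text.lower().split())
    return len(q_tokens & t_tokens) / max(len(q_tokens), 1)

def rerank(query, texts, score_fn=overlap_score, top_k=5):
    """Score every candidate and return the top_k as (score, text) pairs."""
    scored = sorted(((score_fn(query, t), t) for t in texts), reverse=True)
    return scored[:top_k]
```

Swapping `score_fn` for an LLM call or a cross-encoder forward pass is what distinguishes the concrete reranker classes.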
## Lazy Loading

All adapters use lazy loading to minimize import time:

```python
# Only loads when accessed
from praisonai.adapters import ChromaVectorStore  # Fast import

# Actual loading happens on first use
store = ChromaVectorStore()  # chromadb loaded here
```
## Example: Full RAG Pipeline

```python
from praisonai.adapters import (
    AutoReader,
    ChromaVectorStore,
    FusionRetriever,
    LLMReranker
)
from praisonaiagents import Agent

# 1. Load documents
reader = AutoReader()
docs = reader.load("./knowledge_base/")

# 2. Store with embeddings
store = ChromaVectorStore(namespace="kb")
store.add(
    texts=[d.content for d in docs],
    embeddings=get_embeddings([d.content for d in docs])
)

# 3. Create retriever with fusion
agent = Agent(instructions="Query assistant")
retriever = FusionRetriever(
    vector_store=store,
    embedding_fn=get_embedding,
    llm=agent,
    num_queries=3
)

# 4. Create reranker
reranker = LLMReranker(model="gpt-4o-mini")

# 5. Query pipeline
query = "How to deploy Python apps?"
results = retriever.retrieve(query, top_k=20)
reranked = reranker.rerank(query, [r.text for r in results], top_k=5)

for r in reranked:
    print(f"Score: {r.score:.3f} - {r.text[:100]}...")
```
## CLI Integration

The adapters power the `praisonai knowledge` CLI commands:

```bash
# Add documents (uses readers)
praisonai knowledge add ./docs/

# Query (uses vector store + retriever + reranker)
praisonai knowledge query "search query" \
  --vector-store chroma \
  --retrieval fusion \
  --reranker llm
```