# Embeddings

> Documentation index: fetch the complete index at https://docs.praison.ai/llms.txt and use it to discover all available pages before exploring further.

Generate vector embeddings for semantic search, similarity matching, and RAG applications.
## Quick Start

### Generate Single Embedding
```typescript
import { embed } from 'praisonai';

const result = embed("Hello, world!");
console.log(result.embedding);  // number[]
console.log(result.dimensions); // 1536
console.log(result.model);      // 'text-embedding-3-small'
```
### Generate Batch Embeddings
```typescript
import { embeddings } from 'praisonai';

const result = embeddings([
  "First document",
  "Second document",
  "Third document"
]);
console.log(result.embeddings.length); // 3
```
## Embedding Functions

### Sync Functions
```typescript
import { embed, embedding, embeddings } from 'praisonai';

// Single text
const single = embed("Hello world");

// Alias for embed
const single2 = embedding("Hello world");

// Multiple texts
const batch = embeddings(["Text 1", "Text 2", "Text 3"]);
```
### Async Functions
```typescript
import { aembed, aembedding, aembeddings } from 'praisonai';

// Single text (async)
const single = await aembed("Hello world");

// Alias for aembed
const single2 = await aembedding("Hello world");

// Multiple texts (async)
const batch = await aembeddings(["Text 1", "Text 2", "Text 3"]);
```
## Configuration
```typescript
import { setEmbeddingConfig, getDimensions } from 'praisonai';

// Set global config
setEmbeddingConfig({
  model: 'text-embedding-3-large',
  dimensions: 3072,
  apiKey: process.env.OPENAI_API_KEY
});

// Get dimensions for a model
const dims = getDimensions('text-embedding-3-small'); // 1536
```
### Configuration Options

| Option | Type | Default | Description |
|---|---|---|---|
| `model` | `string` | `'text-embedding-3-small'` | Embedding model |
| `dimensions` | `number` | `1536` | Vector dimensions |
| `apiKey` | `string` | `undefined` | API key (uses env var if not set) |
| `baseUrl` | `string` | `undefined` | Custom API base URL |
## Supported Models

| Model | Dimensions | Provider |
|---|---|---|
| `text-embedding-3-small` | 1536 | OpenAI |
| `text-embedding-3-large` | 3072 | OpenAI |
| `text-embedding-ada-002` | 1536 | OpenAI |
| `embed-english-v3.0` | 1024 | Cohere |
| `embed-multilingual-v3.0` | 1024 | Cohere |
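The table above can also be captured as a local lookup when you want a dimension count without touching the library. This is only an illustrative sketch of the mapping; `MODEL_DIMENSIONS` and `dimensionsFor` are hypothetical names, not part of the praisonai API (which already exposes `getDimensions`).

```typescript
// Hypothetical local lookup mirroring the supported-models table above.
// Not part of the praisonai API; the library's getDimensions() covers this.
const MODEL_DIMENSIONS: Record<string, number> = {
  'text-embedding-3-small': 1536,
  'text-embedding-3-large': 3072,
  'text-embedding-ada-002': 1536,
  'embed-english-v3.0': 1024,
  'embed-multilingual-v3.0': 1024,
};

function dimensionsFor(model: string): number {
  const dims = MODEL_DIMENSIONS[model];
  if (dims === undefined) throw new Error(`Unknown embedding model: ${model}`);
  return dims;
}
```

Keeping the dimension next to the model name matters in practice: vectors from different models (or different dimension settings) are not comparable and must not be mixed in one index.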
## Similarity Functions
```typescript
import { cosineSimilarity, euclideanDistance, normalize } from 'praisonai';

const vec1 = [0.1, 0.2, 0.3];
const vec2 = [0.2, 0.3, 0.4];

// Cosine similarity (-1 to 1, higher = more similar)
const similarity = cosineSimilarity(vec1, vec2);
console.log(similarity); // ~0.99

// Euclidean distance (lower = more similar)
const distance = euclideanDistance(vec1, vec2);
console.log(distance); // ~0.17

// Normalize vector to unit length
const normalized = normalize(vec1);
```
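These helpers are standard vector math. A minimal, dependency-free sketch of the same operations (for illustration only; this is not the library's actual implementation):

```typescript
// Plain reimplementations of the similarity helpers, not praisonai internals.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function magnitude(v: number[]): number {
  return Math.sqrt(dot(v, v));
}

// Cosine similarity: dot product divided by the product of the
// magnitudes; ranges from -1 (opposite) to 1 (same direction).
function cosineSim(a: number[], b: number[]): number {
  return dot(a, b) / (magnitude(a) * magnitude(b));
}

// Euclidean distance: square root of the summed squared differences.
function euclidean(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0));
}

// Normalize: scale a vector to unit length.
function toUnit(v: number[]): number[] {
  const m = magnitude(v);
  return v.map((x) => x / m);
}
```

One useful consequence: once vectors are normalized to unit length, cosine similarity reduces to a plain dot product, which is why many vector stores normalize at insert time.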
## Result Types

### EmbeddingResult
```typescript
interface EmbeddingResult {
  embedding: number[];  // Vector array
  model: string;        // Model used
  dimensions: number;   // Vector dimensions
  usage?: {
    promptTokens: number;
    totalTokens: number;
  };
}
```
### BatchEmbeddingResult
```typescript
interface BatchEmbeddingResult {
  embeddings: number[][];  // Array of vectors
  model: string;           // Model used
  dimensions: number;      // Vector dimensions
  usage?: {
    promptTokens: number;
    totalTokens: number;
  };
}
```
## Common Patterns

### Semantic Search
```typescript
import { embed, embeddings, cosineSimilarity } from 'praisonai';

// Index documents
const docs = ["AI is amazing", "Machine learning rocks", "TypeScript is typed"];
const docVectors = embeddings(docs);

// Search
const query = embed("artificial intelligence");
const scores = docVectors.embeddings.map((vec, i) => ({
  doc: docs[i],
  score: cosineSimilarity(query.embedding, vec)
}));

// Sort by similarity
scores.sort((a, b) => b.score - a.score);
console.log(scores[0]); // Most similar document
```
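The score-and-sort step in this pattern generalizes to a small reusable helper. The sketch below is hypothetical (the `rankBySimilarity` name is not part of praisonai), and it bundles a local cosine implementation so it is self-contained; in real code you would pass vectors produced by `embed`/`embeddings`.

```typescript
// Hypothetical helper generalizing the scoring step above.
// cosine() is a local reimplementation so the sketch is self-contained.
function cosine(a: number[], b: number[]): number {
  let dotProd = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dotProd += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dotProd / (Math.sqrt(na) * Math.sqrt(nb));
}

interface Ranked { doc: string; score: number }

// Score every document vector against the query vector, best first.
function rankBySimilarity(
  queryVec: number[],
  docs: string[],
  docVecs: number[][],
): Ranked[] {
  return docs
    .map((doc, i) => ({ doc, score: cosine(queryVec, docVecs[i]) }))
    .sort((a, b) => b.score - a.score);
}
```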
### Document Clustering

```typescript
import { embeddings, cosineSimilarity } from 'praisonai';

const docs = ["Doc 1", "Doc 2", "Doc 3"];
const vectors = embeddings(docs);

// Build pairwise similarity matrix
const matrix = vectors.embeddings.map((v1) =>
  vectors.embeddings.map((v2) => cosineSimilarity(v1, v2))
);
```
### RAG Context

```typescript
import { aembed, aembeddings, cosineSimilarity } from 'praisonai';

async function findRelevantContext(query: string, chunks: string[]) {
  const queryVec = await aembed(query);
  const chunkVecs = await aembeddings(chunks);
  return chunks
    .map((chunk, i) => ({
      chunk,
      score: cosineSimilarity(queryVec.embedding, chunkVecs.embeddings[i])
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3); // Top 3 chunks
}
```
## API Reference

- **Embeddings**: Embeddings module reference
## Best Practices

- **Use batch embeddings for multiple texts.** Call `embeddings()` once with an array instead of calling `embed()` multiple times.
- **Cache embeddings for static content.** Store embeddings in a database to avoid regenerating them for unchanged content.
- **Pick the model for the job.** Use `text-embedding-3-small` for speed, `text-embedding-3-large` for accuracy.
## Related

- **Knowledge Base**: Build knowledge bases with embeddings
- **RAG Agent**: Retrieval-augmented generation