Documentation Index

Fetch the complete documentation index at: https://docs.praison.ai/llms.txt

Use this file to discover all available pages before exploring further.

Agent Memory

Memory gives your Agents the ability to recall relevant information from past interactions. Unlike sessions (which store sequential history), memory enables semantic search: finding information by meaning, not just recency.
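To see what "by meaning, not just recency" implies, it helps to strip the idea down to its ranking step. The toy sketch below is not the praisonai implementation: it uses word overlap as a crude stand-in for real embedding similarity, but it shows why a relevant older entry can outrank a newer irrelevant one.

```typescript
// Toy illustration of search-by-meaning vs search-by-recency.
// Word overlap stands in for real embedding similarity here.
type Entry = { content: string; timestamp: number };

function score(query: string, entry: Entry): number {
  const q = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  const words = entry.content.toLowerCase().split(/\W+/).filter(Boolean);
  return words.filter(w => q.has(w)).length;
}

function searchByMeaning(query: string, entries: Entry[]): Entry[] {
  return [...entries].sort((a, b) => score(query, b) - score(query, a));
}

const entries: Entry[] = [
  { content: 'User works as a software engineer', timestamp: 1 }, // older
  { content: 'User had coffee this morning', timestamp: 2 },      // newer
];

// The older but relevant entry ranks first for a work-related query.
console.log(searchByMeaning('software engineer work', entries)[0].content);
// User works as a software engineer
```

A session store would surface the coffee entry first because it is newest; semantic memory surfaces the engineering entry because it matches the query's meaning.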

Agent with Semantic Memory

import { Agent, Memory } from 'praisonai';

const memory = new Memory();

const agent = new Agent({
  name: 'Memory Agent',
  instructions: 'You remember important information from conversations.',
  memory  // Agent uses semantic memory
});

// Agent learns information
await agent.chat('My favorite color is blue and I work as a software engineer');
await agent.chat('I prefer morning meetings and use TypeScript daily');

// Later, Agent can recall relevant info
await agent.chat('What do you know about my work?');
// Agent recalls: software engineer, TypeScript, morning meetings

Agent with Memory Search Tool

Give your Agent explicit control over memory:
import { Agent, Memory, createTool } from 'praisonai';

const memory = new Memory();

// Tool to save to memory
const rememberTool = createTool({
  name: 'remember',
  description: 'Save important information to memory for later recall',
  parameters: {
    type: 'object',
    properties: {
      information: { type: 'string', description: 'Information to remember' },
      category: { type: 'string', description: 'Category (preferences, facts, tasks)' }
    },
    required: ['information']
  },
  execute: async ({ information, category = 'general' }) => {
    await memory.add(information, 'memory', { category });
    return `Remembered: ${information}`;
  }
});

// Tool to search memory
const recallTool = createTool({
  name: 'recall',
  description: 'Search memory for relevant information',
  parameters: {
    type: 'object',
    properties: {
      query: { type: 'string', description: 'What to search for' }
    },
    required: ['query']
  },
  execute: async ({ query }) => {
    const results = await memory.search(query);
    if (results.length === 0) return 'No relevant memories found';
    return results.map(r => r.entry.content).join('\n');
  }
});

const agent = new Agent({
  name: 'Learning Agent',
  instructions: `You can remember and recall information.
Use 'remember' to save important facts the user tells you.
Use 'recall' to find relevant information when answering questions.`,
  tools: [rememberTool, recallTool]
});

await agent.chat('Remember that I am allergic to peanuts');
await agent.chat('What food restrictions do I have?'); // Agent recalls allergy

Multi-Agent Shared Memory

Agents can share a memory pool:
import { Agent, Memory, Agents } from 'praisonai';

const sharedMemory = new Memory();

// Agent 1: Learns from documents
const learnerAgent = new Agent({
  name: 'Learner',
  instructions: 'Extract and remember key facts from documents.',
  memory: sharedMemory
});

// Agent 2: Answers questions using shared memory
const answererAgent = new Agent({
  name: 'Answerer',
  instructions: 'Answer questions using information from memory.',
  memory: sharedMemory
});

// Learner processes documents
await learnerAgent.chat('Learn: The company was founded in 2020. CEO is Jane Smith. HQ in Austin.');

// Answerer can access learned information
await answererAgent.chat('Who is the CEO?'); // Recalls: Jane Smith
await answererAgent.chat('Where is the headquarters?'); // Recalls: Austin

Agent with Long-Term Memory

Persist memory across sessions:
import { Agent, Memory, createUpstashRedis } from 'praisonai';

const redis = createUpstashRedis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!
});

// Memory with persistence
class PersistentMemory extends Memory {
  private redis: any;
  private userId: string;
  
  constructor(redis: any, userId: string) {
    super();
    this.redis = redis;
    this.userId = userId;
  }
  
  async add(content: string, role: string, metadata?: any) {
    await super.add(content, role, metadata);
    // Persist to Redis
    const memories = this.toJSON();
    await this.redis.set(`memory:${this.userId}`, memories);
  }
  
  async load() {
    const data = await this.redis.get(`memory:${this.userId}`);
    if (data) this.fromJSON(data);
  }
}

const memory = new PersistentMemory(redis, 'user-123');
await memory.load(); // Load previous memories

const agent = new Agent({
  name: 'Long-Term Memory Agent',
  instructions: 'You remember everything about the user across all sessions.',
  memory
});

// Agent remembers from previous sessions
await agent.chat('What do you remember about me?');

Agent Context Building

Build context from memory for Agent prompts:
import { Agent, Memory } from 'praisonai';

const memory = new Memory();

// Add various information
await memory.add('User prefers dark mode', 'system');
await memory.add('User is learning React', 'user');
await memory.add('User works at TechCorp', 'user');
await memory.add('User timezone is PST', 'system');

const agent = new Agent({
  name: 'Context-Aware Agent',
  instructions: 'Personalize responses based on user context.'
});

async function contextualChat(message: string) {
  // Search memory for relevant context
  const relevantMemories = await memory.search(message);
  
  // Build context string
  const context = memory.buildContext({
    entries: relevantMemories.slice(0, 5),
    format: 'bullet'
  });
  
  // Agent responds with context
  return await agent.chat(`
User Context:
${context}

User Message: ${message}
  `);
}

await contextualChat('Help me with a coding problem');
// Agent knows: user is learning React, works at TechCorp
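The buildContext call above turns matched entries into a prompt-ready string. As a minimal stand-in (assuming only that each entry exposes a content field; this is not praisonai's actual implementation), the bullet format can be sketched like this:

```typescript
type MemoryEntry = { content: string };

// Minimal stand-in for buildContext with format: 'bullet'.
function buildBulletContext(entries: MemoryEntry[], limit = 5): string {
  return entries
    .slice(0, limit)
    .map(e => `- ${e.content}`)
    .join('\n');
}

const context = buildBulletContext([
  { content: 'User is learning React' },
  { content: 'User works at TechCorp' },
]);
console.log(context);
// - User is learning React
// - User works at TechCorp
```

Capping the number of entries keeps the injected context small, so the Agent's prompt stays within token limits even as memory grows.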

Agent Memory with Embeddings

Semantic search with vector embeddings:
import { Agent, Memory } from 'praisonai';
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const memory = new Memory({
  embeddingProvider: {
    embed: async (text) => {
      // Use OpenAI embeddings
      const response = await openai.embeddings.create({
        model: 'text-embedding-3-small',
        input: text
      });
      return response.data[0].embedding;
    },
    embedBatch: async (texts) => {
      const response = await openai.embeddings.create({
        model: 'text-embedding-3-small',
        input: texts
      });
      return response.data.map(d => d.embedding);
    }
  }
});

const agent = new Agent({
  name: 'Semantic Memory Agent',
  instructions: 'You have semantic memory - you can find information by meaning.',
  memory
});

// Add diverse information
await memory.add('The quarterly revenue was $5.2 million', 'data');
await memory.add('Customer satisfaction score improved to 4.8/5', 'data');
await memory.add('We hired 15 new engineers this quarter', 'data');

// Semantic search finds relevant info
await agent.chat('How is the company doing financially?');
// Finds: quarterly revenue $5.2 million (by semantic similarity)
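Under the hood, embedding-based search typically ranks stored vectors by cosine similarity to the query vector. A self-contained sketch of that ranking step (the three-dimensional vectors here are made up for illustration; real embeddings have hundreds of dimensions):

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored entries against a query vector.
const stored = [
  { content: 'quarterly revenue was $5.2 million', vector: [0.9, 0.1, 0.0] },
  { content: 'hired 15 new engineers', vector: [0.1, 0.9, 0.1] },
];
const queryVector = [0.8, 0.2, 0.0]; // e.g. "how is the company doing financially?"

const best = stored
  .map(s => ({ ...s, score: cosine(queryVector, s.vector) }))
  .sort((x, y) => y.score - x.score)[0];
console.log(best.content); // quarterly revenue was $5.2 million
```

Because similarity is computed on vectors rather than keywords, a question about "finances" can match "revenue" even though the two texts share no words.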

Memory Configuration

const memory = new Memory({
  maxEntries: 1000,        // Maximum memories to store
  maxTokens: 50000,        // Token limit for context
  embeddingProvider: {...} // Optional: for semantic search
});
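maxEntries bounds how many memories are retained. One plausible policy for honoring such a cap, shown here as an assumption rather than a description of what Memory actually does, is to evict the oldest entry when the limit is exceeded:

```typescript
// Bounded store that drops the oldest entry once maxEntries is exceeded.
// Illustrative only: Memory's actual eviction policy may differ.
class BoundedStore {
  private entries: string[] = [];
  constructor(private maxEntries: number) {}

  add(content: string): void {
    this.entries.push(content);
    if (this.entries.length > this.maxEntries) this.entries.shift();
  }

  all(): string[] {
    return [...this.entries];
  }
}

const store = new BoundedStore(2);
store.add('first');
store.add('second');
store.add('third'); // evicts 'first'
console.log(store.all()); // [ 'second', 'third' ]
```

Whatever the policy, a hard cap keeps lookup cost and storage predictable as conversations accumulate.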