
Memory

Defined in the memory module.
A single-file memory manager covering:
  • Short-term memory (STM) for ephemeral context
  • Long-term memory (LTM) for persistent knowledge
  • Entity memory (structured data about named entities)
  • User memory (preferences/history for each user)
  • Quality score logic for deciding which data to store in LTM
  • Context building from multiple memory sources
  • Graph memory support for complex relationship storage (via Mem0)
Config example:

    {
        "provider": "rag" or "mem0" or "mongodb" or "none",
        "use_embedding": True,
        "short_db": "short_term.db",
        "long_db": "long_term.db",
        "rag_db_path": "rag_db",        # optional path for local embedding store
        "config": {
            "api_key": "...",           # if using mem0
            "org_id": "...",
            "project_id": "...",

            # MongoDB configuration (if provider is "mongodb")
            "connection_string": "mongodb://localhost:27017/" or "mongodb+srv://user:pass@cluster.mongodb.net/",
            "database": "praisonai",
            "use_vector_search": True,  # enable Atlas Vector Search
            "max_pool_size": 50,
            "min_pool_size": 10,
            "max_idle_time": 30000,
            "server_selection_timeout": 5000,

            # Graph memory configuration (optional)
            "graph_store": {
                "provider": "neo4j" or "memgraph",
                "config": {
                    "url": "neo4j+s://xxx" or "bolt://localhost:7687",
                    "username": "neo4j" or "memgraph",
                    "password": "xxx"
                }
            },

            # Optional additional configurations for graph memory
            "vector_store": {
                "provider": "qdrant",
                "config": {"host": "localhost", "port": 6333}
            },
            "llm": {
                "provider": "openai",
                "config": {"model": "gpt-4o-mini", "api_key": "..."}
            },
            "embedder": {
                "provider": "openai",
                "config": {"model": "text-embedding-3-small", "api_key": "..."}
            }
        }
    }

Note: Graph memory requires installing "mem0ai[graph]" and works alongside vector-based memory for enhanced relationship-aware retrieval.
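As a concrete illustration of the note above, here is a minimal, hedged sketch of a graph-enabled mem0 configuration. The key names simply mirror the example above; the import path is assumed to be praisonaiagents.memory, and the API key and Neo4j credentials are placeholders.

```python
# Requires: pip install "mem0ai[graph]"
# Assumes Memory is importable from praisonaiagents.memory; adjust to your install.
from praisonaiagents.memory import Memory

graph_memory = Memory(
    config={
        "provider": "mem0",
        "config": {
            "api_key": "...",              # mem0 API key (placeholder)
            "graph_store": {               # relationship-aware storage
                "provider": "neo4j",
                "config": {
                    "url": "bolt://localhost:7687",
                    "username": "neo4j",
                    "password": "xxx",
                },
            },
        },
    }
)
```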

Constructor

config
Dict
required
Memory configuration dictionary selecting the provider and its settings (see the config example above).
verbose
int
default:"0"
Verbosity level for logging output; 0 disables verbose logging.
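A minimal sketch of constructing the manager with the local rag provider, assuming Memory is importable from praisonaiagents.memory and that the constructor matches the signature documented above (a config dict plus an integer verbose level). The same memory object is reused in the method sketches below.

```python
from praisonaiagents.memory import Memory

memory = Memory(
    config={
        "provider": "rag",            # local embedding-backed store
        "use_embedding": True,
        "short_db": "short_term.db",  # SQLite file for short-term memory
        "long_db": "long_term.db",    # SQLite file for long-term memory
        "rag_db_path": "rag_db",      # path for the local embedding store
    },
    verbose=5,                        # higher values print more diagnostics
)
```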

Methods

compute_quality_score()

Combines multiple quality sub-metrics into a single final score.

store_short_term()

Store in short-term memory with optional quality metrics.

search_short_term()

Search short-term memory with an optional quality filter.
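A hedged sketch of storing and searching short-term memory, reusing the memory object constructed above; the parameter names (text, metadata, query, limit, min_quality) are assumptions based on the descriptions, not a confirmed signature.

```python
# Store an ephemeral observation with some metadata.
memory.store_short_term(
    text="User prefers concise answers.",
    metadata={"source": "conversation"},
)

# Retrieve the most relevant short-term items, optionally gated by quality.
for item in memory.search_short_term("user preferences", limit=3, min_quality=0.0):
    print(item)
```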

reset_short_term()

Completely clears short-term memory.

store_long_term()

Store in long-term memory with optional quality metrics.

search_long_term()

Search long-term memory with an optional quality filter.
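Long-term memory works analogously; in this sketch the min_quality keyword is an assumption used to illustrate the optional quality filter mentioned above.

```python
# Persist a vetted fact for later tasks.
memory.store_long_term(
    text="The production database runs PostgreSQL 15.",
    metadata={"topic": "infrastructure"},
)

# Only return long-term items above a quality threshold.
results = memory.search_long_term("which database do we use?", limit=5, min_quality=0.7)
```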

reset_long_term()

Clear local LTM DB, plus Chroma, MongoDB, or mem0 if in use.

delete_short_term()

Delete a specific short-term memory by ID.

delete_long_term()

Delete a specific long-term memory by ID.

delete_memory()

Delete a specific memory by ID.

delete_memories()

Delete multiple memories by their IDs.

delete_memories_matching()

Delete memories matching a search query.
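A sketch of the deletion helpers; the record IDs are hypothetical and the exact argument forms are assumptions.

```python
memory.delete_short_term("stm-123")                   # remove one short-term record by ID
memory.delete_long_term("ltm-456")                    # remove one long-term record by ID
memory.delete_memories(["ltm-789", "ltm-790"])        # bulk delete by ID
memory.delete_memories_matching("outdated pricing")   # delete anything matching a query
```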

store_entity()

Save entity info in LTM (or mem0/rag).

search_entity()

Filter to items that have metadata 'category=entity'.

reset_entity_only()

Drops only entity items from memory (a custom delete filtered on metadata 'category=entity'), leaving other records intact.
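A sketch of entity storage and lookup; the parameter names (name, type_, desc, relations) are assumptions chosen to fit the structured-entity description, and search_entity is shown returning only items carrying the category=entity metadata mentioned above.

```python
# Record structured information about a named entity.
memory.store_entity(
    name="Acme Corp",
    type_="organization",
    desc="Customer since 2021, on the enterprise plan.",
    relations="partner_of: Initech",
)

# Returns only items whose metadata has category="entity".
entities = memory.search_entity("Acme", limit=5)
```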

store_user_memory()

If mem0 is in use, performs a user-based addition; otherwise stores the item in LTM with the user recorded in metadata.

search_user_memory()

If mem0 is in use, searches with the given user_id; otherwise falls back to filtering local records on the user recorded in metadata.
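A sketch of per-user memory; the user_id, text, and query arguments are assumptions suggested by the descriptions above.

```python
# Stored via a mem0 user-based add, or in LTM with the user recorded in metadata.
memory.store_user_memory(user_id="user_42", text="Prefers metric units.")

# Later, retrieve memories scoped to that user.
prefs = memory.search_user_memory(user_id="user_42", query="units", limit=3)
```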

search()

Generic search method that delegates to appropriate specific search methods.

reset_user_memory()

Clears all user-based info; for simplicity, this performs a full LTM reset.

finalize_task_output()

Store task output in memory with appropriate metadata.

build_context_for_task()

Merges relevant short-term, long-term, entity, and user memories into a task context.
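A sketch of the task-oriented helpers; the argument names (content, agent_name, quality_score, task_descr, user_id) are assumptions for illustration only.

```python
# Persist a finished task's output so later tasks can reuse it.
memory.finalize_task_output(
    content="Summary: Q3 revenue grew 12%.",
    agent_name="Analyst",
    quality_score=0.85,
)

# Merge short-term, long-term, entity, and user memories into one context block.
context = memory.build_context_for_task(
    task_descr="Draft the Q3 investor update",
    user_id="user_42",
)
print(context)
```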

reset_all()

Fully wipes short-term, long-term, and any memory in mem0 or rag.

calculate_quality_metrics()

Calculate quality metrics using an LLM.

store_quality()

Store quality metrics in memory.

search_with_quality()

Search with a quality filter.
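A sketch tying the quality helpers together; the sub-metric names (completeness, relevance, clarity, accuracy) and the keyword arguments are assumptions based on the quality-score description above.

```python
# Ask the LLM to grade an output against what was expected.
metrics = memory.calculate_quality_metrics(
    output="The capital of France is Paris.",
    expected_output="Name the capital of France.",
)

# Collapse sub-metrics into one score, store it, and filter searches by it.
score = memory.compute_quality_score(
    completeness=0.9, relevance=1.0, clarity=0.95, accuracy=1.0,
)
memory.store_quality(text="The capital of France is Paris.", quality_score=score)
good = memory.search_with_quality("capital of France", min_quality=0.8)
```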

get_all_memories()

Get all memories from both short-term and long-term storage.

learn()

Get the LearnManager for continuous learning capabilities.

get_learn_context()

Get learning context suitable for injection into the system prompt.
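A brief sketch of the learning hooks; how the returned LearnManager and context string are used downstream is an assumption here.

```python
# Obtain the continuous-learning manager and a prompt-ready learning context.
learn_manager = memory.learn()
learning_context = memory.get_learn_context()

# Inject the learning context into a system prompt (illustrative only).
system_prompt = "You are a helpful assistant.\n" + (learning_context or "")
```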