Enable persistent memory for agents without any extra packages. Memory is automatically injected into conversations.
```python
from praisonaiagents import Agent

# Enable memory with a single parameter
agent = Agent(
    name="Personal Assistant",
    instructions="You are a helpful assistant that remembers user preferences.",
    memory=True  # Enables file-based memory (no extra deps!)
)

# Memory is automatically injected into conversations
result = agent.start("My name is John and I prefer dark mode")
result = agent.start("What's my name?")  # Agent recalls: "John"
```
For multi-agent workflows, enable memory on the `PraisonAIAgents` orchestrator:

```python
from praisonaiagents import Agent, Task, PraisonAIAgents
from praisonaiagents.tools import duckduckgo

# Create research agent with memory
research_agent = Agent(
    role="Research Analyst",
    goal="Research and document key information about topics",
    backstory="Expert at analyzing and storing information in memory",
    llm="gpt-4o-mini",
    tools=[duckduckgo]
)

# Create blog writer agent
blog_agent = Agent(
    role="Blog Writer",
    goal="Write a blog post about the research",
    backstory="Expert at writing blog posts",
    llm="gpt-4o-mini"
)

# Create tasks
research_task = Task(
    description="Research and document key information about AI trends",
    expected_output="Detailed research findings about AI trends",
    agent=research_agent
)

blog_task = Task(
    description="Write a blog post about the research findings",
    expected_output="Well-written blog post based on research",
    agent=blog_agent
)

# Create and start the agents with memory enabled
agents = PraisonAIAgents(
    agents=[research_agent, blog_agent],
    tasks=[research_task, blog_task],
    memory=True
)

result = agents.start()
print(result)
```
Run the script:

```bash
python app.py
```
The same workflow can be defined declaratively in YAML:

```yaml
framework: praisonai
process: sequential
memory: true
agents: # Canonical: use 'agents' instead of 'roles'
  researcher:
    instructions: # Canonical: use 'instructions' instead of 'backstory'
      Expert at analyzing and storing information in memory.
    goal: Research and document key information about topics
    role: Research Analyst
    llm: gpt-4o-mini
    tools:
      - duckduckgo
    tasks:
      research_task:
        description: Research and document key information about AI trends.
        expected_output: Detailed research findings.
  writer:
    instructions: # Canonical: use 'instructions' instead of 'backstory'
      Expert at writing blog posts.
    goal: Write a blog post about the research
    role: Blog Writer
    llm: gpt-4o-mini
    tasks:
      blog_task:
        description: Write a blog post about the research.
        expected_output: Well-written blog post based on research.
```
Save conversation sessions and resume them later:
```python
from praisonaiagents.memory import FileMemory

memory = FileMemory(user_id="user_123")

# Add context during conversation
memory.add_short_term("User is working on ML project")
memory.add_long_term("User prefers Python", importance=0.9)

# Save session with conversation history
conversation = [
    {"role": "user", "content": "Help me with ML"},
    {"role": "assistant", "content": "I'd be happy to help..."}
]
memory.save_session("ml_project", conversation_history=conversation)

# Later: Resume the session
session_data = memory.resume_session("ml_project")

# List all saved sessions
sessions = memory.list_sessions()
for s in sessions:
    print(f"{s['name']} - saved at {s['saved_at']}")

# Delete a session
memory.delete_session("old_session")
```
Compress short-term memory to save context window space:
```python
from praisonaiagents import Agent
from praisonaiagents.memory import FileMemory

memory = FileMemory(user_id="user_123", config={"short_term_limit": 100})

# Add many items during conversation
for i in range(50):
    memory.add_short_term(f"Discussion point {i}")

# Manual compression with LLM summarization
agent = Agent(instructions="You are a concise summarizer.")  # any Agent works here

def llm_summarize(prompt):
    return agent.chat(prompt)

summary = memory.compress(llm_func=llm_summarize, max_items=10)
# Compresses older items into a summary, keeps recent 10

# Auto-compress when memory gets full (70% threshold)
memory.auto_compress_if_needed(threshold_percent=0.7, llm_func=llm_summarize)
```
Create checkpoints before risky operations and restore if needed:
```python
from praisonaiagents.memory import FileMemory

memory = FileMemory(user_id="user_123")

# Create checkpoint before making changes
checkpoint_id = memory.create_checkpoint("before_refactor")

# Optionally include file snapshots
checkpoint_id = memory.create_checkpoint(
    "before_refactor",
    include_files=["main.py", "config.yaml"]
)

# Make changes...
memory.clear_all()  # Something went wrong!

# Restore from checkpoint
memory.restore_checkpoint(checkpoint_id)

# Restore with file snapshots
memory.restore_checkpoint(checkpoint_id, restore_files=True)

# List all checkpoints
checkpoints = memory.list_checkpoints()

# Delete old checkpoint
memory.delete_checkpoint("old_checkpoint")
```
Automatically extract and store memories from conversations without manual intervention:
```python
from praisonaiagents.memory import FileMemory, AutoMemory

# Create base memory
memory = FileMemory(user_id="user123")

# Wrap with auto-generation
auto = AutoMemory(memory, enabled=True)

# Process interactions - memories are automatically extracted
memories = auto.process_interaction(
    user_message="My name is John and I prefer Python for backend work",
    assistant_response="Nice to meet you, John! Python is great for backend."
)

# Extracted memories:
# - name: "John" (entity)
# - preference: "Python for backend work" (long-term)
print(f"Extracted {len(memories)} memories automatically")
```
When memory is enabled, the agent automatically:

- Loads existing memories from storage on initialization
- Builds a memory context string with important facts, entities, and recent context
- Injects the context into the system prompt before each LLM call
- Persists new memories to storage after interactions
```python
# System prompt with memory injection looks like:
"""
You are a helpful assistant.
Your Role: Assistant
Your Goal: Help users with their tasks

## Memory (Information you remember about the user)

## Important Facts
- User's name is Alice
- User works as a software engineer

## Known Entities
- Alice (person): role=developer, company=Acme

## Recent Context
- User prefers detailed explanations
"""
```
Set higher importance (0.8-1.0) for critical facts like user names, preferences, and key information. Lower importance (0.3-0.5) for transient context.
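For example, using the `add_long_term` importance parameter shown earlier:

```python
from praisonaiagents.memory import FileMemory

memory = FileMemory(user_id="user_123")

# Critical, durable facts: high importance
memory.add_long_term("User's name is Alice", importance=0.95)
memory.add_long_term("User prefers dark mode", importance=0.85)

# Transient, situational context: low importance
memory.add_long_term("User is debugging a flaky test today", importance=0.4)
```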
**Isolate memory per user**

Always set `user_id` when building multi-user applications to prevent memory leakage between users.
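For instance, giving each user their own `FileMemory` keeps their stores fully separate:

```python
from praisonaiagents.memory import FileMemory

# Separate stores keyed by user_id -- one per user
alice_memory = FileMemory(user_id="alice")
bob_memory = FileMemory(user_id="bob")

alice_memory.add_long_term("Prefers Python", importance=0.9)
# Bob's store is untouched; Alice's facts never reach his context
```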
**Clean up old memories**

Call `cleanup_episodic()` periodically to remove old date-based memories and save storage space.
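A minimal sketch of a periodic cleanup; the no-argument call is assumed here, so check the actual signature for options such as an age cutoff:

```python
from praisonaiagents.memory import FileMemory

memory = FileMemory(user_id="user_123")

# Run on a schedule (cron job, app startup, etc.) to prune
# old date-based episodic memories and reclaim storage
memory.cleanup_episodic()  # assumed no-arg form; see the API for cutoff options
```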
**Use entities for structured data**

Store people, places, and organizations as entities with attributes rather than plain text for better retrieval.
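As an illustration only, this is the shape of an entity record like the "Alice (person): role=developer, company=Acme" entry in the injected prompt above; the `add_entity` method and its parameters are assumptions, not a confirmed API:

```python
from praisonaiagents.memory import FileMemory

memory = FileMemory(user_id="user_123")

# Hypothetical call -- `add_entity` and its parameters are assumed, not confirmed
memory.add_entity(
    name="Alice",
    entity_type="person",
    attributes={"role": "developer", "company": "Acme"},
)
```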
**Configure memory based on use case**

Use file-based memory for simple single-agent apps, RAG for multi-agent semantic search, and graph memory for complex relationships.
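A hedged sketch of how that choice might look in code: `memory=True` is the documented file-based option, while the commented-out `memory_config` parameter and its keys are assumptions for illustration, not a confirmed API:

```python
from praisonaiagents import Agent

# Simple single-agent app: file-based memory (documented, no extra deps)
assistant = Agent(instructions="You are a helpful assistant.", memory=True)

# Multi-agent semantic search or graph relationships would instead use a
# richer backend; the config below is illustrative, not a confirmed API
# agents = PraisonAIAgents(
#     agents=[...],
#     memory=True,
#     memory_config={"provider": "rag"},  # hypothetical key/value
# )
```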