Quick Start
Simple Shorthand (Recommended)
The easiest way to enable Agent Learn:
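The original snippet is not preserved here, so the class below is a minimal stand-in that shows the parameter shape, not the framework's real Agent class:

```python
# Sketch only: a toy Agent illustrating that learn=True is a top-level
# parameter, a peer to memory=. The real framework's class differs.
class Agent:
    def __init__(self, instructions: str, learn: bool = False, memory=None):
        self.instructions = instructions
        self.learn = learn
        # learn=True auto-creates a minimal memory backend if none was given
        self.memory = memory if memory is not None else ({} if learn else None)

agent = Agent("You are a helpful assistant.", learn=True)
print(agent.learn)               # True
print(agent.memory is not None)  # True: a minimal backend was auto-created
```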
learn=True is a top-level Agent parameter, a peer to memory=. It auto-creates a minimal memory backend if needed.

How It Works
Auto-Injection: When learn=True is enabled, learned context is automatically injected into the agent's system prompt before each response. No manual wiring needed!

| Phase | Description |
|---|---|
| Retrieve | get_learn_context() fetches learnings from all enabled stores |
| Auto-Inject | Context automatically added to system prompt as “Learned Context” section |
| Generate | Agent uses learned context to personalize response |
| Persist | Learnings stored in JSON files for cross-session persistence |
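The four phases above can be sketched end to end. The JSON-file persistence matches the table; the function body and file layout are illustrative assumptions, not the framework's implementation:

```python
import json
import tempfile
from pathlib import Path

def get_learn_context(store_path: Path) -> str:
    """Retrieve: fetch learnings from all enabled stores (one JSON file each)."""
    lines = []
    for f in sorted(store_path.glob("*.json")):
        for entry in json.loads(f.read_text()):
            lines.append(f"- [{f.stem}] {entry}")
    return "\n".join(lines)

def build_system_prompt(base: str, store_path: Path) -> str:
    """Auto-Inject: append a 'Learned Context' section to the system prompt."""
    ctx = get_learn_context(store_path)
    return f"{base}\n\n## Learned Context\n{ctx}" if ctx else base

# Persist: learnings live in JSON files for cross-session persistence.
store = Path(tempfile.mkdtemp())
(store / "persona.json").write_text(json.dumps(["User prefers concise answers"]))

# Generate would then run with the enriched prompt.
prompt = build_system_prompt("You are a helpful assistant.", store)
print(prompt)
```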
What Gets Injected
When learn=True is enabled, the agent's system prompt automatically includes a "Learned Context" section containing entries from each enabled learning store (user preferences from persona, observations from insights, session context from thread, and so on).
Learning Stores
Agent Learn organizes knowledge into specialized stores:

| Store | Purpose | Default |
|---|---|---|
| persona | User preferences, communication style, profile | True |
| insights | Observations and learnings from interactions | True |
| thread | Session and conversation context | True |
| patterns | Reusable knowledge patterns | False |
| decisions | Decision logging and rationale | False |
| feedback | Outcome signals and corrections | False |
| improvements | Self-improvement proposals | False |
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| persona | bool | True | Capture user preferences and profile |
| insights | bool | True | Store observations and learnings |
| thread | bool | True | Maintain session context |
| patterns | bool | False | Store reusable knowledge patterns |
| decisions | bool | False | Log decisions with rationale |
| feedback | bool | False | Capture outcome signals |
| improvements | bool | False | Track self-improvement proposals |
| scope | str | "private" | Learning visibility: "private" or "shared" |
| store_path | str | None | Custom storage directory |
| auto_consolidate | bool | True | Automatically consolidate learnings |
| retention_days | int | None | Days to retain entries (None = forever) |
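Put together, a fuller configuration might look like the dict below. Passing a dict of these options to learn= is an assumption about the API shape, but the option names and defaults come from the table above:

```python
# Assumed dict form of the documented options; the real framework may
# expose these as a config object instead.
learn_config = {
    "persona": True,          # capture user preferences and profile
    "insights": True,         # store observations and learnings
    "thread": True,           # maintain session context
    "patterns": True,         # opt in: reusable knowledge patterns
    "feedback": True,         # opt in: outcome signals and corrections
    "scope": "private",       # or "shared" for team-visible learnings
    "store_path": None,       # None = framework default directory
    "auto_consolidate": True, # merge and summarize learnings over time
    "retention_days": 30,     # prune entries older than 30 days
}
```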
CLI Commands
Manage learning data via the command line:

Show Status
Show Learned Entries
Add Learning Entry
Search Learnings
Clear Learnings
Common Patterns
Personal Assistant with Memory
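The original example is not preserved here; as a standalone sketch of the idea, cross-session persistence means a preference learned in one session is recalled in the next. The JSON file below stands in for the framework's persona store:

```python
import json
import tempfile
from pathlib import Path

# Toy stand-in for the persona store: with learn=True the real framework
# persists learnings to JSON files automatically.
store = Path(tempfile.mkdtemp()) / "persona.json"

# Session 1: the assistant learns user preferences.
store.write_text(json.dumps(["Call the user Sam", "Prefers bullet points"]))

# Session 2: a fresh read of the same file, so the preferences survive.
recalled = json.loads(store.read_text())
print(recalled[0])  # Call the user Sam
```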
Team Knowledge Base
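The original example is not preserved here; conceptually, scope="shared" lets multiple agents benefit from one knowledge base. In this sketch two toy "agents" simply share a JSON file, which is an illustration rather than the framework's actual mechanism:

```python
import json
import tempfile
from pathlib import Path

# A shared insights store: one file visible to every agent.
shared = Path(tempfile.mkdtemp()) / "insights.json"
shared.write_text(json.dumps([]))

def add_insight(text: str) -> None:
    """Agent A contributes a team-wide learning."""
    entries = json.loads(shared.read_text())
    entries.append(text)
    shared.write_text(json.dumps(entries))

add_insight("Deploys happen Fridays at 16:00 UTC")

# Agent B sees the entry without having stored it itself.
team_knowledge = json.loads(shared.read_text())
print(team_knowledge)
```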
Feedback-Driven Learning
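The original example is not preserved here; as a sketch, the feedback store (disabled by default) captures outcome signals and corrections so later runs can adjust. The entry format below is an assumption:

```python
import json
import tempfile
from pathlib import Path

# Toy feedback store; enable it via feedback=True in the learn configuration.
feedback = Path(tempfile.mkdtemp()) / "feedback.json"
feedback.write_text(json.dumps([
    {"signal": "correction", "note": "Use ISO dates, not US format"},
    {"signal": "positive", "note": "The summary table was helpful"},
]))

# Corrections can be surfaced on the next run so the agent adjusts.
corrections = [e["note"] for e in json.loads(feedback.read_text())
               if e["signal"] == "correction"]
print(corrections)  # ['Use ISO dates, not US format']
```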
Active Learning Tools
For agents that need explicit control over what they learn and recall, use the store_learning and search_learning tool functions, the Learn-system counterparts to store_memory / search_memory.
store_learning
Store a learning entry in the agent's learn system.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| content | str | Yes | — | The learning to store |
| category | str | No | "persona" | Category: persona, insights, patterns, decisions, feedback, improvements |
search_learning
Search previously stored learnings.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| query | str | Yes | — | Search query |
| limit | int | No | 5 | Max results to return |
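A minimal in-memory sketch of the two tools, matching the documented parameters and defaults; the storage and search logic are stand-ins, not the framework's implementation:

```python
# In-memory stand-in for the learn system's backing store.
_learnings: list[dict] = []

def store_learning(content: str, category: str = "persona") -> dict:
    """Store a learning entry (categories mirror the store names above)."""
    entry = {"content": content, "category": category}
    _learnings.append(entry)
    return entry

def search_learning(query: str, limit: int = 5) -> list[dict]:
    """Naive substring search over stored learnings, newest first."""
    hits = [e for e in reversed(_learnings)
            if query.lower() in e["content"].lower()]
    return hits[:limit]

store_learning("User prefers dark mode", category="persona")
store_learning("Weekly report is due Mondays", category="insights")
print(search_learning("dark"))
```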
Best Practices
Use private scope for personal data

Keep scope="private" (default) when storing user-specific preferences or sensitive information. Use scope="shared" only for team knowledge that should benefit all agents.

Enable stores incrementally

Start with the default stores (persona, insights, thread) and enable additional stores (patterns, decisions, feedback, improvements) as your use case requires them.

Set retention for transient data

Use retention_days for stores that capture temporal patterns. Thread context often benefits from 7-30 day retention to avoid clutter.

Consolidate periodically

Keep auto_consolidate=True to automatically merge and summarize learnings over time, preventing store bloat.

Related
Agent Train
Active iterative training
Learn vs Train
Compare passive learning vs active training
Memory
Understanding agent memory systems
Knowledge
RAG and knowledge retrieval