Agent Train enables active improvement of agent behavior through iterative feedback loops. Unlike Agent Learn, which captures patterns passively, training uses explicit human or LLM feedback to refine responses.
Quick Start
Simple CLI Training
Train an agent with a single input: praisonai train agents --input "What is Python?"
Human-in-the-Loop
Get human feedback instead of automated grading: praisonai train agents --input "Explain machine learning" --human
Multiple Iterations
Run multiple training iterations: praisonai train agents --input "Write a poem" --iterations 5
How It Works
Detailed Control Flow
| Phase | Description |
| --- | --- |
| Scenario | Define input and expected output |
| Execute | Run agent and capture response |
| Grade | Score output (human or LLM) |
| Improve | Build enhanced prompt from feedback |
| Iterate | Repeat for N iterations |
| Report | Generate training summary |
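The phases above can be pictured as a plain Python loop. This is a simplified sketch, not the real `AgentTrainer` internals; all names inside are illustrative:

```python
# Simplified sketch of the six training phases (Scenario -> Execute ->
# Grade -> Improve -> Iterate -> Report). Illustrative only.

def train(agent_fn, grade_fn, input_text, expected_output, iterations=3):
    prompt = input_text  # Scenario: start from the raw input
    history = []
    for i in range(iterations):
        output = agent_fn(prompt)                               # Execute
        score, suggestions = grade_fn(output, expected_output)  # Grade
        history.append({"iteration": i + 1, "score": score,
                        "output": output, "suggestions": suggestions})
        # Improve: fold the feedback into an enhanced prompt
        prompt = f"{input_text}\n\nFeedback from last attempt: {suggestions}"
    # Report: summarize the session
    best = max(history, key=lambda h: h["score"])
    return {"avg_score": sum(h["score"] for h in history) / len(history),
            "best_iteration": best["iteration"], "history": history}

# Toy agent and grader to demonstrate the flow
report = train(
    agent_fn=lambda p: f"Answer to: {p[:20]}",
    grade_fn=lambda out, exp: (7.0, "Be more concise"),
    input_text="What is Python?",
    expected_output="A short definition",
)
print(report["avg_score"])  # 7.0 with this constant grader
```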
SDK Usage
from praisonai.train.agents import AgentTrainer, TrainingScenario
from praisonaiagents import Agent

# Create agent
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant"
)

# Create trainer
trainer = AgentTrainer(
    agent=agent,
    iterations=3,
    human_mode=False  # Use LLM grading
)

# Add training scenario
trainer.add_scenario(TrainingScenario(
    id="greeting",
    input_text="Hello, how are you?",
    expected_output="A friendly greeting response"
))

# Run training
report = trainer.run()
print(f"Final score: {report.avg_score}/10")
print(f"Improvement: {report.improvement:+.1f}")
Applying Training at Runtime
After training, apply the learned improvements to your agent using apply_training():
from praisonai.train.agents import apply_training
from praisonaiagents import Agent

agent = Agent(name="assistant", instructions="Be helpful")

# Apply best iteration from a session
apply_training(agent, session_id="train-abc123")

# Now the agent uses learned improvements
response = agent.start("Hello!")
Select Specific Iteration
# Apply iteration #2 specifically (not just the best)
apply_training(agent, session_id="train-abc123", iteration=2)
Inspect Before Applying
from praisonai.train.agents import get_training_profile

# Preview the profile first
profile = get_training_profile("train-abc123")
print(f"Score: {profile.quality_score}/10")
print(f"Suggestions: {profile.suggestions}")

# Then apply if it looks good
apply_training(agent, profile=profile)
Remove Training
from praisonai.train.agents import remove_training

# Remove training hook from agent
remove_training(agent)
Training is applied via hooks, so it doesn't modify the agent permanently. You can remove it at any time.
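To make the hook idea concrete, here is a toy model of non-destructive application. Everything here (the class, the helper names' internals) is a hypothetical illustration, not PraisonAI's actual hook mechanism:

```python
# Toy illustration of a training hook: the original instructions are
# never rewritten; learned guidance is layered on at call time.

class ToyAgent:
    def __init__(self, instructions):
        self.instructions = instructions
        self._training_hook = None  # set/cleared by the helpers below

    def effective_instructions(self):
        """Instructions actually used for a call, with any hook appended."""
        if self._training_hook:
            return f"{self.instructions}\n{self._training_hook}"
        return self.instructions

def apply_training(agent, suggestions):
    agent._training_hook = f"Apply this feedback: {suggestions}"

def remove_training(agent):
    agent._training_hook = None  # original instructions are untouched

agent = ToyAgent("Be helpful")
apply_training(agent, "Answer in one sentence")
assert "Apply this feedback" in agent.effective_instructions()
remove_training(agent)
assert agent.effective_instructions() == "Be helpful"
```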
CLI Commands
Train Agents
praisonai train agents [OPTIONS] [AGENT_FILE]
| Option | Description | Default |
| --- | --- | --- |
| --input, -i | Single input text | - |
| --expected, -e | Expected output | - |
| --iterations, -n | Number of iterations | 3 |
| --human, -h | Use human feedback | false |
| --scenarios, -s | Scenarios JSON file | - |
| --model, -m | LLM for grading | gpt-4o-mini |
| --storage-backend | file, sqlite, redis:// | file |
List Sessions
Show Session Details
praisonai train show <session_id> --iterations
The --iterations flag shows detailed suggestions for each iteration.
Apply Training
praisonai train apply <session_id> [OPTIONS]
| Option | Description | Default |
| --- | --- | --- |
| --iteration, -n | Specific iteration number | best |
| --run, -r | Run agent with this prompt | - |
| --agent, -a | Path to agent YAML file | - |
Example:
praisonai train apply train-abc123 --iteration 2 --run "Hello"
Grading Modes
LLM-as-Judge (Default)
Automated grading using an LLM to evaluate responses: praisonai train agents --input "Explain AI" --model gpt-4o
The LLM grades based on:
Relevance to input
Accuracy of information
Clarity and completeness
Match to expected output (if provided)
Human Feedback
Interactive mode where you score and provide feedback: praisonai train agents --input "Write a haiku" --human
You’ll be prompted to:
Review the agent’s output
Provide a score (1-10)
Enter improvement suggestions
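The interactive flow boils down to something like this simplified loop (the real CLI handles formatting and edge cases; the function name here is hypothetical):

```python
# Sketch of the human-feedback prompt: show the output, then collect
# a 1-10 score (re-asking on invalid input) and a suggestion.

def collect_human_feedback(output, read=input):
    print(f"Agent output:\n{output}\n")
    while True:
        raw = read("Score (1-10): ")
        try:
            score = int(raw)
        except ValueError:
            continue  # not a number, ask again
        if 1 <= score <= 10:
            break
    suggestion = read("Improvement suggestion: ")
    return score, suggestion

# Simulate a user typing "abc" (rejected), then "8", then a suggestion
answers = iter(["abc", "8", "Add an example"])
score, note = collect_human_feedback("A haiku...", read=lambda _: next(answers))
```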
Storage Backends
Training data persists across sessions:
# File storage (default)
praisonai train agents --input "Hello" --storage-backend file

# SQLite (recommended for production)
praisonai train agents --input "Hello" --storage-backend sqlite

# Redis (distributed systems)
praisonai train agents --input "Hello" --storage-backend redis://localhost:6379
Training data is stored as JSON (not pickle), making it:
✅ Human-readable
✅ Git-friendly
✅ Secure (no pickle vulnerabilities)
✅ Cross-platform compatible
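For illustration, a stored iteration record might be serialized roughly like this. The field names are assumptions for the sketch; inspect your own storage directory for the exact schema:

```python
import json

# Hypothetical shape of one stored iteration record (not the exact
# PraisonAI schema).
record = {
    "session_id": "train-abc123",
    "iteration": 1,
    "input_text": "Hello",
    "output": "Hi there! How can I help?",
    "score": 8.5,
    "suggestions": "Offer a follow-up question",
}

# JSON round-trips cleanly: human-readable, diff-able, no pickle involved
serialized = json.dumps(record, indent=2)
assert json.loads(serialized) == record
```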
Scenarios File
For batch training, use a scenarios file:
[
  {
    "id": "greeting",
    "input_text": "Hello, how are you?",
    "expected_output": "A friendly response"
  },
  {
    "id": "coding",
    "input_text": "Write a Python hello world",
    "expected_output": "print('Hello, World!')"
  }
]
praisonai train agents --scenarios scenarios.json --iterations 5
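The same scenarios file can also be loaded for SDK use. A sketch, assuming `TrainingScenario` accepts the three fields shown above; the `load_scenarios` helper is ours, not part of PraisonAI:

```python
import json

def load_scenarios(path):
    """Read a scenarios JSON file into a list of scenario dicts."""
    with open(path) as f:
        data = json.load(f)
    # Minimal sanity check: every scenario needs an id and input_text
    for s in data:
        if "id" not in s or "input_text" not in s:
            raise ValueError(f"Malformed scenario: {s}")
    return data

# Hypothetical wiring into the trainer shown earlier:
# for s in load_scenarios("scenarios.json"):
#     trainer.add_scenario(TrainingScenario(**s))
```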
Learn vs Train
| Aspect | Agent Learn | Agent Training |
| --- | --- | --- |
| Purpose | Passive learning during interactions | Active iterative improvement |
| Trigger | Automatic during agent.start() | Explicit CLI/SDK call |
| Feedback | Implicit (patterns, insights) | Explicit (score, suggestions) |
| Storage | Persona, insights, patterns stores | Scenarios, iterations, reports |
| Use Case | Remember user preferences | Improve agent behavior |
Agent Learn and Agent Training are complementary. Use Learn for continuous adaptation and Training for focused improvement sessions.
See Learn vs Train Comparison for detailed differences.
Agent Learn: passive continuous learning
Learn vs Train: detailed comparison