Agentic Evaluator Optimizer
Learn how to create AI agents that can generate and optimize solutions through iterative feedback.
A feedback loop workflow where LLM-generated outputs are evaluated, refined, and optimized iteratively to improve accuracy and relevance.
Quick Start
Install Package
First, install the PraisonAI Agents package:
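For example (assuming the package is published on PyPI as `praisonaiagents`, as in the project's quick-start docs):

```bash
pip install praisonaiagents
```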
Set API Key
Set your OpenAI API key as an environment variable in your terminal:
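For example, on macOS/Linux (replace the placeholder with your own key):

```bash
export OPENAI_API_KEY=your-api-key-here
```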
Create a file
Create a new file `app.py` with the basic setup:
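A minimal sketch of the feedback loop is shown below. It assumes the `Agent`, `Task`, and `PraisonAIAgents` workflow interface from the PraisonAI Agents quick-start examples; parameter names such as `task_type`, `condition`, and `next_tasks` may differ in your installed version, so treat this as an illustration rather than a definitive reference:

```python
from praisonaiagents import Agent, Task, PraisonAIAgents

# Generator agent: produces a solution and refines it when feedback is available
generator = Agent(
    name="Generator",
    role="Solution generator",
    goal="Generate and refine solutions based on requirements and feedback",
    instructions=(
        "Generate a solution for the given requirements. "
        "If evaluator feedback is present in the context, refine the previous solution."
    ),
)

# Evaluator agent: scores the solution and decides whether another pass is needed
evaluator = Agent(
    name="Evaluator",
    role="Solution evaluator",
    goal="Evaluate solution quality and provide actionable feedback",
    instructions=(
        "Evaluate the solution for accuracy and completeness. "
        "Reply with 'more' if it needs refinement or 'done' if it is acceptable, "
        "followed by concrete feedback."
    ),
)

generate_task = Task(
    name="generate",
    description="Generate a solution for the requirements",
    expected_output="A candidate solution",
    agent=generator,
    is_start=True,
    next_tasks=["evaluate"],
)

evaluate_task = Task(
    name="evaluate",
    description="Evaluate the candidate solution and provide feedback",
    expected_output="'more' or 'done' with feedback",
    agent=evaluator,
    task_type="decision",
    condition={
        "more": ["generate"],  # loop back to the generator for another pass
        "done": "",            # end the workflow
    },
)

agents = PraisonAIAgents(
    agents=[generator, evaluator],
    tasks=[generate_task, evaluate_task],
    process="workflow",
)

agents.start()
```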
Start Agents
Type this in your terminal to run your agents:
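For example, from the directory containing `app.py`:

```bash
python app.py
```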
Requirements
- Python 3.10 or higher
- OpenAI API key (generate one here), or use other models by following this guide
- Basic understanding of Python
Understanding Evaluator-Optimizer
What is Evaluator-Optimizer?
The Evaluator-Optimizer pattern enables:
- Iterative solution generation and refinement
- Automated quality evaluation
- Feedback-driven optimization
- Continuous improvement loops
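Conceptually, the loop alternates generation and evaluation until quality is acceptable. The sketch below is framework-agnostic Python with hypothetical `generate` and `evaluate` stubs standing in for the agent calls; it is not the PraisonAI API:

```python
# Framework-agnostic sketch of the evaluator-optimizer loop.
# generate() and evaluate() are hypothetical placeholders for LLM/agent calls.

def generate(requirements: str, feedback: str | None) -> str:
    # In a real workflow, this call goes to the generator agent.
    return f"solution for {requirements!r} (feedback applied: {feedback!r})"

def evaluate(solution: str) -> tuple[float, str]:
    # In a real workflow, this call goes to the evaluator agent.
    return 0.9, "looks complete"

requirements = "summarize the quarterly report"
solution, feedback = None, None
for _ in range(5):                                # bounded improvement loop
    solution = generate(requirements, feedback)   # solution generation
    score, feedback = evaluate(solution)          # automated quality evaluation
    if score >= 0.8:                              # stop once quality is acceptable
        break
print(solution)
```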
Features
Solution Generation
Generate solutions based on requirements and feedback.
Quality Evaluation
Automatically assess solution quality and completeness.
Feedback Loop
Implement iterative improvement through feedback cycles.
Process Control
Monitor and control the optimization process.
Configuration Options
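The exact options depend on your installed version. As a rough illustration, agent-level knobs include the model and verbosity; `llm` and `verbose` below are assumed parameter names based on typical PraisonAI examples:

```python
from praisonaiagents import Agent

generator = Agent(
    name="Generator",
    instructions="Generate solutions and refine them based on evaluator feedback",
    llm="gpt-4o-mini",  # model selection (assumed parameter name; any supported model string)
    verbose=True,       # log intermediate steps while tuning the loop
)

generator.start("Draft a solution for: summarize the quarterly report")
```

Workflow-level behaviour (task conditions, `next_tasks`, and `process="workflow"`) is configured on `Task` and `PraisonAIAgents`, as in the `app.py` example above.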
Troubleshooting
Generation Issues
If generated solutions are not improving:
- Review generator instructions
- Check feedback integration
- Enable verbose mode for debugging
Evaluation Flow
If the evaluation cycle is not behaving as expected:
- Verify evaluation criteria
- Check condition mappings
- Review feedback loop connections
Next Steps
AutoAgents
Learn about automatically created and managed AI agents
Mini Agents
Explore lightweight, focused AI agents
For optimal results, make sure your generator instructions and evaluation criteria are clear and well defined.