Performance Benchmarks
PraisonAI Agents includes comprehensive benchmarks to measure and compare performance against other popular AI agent frameworks.

Benchmark Types
| Benchmark | Description | File |
|---|---|---|
| Simple | Agent instantiation time (no API calls) | simple_benchmark.py |
| Tools | Agent instantiation with tools | tools_benchmark.py |
| Execution | Real agent execution with LLM API calls | execution_benchmark.py |
Quick Start
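A minimal quick-start sketch, assuming the benchmark scripts live in the benchmarks/ directory listed in the table above and are run from the repository root:

```bash
# Instantiation benchmark (no API calls needed)
python benchmarks/simple_benchmark.py

# Instantiation with tools
python benchmarks/tools_benchmark.py

# Real execution benchmark (requires an LLM API key, e.g. OPENAI_API_KEY)
python benchmarks/execution_benchmark.py
```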
Execution Benchmark
The execution benchmark tests real agent execution with actual LLM API calls.

CLI Options
| Option | Short | Default | Description |
|---|---|---|---|
| --model | -m | gpt-4o-mini | Model to use |
| --iterations | -i | 3 | Number of iterations |
| --prompt | -p | "What is 2+2?..." | Prompt to use |
| --no-save | - | False | Don't save results to file |
Examples
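A few hedged invocations of the execution benchmark built from the options above (the script path is an assumption):

```bash
# Default settings: gpt-4o-mini, 3 iterations, results saved to file
python benchmarks/execution_benchmark.py

# Different model, more iterations
python benchmarks/execution_benchmark.py --model gpt-4o --iterations 5

# Short flags with a custom prompt, without saving results
python benchmarks/execution_benchmark.py -m gpt-4o-mini -i 3 -p "What is the capital of France?" --no-save
```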
Results
Agent Instantiation (Without Tools)
| Framework | Avg Time (μs) | Relative |
|---|---|---|
| PraisonAI | 3.77 | 1.00x (fastest) |
| OpenAI Agents SDK | 5.26 | 1.39x |
| Agno | 5.64 | 1.49x |
| PraisonAI (LiteLLM) | 7.56 | 2.00x |
| PydanticAI | 226.94 | 60x |
| LangGraph | 4,558.71 | 1,209x |
| CrewAI | 15,607.92 | 4,138x |
Agent Instantiation (With Tools)
| Framework | Avg Time (μs) | Relative |
|---|---|---|
| PraisonAI | 3.24 | 1.00x (fastest) |
| Agno | 5.12 | 1.58x |
| PraisonAI (LiteLLM) | 8.59 | 2.65x |
| OpenAI Agents SDK | 279.95 | 86x |
| LangGraph | 2,310.82 | 713x |
| CrewAI | 15,773.44 | 4,870x |
Real Execution (With API Calls)
| Framework | Method | Avg Time | Relative |
|---|---|---|---|
| PraisonAI | agent.start() | 0.45s | 1.00x (fastest) |
| PraisonAI (LiteLLM) | agent.start() | 0.55s | 1.22x |
| CrewAI | crew.kickoff() | 0.58s | 1.28x |
| Agno | agent.run() | 0.92s | 2.05x |
Frameworks Compared
| Framework | Execution Method |
|---|---|
| PraisonAI | agent.start() |
| PraisonAI (LiteLLM) | agent.start() |
| Agno | agent.run() |
| CrewAI | crew.kickoff() |
| OpenAI Agents SDK | Runner.run() |
| LangGraph | create_react_agent() |
| PydanticAI | Agent() |
Output Files
Benchmarks save results to Markdown files in the benchmarks/ directory:
| File | Description |
|---|---|
| BENCHMARK_RESULTS.md | Instantiation benchmark results |
| TOOLS_BENCHMARK_RESULTS.md | Tools benchmark results |
| EXECUTION_BENCHMARK_RESULTS.md | Execution benchmark results |
Key Findings
Fastest Instantiation
PraisonAI is 1.49x faster than Agno and 4,138x faster than CrewAI for agent instantiation (without tools).
Fastest Execution
PraisonAI is about 2x faster than Agno and 1.28x faster than CrewAI for real agent execution.
Running Your Own Benchmarks
- Clone the repository
- Install dependencies
- Set your API key
- Run benchmarks (see the combined sketch below)
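The four steps combined into one sketch; the repository URL and package name are assumptions, so adjust them to your setup:

```bash
# 1. Clone the repository (URL assumed)
git clone https://github.com/MervinPraison/PraisonAI.git
cd PraisonAI

# 2. Install dependencies (package name assumed)
pip install praisonaiagents

# 3. Set your API key (required for the execution benchmark)
export OPENAI_API_KEY="your-api-key"

# 4. Run benchmarks
python benchmarks/simple_benchmark.py
python benchmarks/execution_benchmark.py --model gpt-4o-mini --iterations 3
```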

