Performance Benchmarks

PraisonAI Agents includes comprehensive benchmarks to measure and compare performance against other popular AI agent frameworks.

Benchmark Types

| Benchmark | Description | File |
| --- | --- | --- |
| Simple | Agent instantiation time (no API calls) | simple_benchmark.py |
| Tools | Agent instantiation with tools | tools_benchmark.py |
| Execution | Real agent execution with LLM API calls | execution_benchmark.py |
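
For intuition, an instantiation benchmark boils down to timing bare constructor calls. Here is a minimal sketch (illustrative, not the actual simple_benchmark.py; the Agent arguments are assumptions):

```python
# Minimal instantiation-timing sketch: measures Agent() construction
# only -- no LLM API calls are made.
import timeit

from praisonaiagents import Agent

def make_agent():
    return Agent(instructions="You are a helpful assistant")

iterations = 10_000
total = timeit.timeit(make_agent, number=iterations)
print(f"Avg instantiation time: {total / iterations * 1e6:.2f} μs")
```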

Quick Start

```bash
cd praisonai-agents

# Run instantiation benchmark
python benchmarks/simple_benchmark.py

# Run tools benchmark
python benchmarks/tools_benchmark.py

# Run real execution benchmark (requires API key)
export OPENAI_API_KEY=your_key
python benchmarks/execution_benchmark.py
```

Execution Benchmark

The execution benchmark measures end-to-end agent runs, including real LLM API calls.
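
Conceptually, each iteration wraps one full agent.start() call, so every sample includes the LLM round trip. A simplified sketch (assuming OPENAI_API_KEY is set; the model and prompt mirror the defaults listed below):

```python
# Simplified execution-timing sketch: each sample covers a full
# agent.start() call, including the LLM API round trip.
import time

from praisonaiagents import Agent

agent = Agent(instructions="You are a helpful assistant", llm="gpt-4o-mini")

samples = []
for _ in range(3):  # default iteration count
    t0 = time.perf_counter()
    agent.start("What is 2+2?")  # stand-in for the truncated default prompt
    samples.append(time.perf_counter() - t0)

print(f"Avg execution time: {sum(samples) / len(samples):.2f}s")
```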

CLI Options

```bash
python benchmarks/execution_benchmark.py [OPTIONS]
```

| Option | Short | Default | Description |
| --- | --- | --- | --- |
| --model | -m | gpt-4o-mini | Model to use |
| --iterations | -i | 3 | Number of iterations |
| --prompt | -p | "What is 2+2?..." | Prompt to use |
| --no-save | - | False | Don't save results to file |
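
These flags map onto a standard argparse interface; a sketch of how they could be wired up (the actual script may differ in details, and the full default prompt is truncated in the table above):

```python
# Sketch of the CLI surface described above using argparse.
import argparse

parser = argparse.ArgumentParser(description="Real agent execution benchmark")
parser.add_argument("--model", "-m", default="gpt-4o-mini", help="Model to use")
parser.add_argument("--iterations", "-i", type=int, default=3,
                    help="Number of iterations")
parser.add_argument("--prompt", "-p", default="What is 2+2?",  # stand-in for the truncated default
                    help="Prompt to use")
parser.add_argument("--no-save", action="store_true",
                    help="Don't save results to file")
args = parser.parse_args()
```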

Examples

```bash
# Default (3 iterations, gpt-4o-mini)
python benchmarks/execution_benchmark.py

# Custom iterations
python benchmarks/execution_benchmark.py --iterations 5
python benchmarks/execution_benchmark.py -i 10

# Custom model
python benchmarks/execution_benchmark.py --model gpt-4o
python benchmarks/execution_benchmark.py -m gpt-4o

# Custom prompt
python benchmarks/execution_benchmark.py --prompt "What is the capital of France?"

# Don't save results
python benchmarks/execution_benchmark.py --no-save

# Combine options
python benchmarks/execution_benchmark.py -m gpt-4o -i 5 --no-save
```

Results

Agent Instantiation (Without Tools)

| Framework | Avg Time (μs) | Relative |
| --- | --- | --- |
| PraisonAI | 3.77 | 1.00x (fastest) |
| OpenAI Agents SDK | 5.26 | 1.39x |
| Agno | 5.64 | 1.49x |
| PraisonAI (LiteLLM) | 7.56 | 2.00x |
| PydanticAI | 226.94 | 60x |
| LangGraph | 4,558.71 | 1,209x |
| CrewAI | 15,607.92 | 4,138x |

Agent Instantiation (With Tools)

| Framework | Avg Time (μs) | Relative |
| --- | --- | --- |
| PraisonAI | 3.24 | 1.00x (fastest) |
| Agno | 5.12 | 1.58x |
| PraisonAI (LiteLLM) | 8.59 | 2.65x |
| OpenAI Agents SDK | 279.95 | 86x |
| LangGraph | 2,310.82 | 713x |
| CrewAI | 15,773.44 | 4,870x |

Real Execution (With API Calls)

| Framework | Method | Avg Time | Relative |
| --- | --- | --- | --- |
| PraisonAI | agent.start() | 0.45s | 1.00x (fastest) |
| PraisonAI (LiteLLM) | agent.start() | 0.55s | 1.22x |
| CrewAI | crew.kickoff() | 0.58s | 1.28x |
| Agno | agent.run() | 0.92s | 2.05x |
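
The Relative column is each framework's average divided by the fastest average. For example:

```python
# The Relative column: each average divided by the fastest (0.45s here).
# Tiny differences vs. the table come from rounding of the underlying
# unrounded measurements.
fastest = 0.45
for name, avg in [("PraisonAI (LiteLLM)", 0.55), ("CrewAI", 0.58), ("Agno", 0.92)]:
    print(f"{name}: {avg / fastest:.2f}x")  # 1.22x, 1.29x, 2.04x
```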

Frameworks Compared

| Framework | Execution Method |
| --- | --- |
| PraisonAI | agent.start() |
| PraisonAI (LiteLLM) | agent.start() |
| Agno | agent.run() |
| CrewAI | crew.kickoff() |
| OpenAI Agents SDK | Runner.run() |
| LangGraph | create_react_agent() |
| PydanticAI | Agent() |

Output Files

Benchmarks save results to markdown files in the benchmarks/ directory:

| File | Description |
| --- | --- |
| BENCHMARK_RESULTS.md | Instantiation benchmark results |
| TOOLS_BENCHMARK_RESULTS.md | Tools benchmark results |
| EXECUTION_BENCHMARK_RESULTS.md | Execution benchmark results |
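
Unless --no-save is passed, results land in these files. A sketch of what writing one such results table might look like (illustrative; the real scripts' formatting may differ):

```python
# Illustrative sketch of persisting benchmark results as a markdown table.
from pathlib import Path

results = {"PraisonAI": 0.45, "PraisonAI (LiteLLM)": 0.55,
           "CrewAI": 0.58, "Agno": 0.92}
fastest = min(results.values())

lines = ["| Framework | Avg Time (s) | Relative |", "| --- | --- | --- |"]
for name, avg in sorted(results.items(), key=lambda kv: kv[1]):
    lines.append(f"| {name} | {avg:.2f} | {avg / fastest:.2f}x |")

Path("benchmarks/EXECUTION_BENCHMARK_RESULTS.md").write_text("\n".join(lines) + "\n")
```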

Key Findings

Fastest Instantiation

PraisonAI is 1.49x faster than Agno and 4,138x faster than CrewAI for agent instantiation.

Fastest Execution

PraisonAI is 2x faster than Agno and 1.28x faster than CrewAI for real agent execution.

Running Your Own Benchmarks

1. Clone the repository:

```bash
git clone https://github.com/MervinPraison/PraisonAI.git
cd PraisonAI/src/praisonai-agents
```

2. Install dependencies:

```bash
pip install praisonaiagents agno crewai pydantic-ai openai-agents langgraph
```

3. Set your API key:

```bash
export OPENAI_API_KEY=your_key
```

4. Run benchmarks:

```bash
python benchmarks/simple_benchmark.py
python benchmarks/tools_benchmark.py
python benchmarks/execution_benchmark.py
```