# Performance Benchmarks
PraisonAI Agents includes comprehensive benchmarks that measure its performance and compare it against other popular AI agent frameworks.
## Benchmark Types
| Benchmark | Description | File |
|---|---|---|
| Simple | Agent instantiation time (no API calls) | `simple_benchmark.py` |
| Tools | Agent instantiation with tools | `tools_benchmark.py` |
| Execution | Real agent execution with LLM API calls | `execution_benchmark.py` |
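To make the first row concrete, here is a minimal sketch of what an instantiation benchmark measures: construct an `Agent` repeatedly and average the wall-clock time, with no API calls involved. It assumes the `praisonaiagents` package; the actual `simple_benchmark.py` script may differ in detail.

```python
# Minimal sketch of an instantiation benchmark (not the actual script):
# construct an Agent repeatedly and average the wall-clock time.
import time

from praisonaiagents import Agent

ITERATIONS = 100

start = time.perf_counter()
for _ in range(ITERATIONS):
    Agent(instructions="You are a helpful assistant")  # no API call happens here
elapsed = time.perf_counter() - start

print(f"Avg instantiation time: {elapsed / ITERATIONS * 1e6:.2f} μs")
```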
## Quick Start
```bash
cd praisonai-agents

# Run instantiation benchmark
python benchmarks/simple_benchmark.py

# Run tools benchmark
python benchmarks/tools_benchmark.py

# Run real execution benchmark (requires API key)
export OPENAI_API_KEY=your_key
python benchmarks/execution_benchmark.py
```
## Execution Benchmark
The execution benchmark measures end-to-end agent runs, including real LLM API calls.
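Conceptually, each iteration times one end-to-end `agent.start()` call and the average is reported at the end. A minimal sketch, assuming the `praisonaiagents` package and an `OPENAI_API_KEY` in the environment (the real script adds CLI parsing and result saving):

```python
# Minimal sketch of an execution benchmark: run a real prompt several
# times through the agent and average the wall-clock time per call.
import time

from praisonaiagents import Agent

agent = Agent(instructions="You are a helpful assistant")
prompt = "What is 2+2?"
iterations = 3

times = []
for _ in range(iterations):
    start = time.perf_counter()
    agent.start(prompt)  # real LLM API call
    times.append(time.perf_counter() - start)

print(f"Avg execution time: {sum(times) / len(times):.2f}s")
```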
### CLI Options
```bash
python benchmarks/execution_benchmark.py [OPTIONS]
```
| Option | Short | Default | Description |
|---|---|---|---|
| `--model` | `-m` | `gpt-4o-mini` | Model to use |
| `--iterations` | `-i` | `3` | Number of iterations |
| `--prompt` | `-p` | `"What is 2+2?..."` | Prompt to use |
| `--no-save` | - | `False` | Don't save results to file |
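For reference, a CLI surface like this is straightforward to define with `argparse`; the sketch below mirrors the flags in the table but is not necessarily how the script actually implements them:

```python
# Sketch of the CLI surface above, built with argparse.
import argparse

parser = argparse.ArgumentParser(description="Run the execution benchmark")
parser.add_argument("-m", "--model", default="gpt-4o-mini", help="Model to use")
parser.add_argument("-i", "--iterations", type=int, default=3, help="Number of iterations")
parser.add_argument("-p", "--prompt", default="What is 2+2?", help="Prompt to use")
parser.add_argument("--no-save", action="store_true", help="Don't save results to file")

args = parser.parse_args()
print(args.model, args.iterations, args.prompt, args.no_save)
```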
### Examples
```bash
# Default (3 iterations, gpt-4o-mini)
python benchmarks/execution_benchmark.py

# Custom iterations
python benchmarks/execution_benchmark.py --iterations 5
python benchmarks/execution_benchmark.py -i 10

# Custom model
python benchmarks/execution_benchmark.py --model gpt-4o
python benchmarks/execution_benchmark.py -m gpt-4o

# Custom prompt
python benchmarks/execution_benchmark.py --prompt "What is the capital of France?"

# Don't save results
python benchmarks/execution_benchmark.py --no-save

# Combine options
python benchmarks/execution_benchmark.py -m gpt-4o -i 5 --no-save
```
## Results
### Simple Instantiation (No API Calls)

| Framework | Avg Time (μs) | Relative |
|---|---|---|
| PraisonAI | 3.77 | 1.00x (fastest) |
| OpenAI Agents SDK | 5.26 | 1.39x |
| Agno | 5.64 | 1.49x |
| PraisonAI (LiteLLM) | 7.56 | 2.00x |
| PydanticAI | 226.94 | 60x |
| LangGraph | 4,558.71 | 1,209x |
| CrewAI | 15,607.92 | 4,138x |
### Instantiation With Tools

| Framework | Avg Time (μs) | Relative |
|---|---|---|
| PraisonAI | 3.24 | 1.00x (fastest) |
| Agno | 5.12 | 1.58x |
| PraisonAI (LiteLLM) | 8.59 | 2.65x |
| OpenAI Agents SDK | 279.95 | 86x |
| LangGraph | 2,310.82 | 713x |
| CrewAI | 15,773.44 | 4,870x |
### Real Execution (With API Calls)
| Framework | Method | Avg Time | Relative |
|---|---|---|---|
| PraisonAI | `agent.start()` | 0.45s | 1.00x (fastest) |
| PraisonAI (LiteLLM) | `agent.start()` | 0.55s | 1.22x |
| CrewAI | `crew.kickoff()` | 0.58s | 1.28x |
| Agno | `agent.run()` | 0.92s | 2.05x |
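The Relative column is each framework's average time divided by the fastest framework's average; for example, CrewAI's 0.58s / 0.45s ≈ 1.28x. A quick way to recompute it (small differences from the table come from rounding in the published averages):

```python
# Recompute the Relative column from the published averages.
times = {
    "PraisonAI": 0.45,
    "PraisonAI (LiteLLM)": 0.55,
    "CrewAI": 0.58,
    "Agno": 0.92,
}

fastest = min(times.values())
for framework, t in sorted(times.items(), key=lambda kv: kv[1]):
    print(f"{framework}: {t / fastest:.2f}x")
```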
## Frameworks Compared
| Framework | Execution Method |
|---|---|
| PraisonAI | `agent.start()` |
| PraisonAI (LiteLLM) | `agent.start()` |
| Agno | `agent.run()` |
| CrewAI | `crew.kickoff()` |
| OpenAI Agents SDK | `Runner.run()` |
| LangGraph | `create_react_agent()` |
| PydanticAI | `Agent()` |
## Output Files
Benchmarks save results to markdown files in the `benchmarks/` directory:

| File | Description |
|---|---|
| `BENCHMARK_RESULTS.md` | Instantiation benchmark results |
| `TOOLS_BENCHMARK_RESULTS.md` | Tools benchmark results |
| `EXECUTION_BENCHMARK_RESULTS.md` | Execution benchmark results |
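As an illustration of the format, here is a hedged sketch of how a script could write results like these to one of the files above (the row data is made up; this is not the actual saving code):

```python
# Illustrative only: write hypothetical results as a markdown table
# to a file named like the ones listed above.
from pathlib import Path

results = {"Framework A": 3.77, "Framework B": 5.64}  # avg times in μs (hypothetical)
fastest = min(results.values())

lines = ["| Framework | Avg Time (μs) | Relative |", "|---|---|---|"]
for name, t in sorted(results.items(), key=lambda kv: kv[1]):
    lines.append(f"| {name} | {t:.2f} | {t / fastest:.2f}x |")

out = Path("benchmarks") / "BENCHMARK_RESULTS.md"
out.parent.mkdir(exist_ok=True)
out.write_text("\n".join(lines) + "\n")
```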
## Key Findings
- **Fastest Instantiation:** PraisonAI is 1.49x faster than Agno and 4,138x faster than CrewAI for agent instantiation.
- **Fastest Execution:** PraisonAI is 2x faster than Agno and 1.28x faster than CrewAI for real agent execution.
## Running Your Own Benchmarks
1. Clone the repository:

```bash
git clone https://github.com/MervinPraison/PraisonAI.git
cd PraisonAI/src/praisonai-agents
```

2. Install dependencies:

```bash
pip install praisonaiagents agno crewai pydantic-ai openai-agents langgraph
```

3. Set your API key:

```bash
export OPENAI_API_KEY=your_key
```

4. Run benchmarks:

```bash
python benchmarks/simple_benchmark.py
python benchmarks/tools_benchmark.py
python benchmarks/execution_benchmark.py
```