Performance Benchmarks

PraisonAI includes a benchmark suite to measure performance with and without the AI SDK loaded.

Quick Start

import { resolveBackend, isAISDKAvailable } from 'praisonai';

// Check AI SDK availability
const available = await isAISDKAvailable();
console.log('AI SDK available:', available);

// Resolve backend and check source
const { provider, source } = await resolveBackend('openai/gpt-4o-mini');
console.log('Backend source:', source); // 'ai-sdk' or 'native'

What’s Measured

Import Time

Measures cold start import time:
  • Core Import: Just the Agent class (no AI SDK)
  • AI SDK Import: Loading the ai package
  • Full Import: Complete praisonai module
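Each of these boils down to timing a dynamic import(). A minimal sketch of the pattern (the real targets are 'praisonai' and 'ai'; a Node built-in is used here only as a stand-in):

```typescript
// Sketch: timing a dynamic import. The benchmark suite applies this to
// 'praisonai' and 'ai'; 'node:path' below is just a stand-in specifier.
async function timeImport(specifier: string): Promise<number> {
  const start = performance.now();
  await import(specifier);
  return performance.now() - start;
}

console.log(`import time: ${(await timeImport('node:path')).toFixed(2)}ms`);
```

Note that module resolution is cached within a process, so only the first import of a given specifier measures a true cold start; run each measurement in a fresh process for stable numbers.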

Memory Usage

Measures heap memory after:
  • Module import
  • Agent creation
  • First LLM call
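One way to take these readings is to sample process.memoryUsage().heapUsed at each checkpoint. A rough sketch (the comments mark where the praisonai steps would go):

```typescript
// Sketch: heap sampling at checkpoints. Run Node with --expose-gc for
// steadier numbers; without that flag, the gc call is a no-op.
function heapMB(): number {
  (globalThis as { gc?: () => void }).gc?.(); // needs --expose-gc; skipped otherwise
  return process.memoryUsage().heapUsed / 1024 / 1024;
}

const baseline = heapMB();
// checkpoint 1: module import would go here
const afterImport = heapMB();
// checkpoint 2: agent creation would go here
const afterAgent = heapMB();

console.log(`import: +${(afterImport - baseline).toFixed(2)}MB`);
console.log(`agent:  +${(afterAgent - afterImport).toFixed(2)}MB`);
```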

Latency

Measures time for:
  • Backend resolution
  • First API call (with real keys)
  • Streaming throughput
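Streaming throughput can be measured by iterating the stream and counting chunks. A sketch with a stand-in stream (any AsyncIterable of text chunks works the same way):

```typescript
// Sketch: chunks and chars per second over an async-iterable stream.
// fakeStream is a placeholder for a real streaming LLM response.
async function* fakeStream(): AsyncGenerator<string> {
  for (let i = 0; i < 50; i++) yield 'token ';
}

let chunks = 0;
let chars = 0;
const start = performance.now();
for await (const chunk of fakeStream()) {
  chunks++;
  chars += chunk.length;
}
// clamp to avoid dividing by ~0 on very fast streams
const seconds = Math.max((performance.now() - start) / 1000, 1e-6);
console.log(`${chunks} chunks, ${(chars / seconds).toFixed(0)} chars/sec`);
```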

Embedding Throughput

Measures vectors per second for batch embeddings.
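Vectors per second is just batch size divided by elapsed wall time. A sketch with a placeholder embedder (the real benchmark would call embedMany instead):

```typescript
// Sketch: vectors/sec for a batch embed. embedBatch is a placeholder that
// returns zero-vectors of a typical dimension (1536).
async function embedBatch(texts: string[]): Promise<number[][]> {
  return texts.map(() => new Array(1536).fill(0));
}

const texts = Array.from({ length: 100 }, () => 'sample text');
const start = performance.now();
const vectors = await embedBatch(texts);
const seconds = Math.max((performance.now() - start) / 1000, 1e-6);
console.log(`${(vectors.length / seconds).toFixed(0)} vectors/sec`);
```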

Running Benchmarks

Programmatic

import { resolveBackend, isAISDKAvailable } from 'praisonai';

async function benchmark() {
  const iterations = 10;
  const times: number[] = [];
  
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await resolveBackend('openai/gpt-4o-mini');
    times.push(performance.now() - start);
  }
  
  const mean = times.reduce((a, b) => a + b) / times.length;
  console.log(`Backend resolution: ${mean.toFixed(2)}ms`);
}

Benchmark Script

Create a benchmark script:
// benchmarks/run.ts
import { Agent } from 'praisonai';
import { embed, embedMany } from 'praisonai';

async function runBenchmarks() {
  console.log('=== PraisonAI Benchmarks ===\n');
  
  // 1. Import time
  console.log('1. Import Time');
  const importStart = performance.now();
  const { resolveBackend } = await import('praisonai');
  console.log(`   Full import: ${(performance.now() - importStart).toFixed(2)}ms`);
  
  // 2. Backend resolution
  console.log('\n2. Backend Resolution');
  const resolveStart = performance.now();
  const { source } = await resolveBackend('openai/gpt-4o-mini');
  console.log(`   Resolution: ${(performance.now() - resolveStart).toFixed(2)}ms`);
  console.log(`   Source: ${source}`);
  
  // 3. Agent creation
  console.log('\n3. Agent Creation');
  const agentStart = performance.now();
  const agent = new Agent({ instructions: 'Test' });
  console.log(`   Creation: ${(performance.now() - agentStart).toFixed(2)}ms`);
  
  // 4. Embedding (if API key available)
  if (process.env.OPENAI_API_KEY) {
    console.log('\n4. Embedding Throughput');
    const texts = Array(10).fill('Test text for embedding');
    const embedStart = performance.now();
    await embedMany(texts);
    const embedTime = performance.now() - embedStart;
    console.log(`   10 embeddings: ${embedTime.toFixed(2)}ms`);
    console.log(`   Throughput: ${(10000 / embedTime).toFixed(2)} vectors/sec`);
  }
  
  console.log('\n=== Complete ===');
}

runBenchmarks().catch(console.error);

Benchmark Results

Typical results on modern hardware:
| Metric             | Without AI SDK | With AI SDK | Overhead |
|--------------------|----------------|-------------|----------|
| Core Import        | ~25ms          | ~25ms       | 0ms      |
| AI SDK Import      | N/A            | ~35ms       | +35ms    |
| Full Import        | ~30ms          | ~60ms       | +30ms    |
| Backend Resolution | ~1ms           | ~5ms        | +4ms     |
| Memory (Agent)     | ~2MB           | ~3MB        | +1MB     |

Key Findings

  1. AI SDK adds ~35ms of import overhead, and only when it is actually loaded
  2. Lazy loading works: the AI SDK is not loaded until it is needed
  3. Memory overhead is minimal: roughly 1MB extra
  4. No runtime penalty: after the initial load, performance is identical

Zero Impact Guarantee

PraisonAI guarantees zero performance impact when the AI SDK is not used:
// This does NOT load AI SDK
import { Agent } from 'praisonai';
const agent = new Agent({ instructions: 'Test' });

// AI SDK only loaded when:
// 1. Backend resolver selects AI SDK
// 2. Embeddings are used
// 3. PRAISONAI_BACKEND=ai-sdk is set
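The lazy-loading behavior described above can be sketched as a cached dynamic import. This is the general pattern, not PraisonAI's actual internals:

```typescript
// Sketch: load a module only on first use and cache it afterwards, so code
// paths that never touch it pay no import cost.
const moduleCache = new Map<string, unknown>();

async function lazyLoad(specifier: string): Promise<unknown> {
  if (!moduleCache.has(specifier)) {
    moduleCache.set(specifier, await import(specifier));
  }
  return moduleCache.get(specifier);
}
```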

Verification

// Check if AI SDK is loaded (match '/ai/' so 'praisonai' itself doesn't count)
const modulesBefore = Object.keys(require.cache).filter(k => k.includes('/ai/'));
console.log('AI SDK modules before:', modulesBefore.length); // 0

// Create agent (doesn't load AI SDK)
const agent = new Agent({ instructions: 'Test' });

const modulesAfter = Object.keys(require.cache).filter(k => k.includes('/ai/'));
console.log('AI SDK modules after:', modulesAfter.length); // Still 0

Environment Control

Force Native Backend

# Skip AI SDK entirely
export PRAISONAI_BACKEND=native

Force AI SDK

# Always use AI SDK (error if not installed)
export PRAISONAI_BACKEND=ai-sdk

Auto Selection (Default)

# Use AI SDK if available, fall back to native
export PRAISONAI_BACKEND=auto
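The three modes can be sketched as a small resolver. This illustrates the selection rules above, not PraisonAI's actual implementation:

```typescript
// Sketch: backend selection driven by PRAISONAI_BACKEND.
type Backend = 'ai-sdk' | 'native';

const AI_SDK_PACKAGE = 'ai';

async function pickBackend(): Promise<Backend> {
  const mode = process.env.PRAISONAI_BACKEND ?? 'auto';
  if (mode === 'native') return 'native';
  try {
    await import(AI_SDK_PACKAGE); // throws if the AI SDK is not installed
    return 'ai-sdk';
  } catch (err) {
    if (mode === 'ai-sdk') throw err; // forced mode: surface the error
    return 'native'; // auto mode: fall back
  }
}
```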

CI/CD Integration

GitHub Actions

- name: Run Benchmarks
  run: |
    npm run benchmark
    
- name: Check Performance Regression
  run: |
    # Fail if import time > 100ms
    node -e "
      const start = Date.now();
      require('praisonai');
      const time = Date.now() - start;
      if (time > 100) process.exit(1);
    "

Benchmark in Tests

describe('Performance', () => {
  it('should import in under 100ms', async () => {
    const start = performance.now();
    await import('praisonai');
    expect(performance.now() - start).toBeLessThan(100);
  });
  
  it('should not load AI SDK unless needed', () => {
    const aiModules = Object.keys(require.cache)
      .filter(k => k.includes('/ai/'));
    expect(aiModules.length).toBe(0);
  });
});