Configure your agents with powerful presets and type-safe configuration options.

Quick Start

1. Simple Configuration

import { Agent } from 'praisonai';

const agent = new Agent({
  name: "Assistant",
  instructions: "Be helpful",
  memory: { backend: 'file', autoMemory: true }
});

await agent.start("Hello!");
2. With Presets

import { Agent, OutputPreset, ExecutionPreset } from 'praisonai';

// Inline options are shown here; the imported OutputPreset and ExecutionPreset
// enums (see Enums below) bundle common combinations of the same options.
const agent = new Agent({
  name: "Fast Agent",
  instructions: "Quick responses",
  output: { verbose: true, stream: true },
  execution: { maxIter: 5 }
});

Configuration Interfaces

Memory Configuration

import { MemoryConfig, MemoryBackend } from 'praisonai';

const memoryConfig: MemoryConfig = {
  backend: MemoryBackend.FILE,
  userId: "user-123",
  sessionId: "session-456",
  autoMemory: true,
  history: true,
  historyLimit: 100,
  storePath: "./memory"
};
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| backend | MemoryBackend | 'file' | Storage backend (file, sqlite, redis, postgres) |
| userId | string | undefined | User identifier for memory isolation |
| sessionId | string | undefined | Session identifier |
| autoMemory | boolean | false | Enable automatic memory management |
| history | boolean | true | Store conversation history |
| historyLimit | number | 100 | Maximum history entries |
| storePath | string | './memory' | File storage path |
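
The same options can be passed straight to an agent. A minimal sketch, assuming the Agent accepts a MemoryConfig under the memory key as in the Quick Start; the SQLite backend and the user/session identifiers are illustrative values:

import { Agent, MemoryBackend } from 'praisonai';

// Sketch: per-user memory isolation. userId keeps this user's memories
// separate; historyLimit caps the stored conversation entries.
const supportAgent = new Agent({
  name: "Support",
  instructions: "Help the user with their account",
  memory: {
    backend: MemoryBackend.SQLITE,
    userId: "user-123",
    sessionId: "session-456",
    autoMemory: true,
    historyLimit: 50,
    storePath: "./memory"
  }
});

await supportAgent.start("Where did we leave off?");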

Output Configuration

import { OutputConfig, OutputPreset } from 'praisonai';

const outputConfig: OutputConfig = {
  verbose: true,
  markdown: true,
  stream: true,
  metrics: false,
  reasoningSteps: true
};
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| verbose | boolean | false | Enable verbose logging |
| markdown | boolean | true | Format output as markdown |
| stream | boolean | false | Enable streaming responses |
| metrics | boolean | false | Show performance metrics |
| reasoningSteps | boolean | false | Display reasoning steps |
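
As a sketch of how these flags are typically combined (the split between a chatty development config and a quieter production config is illustrative, not prescribed by the SDK):

import { OutputConfig } from 'praisonai';

// Noisy config for local development: full logging plus reasoning traces.
const devOutput: OutputConfig = {
  verbose: true,
  markdown: true,
  metrics: true,
  reasoningSteps: true
};

// Quieter config for production: stream responses, suppress extra logging.
const prodOutput: OutputConfig = {
  verbose: false,
  stream: true,
  metrics: false
};

const output = process.env.NODE_ENV === 'production' ? prodOutput : devOutput;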

Execution Configuration

import { ExecutionConfig, ExecutionPreset } from 'praisonai';

const executionConfig: ExecutionConfig = {
  maxIter: 10,
  maxRetryLimit: 3,
  maxRpm: 60,
  maxExecutionTime: 300
};
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| maxIter | number | 10 | Maximum iterations |
| maxRetryLimit | number | 3 | Maximum retries on failure |
| maxRpm | number | 60 | Rate limit (requests per minute) |
| maxExecutionTime | number | 300 | Timeout in seconds |
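
A sketch of passing these limits to an agent, assuming the execution key accepts an ExecutionConfig as in the Quick Start; the specific limits are illustrative:

import { Agent, ExecutionConfig } from 'praisonai';

// Conservative limits for a production agent: stay under the provider's
// rate limit and give up after two minutes rather than looping indefinitely.
const productionLimits: ExecutionConfig = {
  maxIter: 8,
  maxRetryLimit: 2,
  maxRpm: 30,
  maxExecutionTime: 120   // seconds
};

const agent = new Agent({
  name: "Production Agent",
  instructions: "Answer customer questions",
  execution: productionLimits
});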

Enums

Memory Backend

import { MemoryBackend } from 'praisonai';

// Available backends
MemoryBackend.FILE      // File-based storage
MemoryBackend.SQLITE    // SQLite database
MemoryBackend.REDIS     // Redis cache
MemoryBackend.POSTGRES  // PostgreSQL database
MemoryBackend.MEM0      // Mem0 integration
MemoryBackend.MONGODB   // MongoDB database
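
A small sketch of selecting a backend at runtime; the MEMORY_BACKEND environment variable is hypothetical, not part of the SDK:

import { MemoryBackend, MemoryConfig } from 'praisonai';

// Hypothetical env switch: use SQLite when requested, otherwise file storage.
const backend = process.env.MEMORY_BACKEND === 'sqlite'
  ? MemoryBackend.SQLITE
  : MemoryBackend.FILE;

const memory: MemoryConfig = { backend, autoMemory: true };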

Output Preset

import { OutputPreset } from 'praisonai';

OutputPreset.SILENT   // No output
OutputPreset.STATUS   // Status updates only
OutputPreset.TRACE    // Trace logging
OutputPreset.VERBOSE  // Full verbose output
OutputPreset.DEBUG    // Debug mode
OutputPreset.STREAM   // Streaming output
OutputPreset.JSON     // JSON format

Execution Preset

import { ExecutionPreset } from 'praisonai';

ExecutionPreset.FAST       // Quick execution (maxIter: 3)
ExecutionPreset.BALANCED   // Balanced (maxIter: 10)
ExecutionPreset.THOROUGH   // Thorough (maxIter: 25)
ExecutionPreset.UNLIMITED  // No limits
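
Best Practices below suggests picking a preset per task. A sketch under the assumption that execution accepts a preset in place of an explicit ExecutionConfig (the Quick Start passes inline objects):

import { Agent, ExecutionPreset } from 'praisonai';

// Assumption: `execution` accepts a preset as well as an ExecutionConfig.
const triageAgent = new Agent({
  name: "Triage",
  instructions: "Classify the request in one sentence",
  execution: ExecutionPreset.FAST       // quick tasks (maxIter: 3)
});

const researchAgent = new Agent({
  name: "Researcher",
  instructions: "Investigate the topic in depth",
  execution: ExecutionPreset.THOROUGH   // complex research (maxIter: 25)
});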

Common Patterns

import { Agent, PraisonConfig } from 'praisonai';

const config: PraisonConfig = {
  defaults: { llm: 'gpt-4o-mini', temperature: 0.7 },
  memory: { backend: 'file', autoMemory: true },
  output: { verbose: true, stream: true },
  execution: { maxIter: 10 },
  caching: { enabled: true, ttl: 3600 }
};

const agent = new Agent({
  name: "Configured Agent",
  instructions: "Be helpful",
  ...config
});
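
The same shared config object can be reused across several agents, with per-agent overrides applied after the spread (later properties win in an object literal):

// Continuing from the example above:
const researcher = new Agent({
  name: "Researcher",
  instructions: "Collect and summarise sources",
  ...config,
  execution: { maxIter: 25 }   // override only the execution settings
});

const writer = new Agent({
  name: "Writer",
  instructions: "Turn the findings into a report",
  ...config
});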

Best Practices

Start with presets like ExecutionPreset.FAST for quick tasks or ExecutionPreset.THOROUGH for complex research.
Enable autoMemory: true to automatically persist important context across sessions.
Use maxRpm to prevent API rate limiting, especially in production environments.
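
A sketch of the autoMemory tip in practice, assuming memory saved under a given userId and store path is available to a later agent instance (which is the documented purpose of autoMemory and history):

import { Agent } from 'praisonai';

const memory = { backend: 'file', userId: "user-123", autoMemory: true, storePath: "./memory" };

// First run: autoMemory captures the important context.
const planner = new Agent({ name: "Planner", instructions: "Help plan the trip", memory });
await planner.start("We're planning a trip to Kyoto in May.");

// Later run (new session or process): same userId and store path,
// so the saved context should be available again.
const plannerLater = new Agent({ name: "Planner", instructions: "Help plan the trip", memory });
await plannerLater.start("Remind me where we decided to go?");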