
Agent

The core class for AI agents with tools, memory, knowledge, and handoffs.

Param Cluster Map

| Cluster | Legacy Params | Consolidated To |
|---|---|---|
| Output | verbose, markdown, stream, metrics, reasoning_steps | output= |
| Execution | max_iter, max_rpm, max_execution_time, max_retry_limit | execution= |
| Memory | memory, auto_memory, claude_memory, user_id, session_id, db | memory= |
| Knowledge | knowledge, retrieval_config, embedder_config | knowledge= |
| Planning | planning, plan_mode, planning_tools, planning_reasoning | planning= |
| Reflection | self_reflect, max_reflect, min_reflect, reflect_llm | reflection= |
| Guardrails | guardrail, max_guardrail_retries, policy | guardrails= |
| Web | web_search, web_fetch | web= |
| Templates | system_template, prompt_template, response_template | templates= |
| Caching | cache, prompt_caching | caching= |
| LLM | llm, llm_config, function_calling_llm | llm= |
Note: base_url and api_key remain separate (connection/auth constraint).
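As a sketch of what this consolidation looks like in practice (config names are taken from the cluster map above; exact field names may differ):

```python
from praisonaiagents import Agent, ExecutionConfig

# Before: standalone legacy params (still work, but emit DeprecationWarning)
agent = Agent(
    instructions="You are a helpful assistant",
    verbose=True,                      # Output cluster
    max_iter=20, max_retry_limit=2,    # Execution cluster
)

# After: consolidated params
agent = Agent(
    instructions="You are a helpful assistant",
    output="verbose",
    execution=ExecutionConfig(max_iter=20, max_retry_limit=2),
)
```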

Precedence Ladder

Instance > Config > Array > Dict > String > Bool > Default

Quick Start

```python
from praisonaiagents import Agent

agent = Agent(instructions="You are a helpful assistant")
response = agent.start("Hello!")
```

Parameters Table

Core Identity

| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | None | Agent name for identification |
| role | str | None | Role/job title defining expertise |
| goal | str | None | Primary objective |
| backstory | str | None | Background context |
| instructions | str | None | 💡 Canonical - Direct instructions (overrides role/goal/backstory) |

LLM Configuration

| Parameter | Type | Default | Description |
|---|---|---|---|
| llm | str \| Any | None | ✅ Model name ("gpt-4o") or LLM object |
| model | str \| Any | None | Alias for llm= |
| base_url | str | None | Custom endpoint URL (kept separate) |
| api_key | str | None | API key (kept separate) |
| function_calling_llm | Any | None | ⚠️ Deprecated - use llm= |
| llm_config | Dict | None | ⚠️ Deprecated - use llm= |

Tools & Capabilities

| Parameter | Type | Default | Description |
|---|---|---|---|
| tools | List[Any] | None | Tools, functions, or MCP instances |
| handoffs | List[Agent \| Handoff] | None | Agents for task delegation |
| allow_delegation | bool | False | ⚠️ Deprecated - use handoffs= |
| allow_code_execution | bool | False | ⚠️ Deprecated - use execution=ExecutionConfig(code_execution=True) |
| code_execution_mode | "safe" \| "unsafe" | "safe" | ⚠️ Deprecated - use execution=ExecutionConfig(code_mode=) |

Deprecated Standalone Params

These still work for backward compatibility but emit DeprecationWarning. Use the consolidated config objects instead.
| Parameter | Type | Default | Replacement |
|---|---|---|---|
| auto_save | str | None | memory=MemoryConfig(auto_save="name") |
| rate_limiter | Any | None | execution=ExecutionConfig(rate_limiter=obj) |
| verification_hooks | List[VerificationHook] | None | autonomy=AutonomyConfig(verification_hooks=[...]) |

Consolidated Feature Params

Each follows the same convention: False = disabled, True = enabled with defaults, config object = custom.
| Parameter | Type | Default | Description |
|---|---|---|---|
| memory | bool \| MemoryConfig | None | Memory system |
| knowledge | bool \| List[str] \| KnowledgeConfig | None | Knowledge sources |
| planning | bool \| PlanningConfig | False | Planning mode |
| reflection | bool \| ReflectionConfig | None | Self-reflection |
| guardrails | bool \| Callable \| GuardrailConfig | None | Output validation |
| web | bool \| WebConfig | None | Web search/fetch |
| context | bool \| ContextConfig | False | Context management |
| autonomy | bool \| Dict \| AutonomyConfig | None | Autonomy settings |
| output | str \| OutputConfig | None | Output preset or config |
| execution | str \| ExecutionConfig | None | Execution preset or config |
| caching | bool \| CachingConfig | None | Caching settings |
| hooks | List \| HooksConfig | None | Event hooks |
| skills | List[str] \| SkillsConfig | None | Agent skills |
| templates | TemplateConfig | None | Template configuration |

Precedence Ladder

Resolution Order: Instance > Config > Array > Dict > String > Bool > Default

When you pass a consolidated param, the resolver checks in this order:
  1. Instance - Already a config object? Use as-is
  2. Config - Dataclass instance? Use as-is
  3. Array - ["preset", {"override": value}]? Apply overrides
  4. Dict - {"key": value}? Convert to config
  5. String - "preset_name" or URL? Look up preset or parse URL
  6. Bool - True? Use defaults. False? Disable
  7. Default - None? Use default value
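The ladder above can be sketched as a plain-Python resolver. This is an illustrative reimplementation, not the library's actual code; the `Config` dataclass and `PRESETS` lookup table are made up for the example:

```python
from dataclasses import dataclass, replace

@dataclass
class Config:
    backend: str = "file"
    enabled: bool = True

# Hypothetical preset table, for illustration only
PRESETS = {"redis": Config(backend="redis")}

def resolve(value, default=None):
    """Walk the ladder: Config > Array > Dict > String > Bool > Default."""
    if isinstance(value, Config):                 # Instance/Config: use as-is
        return value
    if isinstance(value, list):                   # Array: preset + overrides
        base = resolve(value[0])
        return replace(base, **value[1]) if len(value) > 1 else base
    if isinstance(value, dict):                   # Dict: convert to config
        return Config(**value)
    if isinstance(value, str):                    # String: URL or preset name
        if "://" in value:
            return Config(backend=value.split("://", 1)[0])
        return PRESETS[value]
    if value is True:                             # Bool True: defaults
        return Config()
    if value is False:                            # Bool False: disabled
        return Config(enabled=False)
    return default                                # None: fall through to default

print(resolve("redis://localhost:6379").backend)       # redis
print(resolve(["redis", {"enabled": False}]).enabled)  # False
print(resolve(True).backend)                           # file
```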

Usage Forms Table

| Form | Example | When to Use |
|---|---|---|
| Bool | memory=True | Enable with defaults |
| String preset | output="verbose" | Use predefined config |
| URL | memory="redis://localhost" | Backend-specific config |
| Dict | memory={"backend": "redis"} | Custom config as dict |
| Array + overrides | output=["verbose", {"stream": False}] | Preset + customization |
| Config instance | memory=MemoryConfig(backend="redis") | Full control |

Examples for Each Form

```python
from praisonaiagents import Agent, MemoryConfig

# Bool - enable with defaults
agent = Agent(instructions="...", memory=True)

# String preset
agent = Agent(instructions="...", output="verbose")

# URL scheme
agent = Agent(instructions="...", memory="redis://localhost:6379")

# Dict
agent = Agent(instructions="...", memory={"backend": "postgres", "user_id": "u1"})

# Array with overrides
agent = Agent(instructions="...", output=["verbose", {"stream": False}])

# Config instance
agent = Agent(instructions="...", memory=MemoryConfig(backend="redis"))
```

Presets & Options

Output Presets

| Preset | Description | Example |
|---|---|---|
| "silent" | Zero output (default) | |
| "status" | Tool calls + response, no timestamps | ▸ get_weather → Sunny ✓ |
| "trace" | Full trace with timestamps | [14:30:29] ▸ get_weather → Sunny [0.2s] ✓ |
| "debug" | trace + metrics (no boxes) | Timestamps + token counts, cost |
| "verbose" | Rich panels with Markdown | Task + Response panels |
| "stream" | Real-time token streaming | Tokens appear as generated |
| "json" | JSONL events | {"event": "tool_call", ...} |

Aliases: "text", "actions" → "status"; "plain", "minimal" → "silent"; "normal" → "verbose"
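For instance (a usage sketch built from the preset and alias names above):

```python
from praisonaiagents import Agent

# "trace" preset: full tool-call trace with timestamps
agent = Agent(instructions="You are a helpful assistant", output="trace")

# Alias form: "normal" resolves to "verbose"
agent = Agent(instructions="You are a helpful assistant", output="normal")
```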

Execution Presets

| Preset | max_iter | max_retry_limit |
|---|---|---|
| "fast" | 10 | 1 |
| "balanced" | 20 | 2 |
| "thorough" | 50 | 5 |
| "unlimited" | 1000 | 10 |
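A sketch of both the preset form and the array-with-overrides form for execution= (preset values from the table above):

```python
from praisonaiagents import Agent

# "thorough" preset: max_iter=50, max_retry_limit=5
agent = Agent(instructions="Research the topic in depth", execution="thorough")

# Array form: start from the preset, then override individual fields
agent = Agent(
    instructions="Research the topic in depth",
    execution=["thorough", {"max_iter": 30}],
)
```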

Memory Presets

| Preset | Backend |
|---|---|
| "file" | Local file storage |
| "sqlite" | SQLite database |
| "redis" | Redis server |
| "postgres" | PostgreSQL |
| "mongodb" | MongoDB |

Web Presets

| Preset | search | fetch | provider |
|---|---|---|---|
| "duckduckgo" | ✅ | ✅ | DuckDuckGo |
| "tavily" | ✅ | ✅ | Tavily |
| "google" | ✅ | ✅ | Google |
| "search_only" | ✅ | ❌ | Default |
| "fetch_only" | ❌ | ✅ | Default |
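For example (a sketch using the provider preset names above):

```python
from praisonaiagents import Agent

# Web search/fetch via the Tavily provider preset
agent = Agent(instructions="Answer with current information", web="tavily")

# Or enable web capabilities with the default provider
agent = Agent(instructions="Answer with current information", web=True)
```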

Reflection Presets

| Preset | min_iterations | max_iterations |
|---|---|---|
| "minimal" | 1 | 1 |
| "standard" | 1 | 3 |
| "thorough" | 2 | 5 |

Guardrail Presets

| Preset | max_retries | on_fail |
|---|---|---|
| "strict" | 5 | raise |
| "permissive" | 1 | skip |
| "safety" | 3 | retry |
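Combining the two preset families above might look like this; note this assumes reflection= and guardrails= accept preset strings, which the parameter type table lists only as bool or config objects, so treat it as a sketch:

```python
from praisonaiagents import Agent

# "thorough" reflection (2-5 iterations) plus "strict" guardrails
# (5 retries, raise on failure), per the preset tables above
agent = Agent(
    instructions="Produce carefully checked answers",
    reflection="thorough",
    guardrails="strict",
)
```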

Methods

Execution Methods

| Method | Streams by Default | Display | Use Case |
|---|---|---|---|
| start(prompt, **kwargs) | ✅ Yes (in TTY) | ✅ Auto | Interactive/terminal use |
| run(prompt, **kwargs) | ❌ No | ❌ No | Production/scripted use |
| iter_stream(prompt, **kwargs) | ✅ Always | ❌ No | App integration (yields chunks) |
| chat(prompt, ...) | Configurable | Configurable | Low-level execution |

Async Execution Methods

| Method | Streams by Default | Use Case |
|---|---|---|
| astart(prompt, **kwargs) | ✅ Yes (in TTY) | Async interactive |
| arun(prompt, **kwargs) | ❌ No | Async production |
| achat(prompt, ...) | Configurable | Async low-level |

Other Methods

| Method | Description |
|---|---|
| query(question, **kwargs) | Query knowledge base (RAG) |
| retrieve(query, **kwargs) | Retrieve context from knowledge |
| clear_history() | Clear chat history |
| execute_tool(name, args) | Execute a tool dynamically |
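A sketch of the knowledge/housekeeping methods together (the question strings are placeholders):

```python
from praisonaiagents import Agent

agent = Agent(
    instructions="Answer questions using the knowledge base",
    knowledge=["docs/"],
)

# Query the knowledge base directly (RAG)
answer = agent.query("What does the spec say about retries?")

# Retrieve raw context without generating an answer
context = agent.retrieve("retry policy")

# Reset the conversation between sessions
agent.clear_history()
```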

Class Methods

| Method | Description |
|---|---|
| from_template(uri, config, offline, **kwargs) | Create agent from template |
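A hedged sketch of the signature above; the template URI scheme and config keys here are hypothetical placeholders, not documented values:

```python
from praisonaiagents import Agent

agent = Agent.from_template(
    "github:org/repo/agent-template",  # hypothetical template URI
    config={"name": "Support Bot"},    # hypothetical override keys
    offline=False,
)
```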

Common Recipes

Simple Agent with Instructions

```python
from praisonaiagents import Agent

agent = Agent(
    name="Assistant",
    instructions="You are a helpful AI assistant that provides concise, accurate answers."
)
response = agent.start("What is the capital of France?")
print(response)
```

Agent with Tools

```python
from praisonaiagents import Agent, tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Weather in {city}: Sunny, 22°C"

agent = Agent(
    name="Weather Assistant",
    instructions="Help users check the weather",
    tools=[get_weather]
)
response = agent.start("What's the weather in Paris?")
```

Agent with Handoffs

```python
from praisonaiagents import Agent

billing_agent = Agent(name="Billing", instructions="Handle billing inquiries")
tech_agent = Agent(name="Tech Support", instructions="Solve technical issues")

main_agent = Agent(
    name="Customer Service",
    instructions="Route customers to the right department",
    handoffs=[billing_agent, tech_agent]
)
response = main_agent.start("I have a billing question")
```

Agent with Knowledge Base

```python
from praisonaiagents import Agent

agent = Agent(
    name="Research Assistant",
    instructions="Answer questions using the knowledge base",
    knowledge=["research_papers/", "data.pdf"]
)
response = agent.start("Summarize the key findings")
```

Agent with Streaming

Choose the right method based on your use case:

```python
from praisonaiagents import Agent

agent = Agent(
    name="Story Writer",
    instructions="Write creative stories"
)

# Method 1: start() - streams automatically in terminal (TTY)
# Best for interactive/beginner use
for chunk in agent.start("Write a short story"):
    print(chunk, end="", flush=True)

# Method 2: iter_stream() - always streams, no display
# Best for app integration
full_response = ""
for chunk in agent.iter_stream("Write a short story"):
    full_response += chunk
    # Custom processing here

# Method 3: run() - silent, returns result directly
# Best for production/scripted use
result = agent.run("Write a short story")
print(result)
```

You can also control streaming per call, independent of the output preset:

```python
from praisonaiagents import Agent

# Option 1: any preset + stream=True in start() (per-call control)
agent = Agent(
    name="Story Writer",
    instructions="Write creative stories",
    output="verbose"  # Any preset
)

for chunk in agent.start("Write a short story", stream=True):
    print(chunk, end="", flush=True)

# Option 2: silent streaming (no verbose output, clean stream)
agent = Agent(
    name="Writer",
    instructions="You are concise",
    output="silent"
)

for chunk in agent.start("Write a haiku", stream=True):
    print(chunk, end="", flush=True)

# Option 3: collect the full response while streaming
chunks = []
for chunk in agent.start("List 3 tips"):
    chunks.append(chunk)
    print(chunk, end="", flush=True)
full_response = "".join(chunks)
```
Streaming Precedence: start(stream=True/False) overrides OutputConfig.stream.
  • output="stream" sets stream=True by default
  • start(stream=False) can disable streaming even with output="stream"
  • start(stream=True) enables streaming for any preset
When to use which method:
  • output="stream": Agent always streams, no per-call control needed
  • start(stream=True): Control streaming per-call, useful for conditional streaming
  • output="silent" + stream=True: Clean streaming without agent status messages

Agent with Custom LLM

```python
from praisonaiagents import Agent

# Using Ollama
agent = Agent(
    name="Local Assistant",
    instructions="You are a helpful assistant",
    llm="ollama/llama3",
    base_url="http://localhost:11434"
)

# Using Anthropic
agent = Agent(
    name="Claude Assistant",
    instructions="You are a helpful assistant",
    llm="anthropic/claude-3-sonnet-20240229"
)
```

Agent with Consolidated Config

```python
from praisonaiagents import Agent

agent = Agent(
    name="Advanced Agent",
    instructions="You are an advanced assistant",
    output="verbose",           # Output preset
    execution="thorough",       # Execution preset
    memory=True,                # Enable memory with defaults
    planning=True,              # Enable planning mode
    reflection=True,            # Enable self-reflection
)
```

Async Support

The Agent class provides full async support:

```python
import asyncio
from praisonaiagents import Agent

async def main():
    agent = Agent(
        name="AsyncAgent",
        instructions="Handle async operations efficiently"
    )

    # Async chat
    result = await agent.achat("Process this request")
    print(result)

    # Async start
    result = await agent.astart("Another request")
    print(result)

asyncio.run(main())
```

Multi-Agent Safe

Agents are designed to be multi-agent safe. Each agent maintains its own:
  • Chat history
  • Memory instance
  • Knowledge base
  • Session state
Multiple agents can run concurrently without interference.
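Concurrent execution can be sketched with asyncio.gather over the async methods listed earlier (agent names and prompts are placeholders):

```python
import asyncio
from praisonaiagents import Agent

async def main():
    # Each agent keeps its own history, memory, and session state,
    # so concurrent runs do not interfere with one another
    researcher = Agent(name="Researcher", instructions="Research topics")
    writer = Agent(name="Writer", instructions="Write concise summaries")

    results = await asyncio.gather(
        researcher.arun("Find three facts about Mars"),
        writer.arun("Summarize: Mars is the fourth planet from the Sun"),
    )
    print(results)

asyncio.run(main())
```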

See Also