PraisonAI provides two distinct execution modes: Autonomy (multi-turn, self-correcting loops) and Interactive (single-turn, human-in-the-loop). Understanding their architecture helps you choose the right approach and extend the tool system correctly.

Execution Flow Comparison

Autonomy Mode Flow

When autonomy=True, the agent runs a multi-turn loop with safety infrastructure.

Interactive Mode Flow

In interactive mode, the human drives the loop — each message is a single-turn chat() call.
Key difference: In autonomy mode, the agent drives the iteration loop internally with safety checks. In interactive mode, the human drives the loop externally by sending messages. Both use the same chat() → tool executor underneath.
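The contrast can be sketched in a few lines of plain Python. This is a conceptual stub, not the actual PraisonAI source — chat() and the helper names here are stand-ins:

```python
def chat(message: str) -> str:
    """Stub standing in for Agent.chat() -> tool executor."""
    return f"reply to: {message}"

# Interactive mode: the human drives the loop -- one chat() per message.
def interactive_turn(user_message: str) -> str:
    return chat(user_message)

# Autonomy mode: the framework drives a multi-turn loop around chat().
# Timeout, doom-loop, and snapshot checks are omitted here for brevity.
def run_autonomous(task: str, max_iterations: int = 20,
                   completion_promise: str = "<promise>DONE</promise>") -> str:
    output = ""
    for _ in range(max_iterations):
        output = chat(task)
        if completion_promise in output:  # completion promise pattern
            break
        task = "Continue the task until done."  # re-inject if not finished
    return output
```

Both paths bottom out in the same chat() call; autonomy mode only adds the loop and checks around it.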

Feature Comparison

| Dimension | Autonomy Mode | Interactive Mode |
| --- | --- | --- |
| Entry point | run_autonomous() | start() / chat() |
| Turns | Multi-turn loop (up to max_iterations) | Single turn per call |
| Self-correction | ✅ Doom loop detection + graduated recovery | ❌ None |
| File safety | ✅ Git snapshots before/after | ❌ None |
| Completion signal | Completion promise pattern (<promise>DONE</promise>) | Return value |
| Stage escalation | direct → heuristic → planned → autonomous | N/A |
| Session persistence | Auto-saves after each iteration | Manual |
| Observability | Built-in event emission | Standard logging |
Key insight: Autonomy mode is a multi-turn orchestration loop around the same chat() method. Both modes use identical tool execution — the difference is the loop and safety infrastructure above it.

Speed Profile

Init Overhead (one-time)

| Component | Cost | Notes |
| --- | --- | --- |
| AutonomyConfig() | ~0ms | Simple dataclass |
| AutonomyTrigger() | ~1-2ms | Lazy imports EscalationTrigger |
| DoomLoopTracker() | ~1ms | Lazy imports DoomLoopDetector |
| FileSnapshot | ~50-200ms | Only if track_changes=True |
| ObservabilityHooks | ~1ms | Only if observe=True |
With autonomy=True (defaults), init overhead is < 5ms since track_changes=False and observe=False by default.
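To opt into the heavier components, flip the corresponding flags. A hypothetical config dict — the key names are assumed to mirror the AutonomyConfig fields above, so verify them against your installed version:

```python
# Hypothetical: key names assumed to match AutonomyConfig fields (verify).
autonomy_config = {
    "track_changes": True,  # enable git FileSnapshot (~50-200ms init)
    "observe": True,        # enable ObservabilityHooks (~1ms init)
}
# agent = Agent(instructions="...", autonomy=autonomy_config)
```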

Per-Iteration Overhead

| Operation | Cost |
| --- | --- |
| Timeout check | ~0μs |
| Doom-loop check | ~0.1ms |
| Action recording | ~0.1ms |
| Completion detection | ~0.1ms |
| Session auto-save | 0-5ms |
| Total | ~0.5ms |
Per-iteration overhead is ~0.5ms — negligible compared to LLM API calls (typically 1-30 seconds each). The real speed difference is the number of LLM calls, not framework overhead.
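A quick back-of-envelope check of that claim:

```python
# Framework overhead as a fraction of a single, optimistically fast LLM call.
overhead_ms = 0.5
llm_call_ms = 1_000  # 1 s; real calls are often 1-30 s
fraction = overhead_ms / llm_call_ms
print(f"{fraction:.2%} of one LLM call")  # 0.05% of one LLM call
```

Even against a 1-second call, the framework accounts for a twentieth of a percent.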

End-to-End Comparison

| Metric | Non-Autonomy | Autonomy |
| --- | --- | --- |
| Minimum LLM calls | 1 | 1 (if completion detected on first turn) |
| Maximum LLM calls | 1 | 20 (default max_iterations) |
| Typical for multi-step task | 1 (all tools in one turn) | 1-3 (re-injects if not "done") |
| Overhead per call | 0 | ~0.5ms |

Important Behavioral Differences

Return type: AutonomyResult.__str__() returns .output, so print(result) works the same. But code that does if result: or len(result) may break — AutonomyResult is always truthy and has no __len__. Use str(result) or result.output.
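A minimal stand-in reproducing the described behavior — illustrative only, the real AutonomyResult carries more fields:

```python
class AutonomyResult:
    """Toy model of the behavior described above: always truthy,
    no __len__, and __str__ delegates to .output."""
    def __init__(self, output: str):
        self.output = output

    def __str__(self) -> str:
        return self.output

result = AutonomyResult("")
print(result)        # prints .output, so print() works unchanged
assert bool(result)  # truthy even when .output is empty!
try:
    len(result)      # breaks: no __len__ is defined
except TypeError:
    pass
```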
Early completion risk: If the LLM says "I'm done searching" mid-task, autonomy mode stops early. For multi-step tasks, use structured completion signals:
Agent(autonomy={"completion_promise": "COMPLETED"})
Approval with autonomy=True: Default level is "suggest", which does NOT auto-approve tools. Only level="full_auto" auto-wires AutoApproveBackend.

Two-Layer Tool Architecture

Tools are provisioned at two independent layers. The CLI wrapper assembles tool lists and passes them as tools=[...] to the SDK’s Agent() constructor.
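A sketch of that assembly — the tool names come from this page, but the wiring below is illustrative, not the actual CLI source:

```python
def get_interactive_tools() -> list[str]:
    """Stand-in for the CLI wrapper's canonical tool provider (13 tools)."""
    basic = ["read_file", "write_file", "list_files",
             "execute_command", "search"]                      # Basic (5)
    lsp = ["lsp_symbols", "lsp_definitions",
           "lsp_references", "lsp_diagnostics"]                # LSP (4)
    acp = ["acp_create_file", "acp_edit_file",
           "acp_delete_file", "acp_execute_command"]           # ACP (4)
    return basic + lsp + acp

tools = get_interactive_tools()  # layer 1: CLI wrapper assembles the list
# Agent(tools=tools)             # layer 2: SDK just receives tools=[...]
```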

What each layer provides

ACP Tools Performance: ACP tools (acp_edit_file, acp_execute_command) go through a complex orchestration flow and can be slow (174s+ per operation). In autonomy mode (--autonomy full_auto), ACP tools are disabled by default for speed. Use the --acp flag to explicitly enable them when needed.

CLI Wrapper (praisonai)

13 tools via get_interactive_tools():
  • ACP (4): create/edit/delete files, execute commands (disabled by default in autonomy)
  • LSP (4): symbols, definitions, references, diagnostics
  • Basic (5): read/write files, list, execute, search
Used by: praisonai tui, praisonai "prompt"

Core SDK (praisonaiagents)

16 tools when autonomy=True via AUTONOMY_PROFILE:
  • file_ops (7): read/write/list/copy/move/delete/info
  • shell (3): execute_command, list_processes, get_system_info
  • web (3): internet_search, search_web, web_crawl
  • code_intelligence (3): ast_grep search/rewrite/scan
Used by: Agent(autonomy=True) in Python

Built-in ToolProfiles

The SDK ships with composable profiles in tools/profiles.py:
| Profile | Tools | Description |
| --- | --- | --- |
| code_intelligence | 3 | ast-grep search, rewrite, scan |
| file_ops | 7 | read, write, list, copy, move, delete, get_file_info |
| shell | 3 | execute_command, list_processes, get_system_info |
| web | 3 | internet_search, search_web, web_crawl |
| code_exec | 4 | execute_code, analyze_code, format_code, lint_code |
| schedule | 3 | schedule_add, schedule_list, schedule_remove |
| autonomy | 16 | Composite: file_ops + shell + web + code_intelligence |
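The autonomy profile's count follows directly from its parts:

```python
# The composite autonomy profile is the union of four smaller profiles.
parts = {"file_ops": 7, "shell": 3, "web": 3, "code_intelligence": 3}
print(sum(parts.values()))  # 16 -- the AUTONOMY_PROFILE tool count
```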

Tools by entry point

| Entry Point | Tools Available | Source |
| --- | --- | --- |
| praisonai tui | 13 (ACP + LSP + Basic) | CLI wrapper |
| praisonai "prompt" | 13 + autonomy (16) | CLI wrapper + SDK |
| praisonai tracker run | 30 (expanded set) | tracker.py |
| Agent(autonomy=True) in Python | 16 (AUTONOMY_PROFILE) | Core SDK |
| Agent() in Python | 0 (user-provided only) | Core SDK |

Design Principles

The current architecture follows a layered separation pattern:
1. Core SDK stays minimal

The praisonaiagents package provides the agent runtime, tool execution, and LLM integration — but does not bundle default tools. This keeps the SDK lightweight and avoids opinionated defaults.
2. CLI wrapper adds batteries

The praisonai wrapper package adds ACP, LSP, file operations, and search tools for CLI users. It assembles toolsets and passes them as tools=[...] to the Agent constructor.
3. Tools are data, not hardcoded

Interactive tools are defined as groups in interactive_tools.py with a TOOL_GROUPS dictionary. New tools are added to a group, and all consumers automatically get them.

Why this is the best approach

The Core SDK has zero dependency on the CLI wrapper. Users embedding praisonaiagents in their own applications bring exactly the tools they need — no surprise defaults, no bloat.
Each entry point assembles its own toolset. praisonai tui loads ACP + LSP + Basic. praisonai tracker run loads a broader set. SDK users pass their own tools. This is intentional — different contexts need different capabilities.
interactive_tools.py with get_interactive_tools() is the canonical provider. Both tui/app.py and main.py call this single function. Adding a new interactive tool means editing one file.

When to use which mode

Use Autonomy Mode

  • Multi-step tasks (refactoring, debugging)
  • Tasks that need self-correction
  • Batch/unattended execution
  • Tasks where you want git safety nets
agent = Agent(
    instructions="Refactor and test",
    autonomy=True,
    tools=[my_tool]
)
agent.start("Refactor the auth module")

Use Interactive Mode

  • Conversational Q&A
  • Quick one-off tasks
  • Human-guided workflows
  • Streaming responses
agent = Agent(
    instructions="Help with coding",
    tools=[my_tool]
)
agent.start("Explain this function")

Extending Tools

Combine built-in profiles or register custom ones:
from praisonaiagents.tools.profiles import (
    resolve_profiles, register_profile, ToolProfile
)

# Combine built-in profiles
tools = resolve_profiles("file_ops", "web", "shell")
agent = Agent(tools=tools)

# Register a custom profile (e.g., from CLI wrapper)
register_profile(ToolProfile(
    name="acp",
    tools=["acp_create_file", "acp_edit_file",
           "acp_delete_file", "acp_execute_command"],
    description="Agentic Change Plan tools",
))

# Now use it alongside built-in profiles
tools = resolve_profiles("autonomy", "acp", "lsp")

Adding tools for CLI users

Add tools to interactive_tools.py:
# In praisonai/cli/features/interactive_tools.py
TOOL_GROUPS = {
    "basic": [read_file, write_file, ...],
    "acp": [acp_create_file, ...],
    "lsp": [lsp_list_symbols, ...],
    "my_group": [my_custom_tool],  # Add new group
}

Adding tools for SDK users

Pass tools directly — the SDK is tool-agnostic:
from praisonaiagents import Agent

def my_tool(query: str) -> str:
    """My custom tool."""
    return "result"

agent = Agent(
    instructions="Use tools wisely",
    tools=[my_tool],
    autonomy=True  # Gets 16 AUTONOMY_PROFILE tools + your tools
)