Execution Flow Comparison
Autonomy Mode Flow
When `autonomy=True`, the agent runs a multi-turn loop with safety infrastructure:
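The loop itself is not shown here; as a minimal sketch under stated assumptions (every name below is illustrative, not the SDK's internals), the autonomy flow reduces to "call `chat()` until a completion promise appears or the iteration cap is hit":

```python
import re

MAX_ITERATIONS = 20  # default max_iterations from the comparison below
PROMISE_RE = re.compile(r"<promise>DONE</promise>")

def run_autonomous_sketch(chat, task):
    """Illustrative loop: call chat() until the completion promise
    appears or the iteration cap is reached."""
    history = [task]
    for iteration in range(MAX_ITERATIONS):
        # The real loop also runs timeout, doom-loop, and snapshot
        # checks at this point before each turn.
        response = chat(history)
        history.append(response)
        if PROMISE_RE.search(response):
            return response, iteration + 1  # completion detected
    return history[-1], MAX_ITERATIONS  # gave up at the cap

# Stub model that "finishes" on its third turn:
def fake_chat(history):
    return "working..." if len(history) < 3 else "All set. <promise>DONE</promise>"

result, turns = run_autonomous_sketch(fake_chat, "refactor module X")
# turns == 3; result contains the completion promise
```

The point of the sketch: the agent, not the human, decides when to stop, bounded by `max_iterations`.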
Interactive Mode Flow
In interactive mode, the human drives the loop; each message is a single-turn `chat()` call:
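By contrast, a sketch of the interactive flow (stub names, not the real API) is just one call per human message, with no internal iteration:

```python
def interactive_session(chat, messages):
    """The human drives the loop: each message is exactly one chat()
    call, with no internal retries or safety checks."""
    return [chat(message) for message in messages]

# Stub chat() standing in for the real single-turn call:
replies = interactive_session(lambda m: f"echo: {m}", ["hi", "list files"])
```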
Key difference: In autonomy mode, the agent drives the iteration loop internally with safety checks. In interactive mode, the human drives the loop externally by sending messages. Both use the same `chat()` → tool executor underneath.
Feature Comparison
| Dimension | Autonomy Mode | Interactive Mode |
|---|---|---|
| Entry point | run_autonomous() | start() → chat() |
| Turns | Multi-turn loop (up to max_iterations) | Single turn per call |
| Self-correction | ✅ Doom loop detection + graduated recovery | ❌ None |
| File safety | ✅ Git snapshots before/after | ❌ None |
| Completion signal | Completion promise pattern (`<promise>DONE</promise>`) | Return value |
| Stage escalation | direct → heuristic → planned → autonomous | N/A |
| Session persistence | Auto-saves after each iteration | Manual |
| Observability | Built-in event emission | Standard logging |
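The "doom loop detection" row above can be illustrated with a minimal check; this is assumed logic for illustration, not the SDK's actual DoomLoopDetector:

```python
from collections import deque

class DoomLoopSketch:
    """Flag a doom loop when the same action repeats `threshold`
    times in a row (assumed logic, illustrative only)."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.recent = deque(maxlen=threshold)

    def record(self, action):
        self.recent.append(action)
        # Looping once the window is full and every entry is identical.
        return len(self.recent) == self.threshold and len(set(self.recent)) == 1

detector = DoomLoopSketch(threshold=3)
flags = [detector.record(a)
         for a in ["read f.py", "edit f.py", "edit f.py", "edit f.py"]]
# flags == [False, False, False, True]
```

Once a loop is flagged, the graduated recovery described above can change strategy instead of repeating the failing action.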
Speed Profile
Init Overhead (one-time)
| Component | Cost | Notes |
|---|---|---|
| `AutonomyConfig()` | ~0ms | Simple dataclass |
| `AutonomyTrigger()` | ~1-2ms | Lazy imports EscalationTrigger |
| `DoomLoopTracker()` | ~1ms | Lazy imports DoomLoopDetector |
| `FileSnapshot` | ~50-200ms | Only if `track_changes=True` |
| `ObservabilityHooks` | ~1ms | Only if `observe=True` |
With `autonomy=True` (defaults), init overhead is < 5ms, since `track_changes=False` and `observe=False` by default.
Per-Iteration Overhead
| Operation | Cost |
|---|---|
| Timeout check | ~0μs |
| Doom-loop check | ~0.1ms |
| Action recording | ~0.1ms |
| Completion detection | ~0.1ms |
| Session auto-save | 0-5ms |
| Total | ~0.5ms |
End-to-End Comparison
| Metric | Non-Autonomy | Autonomy |
|---|---|---|
| Minimum LLM calls | 1 | 1 (if completion detected on first turn) |
| Maximum LLM calls | 1 | 20 (default max_iterations) |
| Typical for multi-step task | 1 (all tools in one turn) | 1-3 (re-injects if not “done”) |
| Overhead per call | 0 | ~0.5ms |
Important Behavioral Differences
Approval with `autonomy=True`: The default level is "suggest", which does NOT auto-approve tools. Only `level="full_auto"` auto-wires `AutoApproveBackend`.
Two-Layer Tool Architecture
Tools are provisioned at two independent layers. The CLI wrapper assembles tool lists and passes them as `tools=[...]` to the SDK's `Agent()` constructor.
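A sketch of this layering, using illustrative stubs rather than the real `praisonai`/`praisonaiagents` APIs:

```python
def read_file(path: str) -> str:
    """Basic tool: return a file's contents."""
    with open(path) as f:
        return f.read()

def get_interactive_tools():
    """CLI-wrapper layer: assembles the tool list (13 in the real wrapper)."""
    return [read_file]  # plus ACP, LSP, and the other basic tools

class Agent:
    """SDK layer: uses only the tools the caller passes in."""
    def __init__(self, tools=None):
        self.tools = tools or []  # the SDK bundles no default tools

cli_agent = Agent(tools=get_interactive_tools())  # CLI path: batteries included
sdk_agent = Agent()                               # direct SDK use: no defaults
```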
What each layer provides
CLI Wrapper (praisonai)
13 tools via `get_interactive_tools()`:
- ACP (4): create/edit/delete files, execute commands (disabled by default in autonomy)
- LSP (4): symbols, definitions, references, diagnostics
- Basic (5): read/write files, list, execute, search
Used by: `praisonai tui`, `praisonai "prompt"`
Core SDK (praisonaiagents)
16 tools when `autonomy=True` via `AUTONOMY_PROFILE`:
- file_ops (7): read/write/list/copy/move/delete/info
- shell (3): execute_command, list_processes, get_system_info
- web (3): internet_search, search_web, web_crawl
- code_intelligence (3): ast_grep search/rewrite/scan
Used by: `Agent(autonomy=True)` in Python
Built-in ToolProfiles
The SDK ships with composable profiles in `tools/profiles.py`:
| Profile | Tools | Description |
|---|---|---|
| code_intelligence | 3 | ast-grep search, rewrite, scan |
| file_ops | 7 | read, write, list, copy, move, delete, get_file_info |
| shell | 3 | execute_command, list_processes, get_system_info |
| web | 3 | internet_search, search_web, web_crawl |
| code_exec | 4 | execute_code, analyze_code, format_code, lint_code |
| schedule | 3 | schedule_add, schedule_list, schedule_remove |
| autonomy | 16 | Composite: file_ops + shell + web + code_intelligence |
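The composite `autonomy` row follows arithmetically from its parts; modeling profiles as plain name lists (a simplification for illustration, not the real `tools/profiles.py`):

```python
# Tool names are taken from the profile table above; representing
# profiles as plain lists is a sketch, not the actual implementation.
FILE_OPS = ["read", "write", "list", "copy", "move", "delete", "get_file_info"]
SHELL = ["execute_command", "list_processes", "get_system_info"]
WEB = ["internet_search", "search_web", "web_crawl"]
CODE_INTELLIGENCE = ["ast_grep_search", "ast_grep_rewrite", "ast_grep_scan"]

# autonomy = file_ops + shell + web + code_intelligence -> 7 + 3 + 3 + 3 = 16
AUTONOMY = FILE_OPS + SHELL + WEB + CODE_INTELLIGENCE
```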
Tools by entry point
| Entry Point | Tools Available | Source |
|---|---|---|
| praisonai tui | 13 (ACP + LSP + Basic) | CLI wrapper |
| praisonai "prompt" | 13 + autonomy (16) | CLI wrapper + SDK |
| praisonai tracker run | 30 (expanded set) | tracker.py |
| Agent(autonomy=True) in Python | 16 (AUTONOMY_PROFILE) | Core SDK |
| Agent() in Python | 0 (user-provided only) | — |
Design Principles
The current architecture follows a layered separation pattern.
Core SDK stays minimal
The `praisonaiagents` package provides the agent runtime, tool execution, and LLM integration, but does not bundle default tools. This keeps the SDK lightweight and avoids opinionated defaults.
CLI wrapper adds batteries
The `praisonai` wrapper package adds ACP, LSP, file operations, and search tools for CLI users. It assembles toolsets and passes them as `tools=[...]` to the Agent constructor.
Why this is the best approach
SDK independence
The Core SDK has zero dependency on the CLI wrapper. Users embedding `praisonaiagents` in their own applications bring exactly the tools they need: no surprise defaults, no bloat.
Composable toolsets
Each entry point assembles its own toolset. `praisonai tui` loads ACP + LSP + Basic. `praisonai tracker run` loads a broader set. SDK users pass their own tools. This is intentional: different contexts need different capabilities.
Single source of truth
`interactive_tools.py` with `get_interactive_tools()` is the canonical provider. Both `tui/app.py` and `main.py` call this single function. Adding a new interactive tool means editing one file.
When to use which mode
Use Autonomy Mode
- Multi-step tasks (refactoring, debugging)
- Tasks that need self-correction
- Batch/unattended execution
- Tasks where you want git safety nets
Use Interactive Mode
- Conversational Q&A
- Quick one-off tasks
- Human-guided workflows
- Streaming responses
Extending Tools
Using ToolProfiles (recommended)
Combine built-in profiles or register custom ones:
Adding tools for CLI users
Add tools to `interactive_tools.py`:
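Assuming `get_interactive_tools()` returns a plain list of callables (a sketch of the pattern, not the file's exact contents), adding a tool means appending one function to the single canonical list:

```python
def word_count(path: str) -> int:
    """Hypothetical new interactive tool: count the words in a file."""
    with open(path) as f:
        return len(f.read().split())

def get_interactive_tools():
    """Sketch of the canonical provider: one list, one place to edit."""
    existing_tools = []  # stands in for the 13 ACP/LSP/basic tools
    return existing_tools + [word_count]
```

Because both `tui/app.py` and `main.py` call this one function, the new tool appears in every interactive entry point with no further wiring.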

