**Mode matters:** Autonomous loops use `mode="iterative"`, which is the default for `level="full_auto"`. With `autonomy=True` at the default level, mode is `"caller"` (a single chat); set `mode="iterative"` explicitly, or use `level="full_auto"`, to enable loops.

The most common completion path is *tool completion*: the model calls tools to do the work, then produces a summary without requesting more tools. This is the same mechanism ChatGPT and Claude use.
## Key Features

- **Completion Promise**: the agent signals "done" with a promise tag containing the configured text.
- **Context Clearing**: fresh memory each iteration forces file-based state.
- **Doom Loop Detection**: automatically stops on repeated identical actions.
- **Iteration Limits**: prevents runaway execution with a configurable maximum.
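Doom-loop detection can be sketched as a check on the most recent actions. This is a minimal illustration of the idea behind `doom_loop_threshold`, not the library's actual code:

```python
def is_doom_loop(actions: list[str], threshold: int = 3) -> bool:
    """True if the last `threshold` actions are identical (agent is stuck)."""
    if len(actions) < threshold:
        return False
    tail = actions[-threshold:]
    return all(a == tail[0] for a in tail)
```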
## Configuration

### AutonomyConfig Options
| Option | Type | Default | Description |
|---|---|---|---|
| `max_iterations` | `int` | `20` | Maximum loop iterations |
| `completion_promise` | `str` | `None` | Text to detect in promise tags |
| `clear_context` | `bool` | `False` | Clear chat history between iterations |
| `doom_loop_threshold` | `int` | `3` | Repeated actions before stopping |
| `level` | `str` | `"suggest"` | Autonomy level: `suggest`, `auto_edit`, `full_auto` |
| `auto_escalate` | `bool` | `True` | Automatically escalate task complexity |
| `observe` | `bool` | `False` | Emit observability events |
### Using Config Dict
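A config dict mirroring the options table above might look like the following. The keys come from the table; the promise text and how the dict is passed to an agent constructor are assumptions for illustration:

```python
# Keys mirror the AutonomyConfig options above; the exact way this dict
# is handed to an agent is library-specific.
autonomy_config = {
    "max_iterations": 20,
    "completion_promise": "TASK COMPLETE",  # hypothetical promise text
    "clear_context": True,
    "doom_loop_threshold": 3,
    "level": "full_auto",
    "auto_escalate": True,
    "observe": False,
}
```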
## CLI Usage
- Basic
- With Promise
- With Context Clearing
- With Timeout
### CLI Options
| Flag | Description |
|---|---|
| `-n, --max-iterations` | Maximum iterations (default: 10) |
| `-p, --completion-promise` | Promise text to signal completion |
| `-c, --clear-context` | Clear chat history between iterations |
| `-t, --timeout` | Timeout in seconds |
| `-m, --model` | LLM model to use |
| `-v, --verbose` | Show verbose output |
## Result Object
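A plausible minimal shape for the result object, based on the completion reasons documented in this page. This is an illustrative stand-in; field names other than the completion reason are assumptions:

```python
from dataclasses import dataclass

@dataclass
class AutonomyResultSketch:
    """Illustrative stand-in for AutonomyResult; field names are assumptions."""
    completion_reason: str   # e.g. "tool_completion", "promise", "doom_loop"
    iterations: int = 0      # how many loop iterations ran
    output: str = ""         # final summary text from the agent

result = AutonomyResultSketch(completion_reason="tool_completion", iterations=4)
```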
## How Completion Detection Works

### Completion Reasons
**tool_completion** (most common): The model used tools to complete the task and then produced a summary response without requesting more tools. This is the most reliable completion signal; it is the same mechanism used by ChatGPT and all major LLM APIs.

**caller_mode**: The agent was in caller mode (the `autonomy=True` default): a single `chat()` call wrapped in an `AutonomyResult`.

**promise**: Agent output contained a promise tag whose text matched the configured promise.

**goal**: Agent output contained completion keywords such as "task completed" or "done".

**no_tool_calls**: The model produced text without calling any tools for 2+ consecutive turns, indicating it has nothing left to do.

**max_iterations**: The maximum iteration limit was reached without a completion signal.

**doom_loop**: Repeated identical actions were detected (the agent was stuck in a loop).

**timeout**: Execution exceeded the configured timeout.

**needs_help**: Doom-loop recovery was exhausted; the task needs human guidance.

**error**: An error occurred during execution.
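The promise check can be sketched with a regex. The actual tag format is not specified in this page, so the `<promise>` tag name below is an assumption for illustration:

```python
import re

def promise_matched(output: str, completion_promise: str) -> bool:
    """Check whether a promise tag containing the configured text appears.
    The <promise>...</promise> tag name is an assumed format."""
    m = re.search(r"<promise>(.*?)</promise>", output, re.DOTALL)
    return bool(m) and completion_promise in m.group(1)
```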
## Best Practices

**Use Completion Promises**: Always set a `completion_promise` for reliable termination instead of relying on keyword detection.

**Enable Context Clearing for Long Tasks**: Use `clear_context=True` for tasks that should rely on file state rather than conversation memory.

**Set Reasonable Limits**: Configure `max_iterations` based on task complexity; start low and increase if needed.

**Async Execution**: Run autonomous loops asynchronously for concurrent agent execution. When `autonomy=True`, `astart()` automatically routes to the async autonomous loop, enabling true concurrent execution of multiple agents.

**Memory Integration**: Autonomous loops automatically save sessions between iterations when memory is enabled, so progress is persisted even if the loop is interrupted. `_auto_save_session()` is a no-op when memory or `auto_save` is not configured, so there is zero overhead for agents without memory.
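The concurrent-execution pattern can be sketched with plain asyncio, using a stub coroutine in place of a real agent's `astart()`:

```python
import asyncio

async def astart_stub(name: str) -> str:
    """Stand-in for an agent's astart(); a real agent runs its loop here."""
    await asyncio.sleep(0)          # yield control, as a real loop would
    return f"{name}: done"

async def main() -> list[str]:
    # With autonomy=True, each astart() routes to the async autonomous loop,
    # so multiple agents can run concurrently under one event loop.
    return await asyncio.gather(astart_stub("agent-1"), astart_stub("agent-2"))

results = asyncio.run(main())
```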
