Agent Configuration

This page provides detailed documentation for all agent configuration parameters, including execution controls, context management, and output formatting options.

Core Parameters

Execution Control Parameters

max_iter

  • Type: int
  • Default: 15
  • Description: Maximum number of iterations an agent may run while attempting to complete a task
  • Range: 1-100 (recommended: 10-25)
The max_iter parameter controls how many times an agent can iterate through its execution loop when working on a task. This prevents infinite loops and ensures tasks complete within reasonable bounds.
agent = Agent(
    name="Researcher",
    max_iter=20,  # Allow up to 20 iterations
    # other parameters...
)
Best Practices:
  • Set lower values (5-10) for simple tasks
  • Use higher values (20-30) for complex research or analysis tasks
  • Monitor iteration count to optimize performance

max_retry_limit

  • Type: int
  • Default: 2
  • Description: Maximum number of times to retry a failed operation
  • Range: 0-10 (recommended: 2-5)
Controls how many times an agent will retry operations that fail due to transient errors like network issues or API rate limits.
agent = Agent(
    name="DataAnalyst",
    max_retry_limit=3,  # Retry failed operations up to 3 times
    # other parameters...
)
Use Cases:
  • API calls that may face rate limits
  • Network operations that may timeout
  • Tool executions that may fail intermittently

Context Management

context_length

  • Type: int
  • Default: None (uses model default)
  • Description: Maximum context window size in tokens
  • Common Values:
    • GPT-4: 128000
    • GPT-3.5: 16385
    • Claude 3: 200000
Defines the maximum amount of context (in tokens) that can be included in a single LLM request.
agent = Agent(
    name="Analyzer",
    context_length=50000,  # Use 50K token context window
    respect_context_window=True,
    # other parameters...
)
Important Considerations:
  • Larger context windows allow more information per request but increase token costs
  • Some models perform better with a smaller, focused context
  • Always validate against your LLM’s maximum context size (see the sketch below)
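One way to honor that last point is to clamp the requested value against a known per-model limit before constructing the agent. The mapping and helper below are illustrative only and are not part of the library:
MODEL_CONTEXT_LIMITS = {
    # Token limits from the values listed above (illustrative)
    "gpt-4": 128000,
    "gpt-3.5-turbo": 16385,
    "claude-3": 200000,
}

def clamp_context_length(model_name, requested):
    """Return a context_length that never exceeds the model's known maximum."""
    limit = MODEL_CONTEXT_LIMITS.get(model_name)
    return requested if limit is None else min(requested, limit)

agent = Agent(
    name="Analyzer",
    context_length=clamp_context_length("gpt-4", 50000),
    respect_context_window=True,
)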

Output Formatting

markdown

  • Type: bool
  • Default: True
  • Description: Enable markdown formatting in agent outputs
Controls whether the agent formats its responses using markdown syntax for better readability.
agent = Agent(
    name="Reporter",
    markdown=True,  # Enable markdown formatting
    response_template="""
    ## Analysis Report
    
    {content}
    
    ### Key Findings
    {findings}
    """,
    # other parameters...
)
Features When Enabled:
  • Headers using #, ##, etc.
  • Bold text with **text**
  • Code blocks with ```
  • Lists and tables
  • Links and images

Advanced Execution Controls

Performance Optimization

agent = Agent(
    name="HighPerformanceAgent",
    # Execution limits
    max_iter=25,
    max_retry_limit=3,
    max_execution_time=300,  # 5 minutes timeout
    
    # Performance settings
    max_rpm=60,  # Rate limit: 60 requests per minute
    cache=True,  # Enable response caching
    
    # Context optimization
    context_length=100000,
    respect_context_window=True,
    
    # Output control
    markdown=True,
    verbose=False,  # Reduce logging for performance
)

Error Handling Configuration

agent = Agent(
    name="ResilientAgent",
    # Retry configuration
    max_retry_limit=5,
    
    # Error handling
    allow_code_execution=True,
    code_execution_mode="safe",  # Safe mode with restrictions
    
    # Fallback behavior
    llm_config={
        "temperature": 0.7,
        "fallback_models": ["gpt-3.5-turbo", "claude-2"],
        "retry_delay": 2.0,  # Seconds between retries
        "exponential_backoff": True
    }
)
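
With exponential_backoff enabled, the intent is that each retry waits roughly twice as long as the previous one, starting from retry_delay. The helper below is a conceptual sketch of that schedule, not the library's internal implementation:
def backoff_delays(retry_delay, max_retry_limit, exponential=True):
    """Yield the wait (in seconds) before each retry attempt."""
    for attempt in range(max_retry_limit):
        yield retry_delay * (2 ** attempt) if exponential else retry_delay

# retry_delay=2.0 and max_retry_limit=5 give waits of 2.0, 4.0, 8.0, 16.0, 32.0 seconds
print(list(backoff_delays(2.0, 5)))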

Configuration Patterns

Research Agent Configuration

research_agent = Agent(
    name="ResearchSpecialist",
    role="Senior Research Analyst",
    goal="Conduct thorough research and provide comprehensive reports",
    backstory="Expert researcher with 10 years of experience",
    
    # Optimized for long research tasks
    max_iter=30,
    max_retry_limit=5,
    context_length=128000,
    
    # Output preferences
    markdown=True,
    verbose=True,
    stream=True,  # Stream responses
    
    # Quality controls
    self_reflect=True,
    max_reflect=3,
    reasoning_steps=True
)

Quick Task Agent Configuration

quick_agent = Agent(
    name="QuickResponder",
    role="Rapid Task Handler",
    
    # Optimized for speed
    max_iter=5,
    max_retry_limit=1,
    context_length=16000,
    
    # Minimal output
    markdown=False,
    verbose=False,
    stream=False
)

Environment Variable Overrides

Agent parameters can be overridden using environment variables:
# Override default iteration limit
export PRAISONAI_AGENT_MAX_ITER=20

# Override retry limit
export PRAISONAI_AGENT_MAX_RETRY_LIMIT=5

# Set context length
export PRAISONAI_AGENT_CONTEXT_LENGTH=100000

# Disable markdown
export PRAISONAI_AGENT_MARKDOWN=false
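
If you prefer to apply these overrides explicitly in your own setup code, the same variables can be read with the standard library. The reading logic below is a sketch written for this page, not a library feature:
import os

max_iter = int(os.getenv("PRAISONAI_AGENT_MAX_ITER", "15"))
max_retry_limit = int(os.getenv("PRAISONAI_AGENT_MAX_RETRY_LIMIT", "2"))
context_length = os.getenv("PRAISONAI_AGENT_CONTEXT_LENGTH")
markdown = os.getenv("PRAISONAI_AGENT_MARKDOWN", "true").lower() != "false"

agent = Agent(
    name="ConfigurableAgent",
    max_iter=max_iter,
    max_retry_limit=max_retry_limit,
    context_length=int(context_length) if context_length else None,  # None -> model default
    markdown=markdown,
)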

Validation and Constraints

Parameter Validation Rules

Parameter | Validation | Error Handling
max_iter | Must be > 0 | Defaults to 15 if invalid
max_retry_limit | Must be >= 0 | Defaults to 2 if invalid
context_length | Must be > 0 and <= model maximum | Uses model default if invalid
markdown | Must be boolean | Defaults to True if invalid
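
The helper below restates these fallback rules in code for illustration; it is not part of the library API:
def apply_validation_defaults(max_iter=15, max_retry_limit=2, context_length=None, markdown=True):
    """Fall back to the documented defaults when a value is invalid."""
    if not isinstance(max_iter, int) or max_iter <= 0:
        max_iter = 15
    if not isinstance(max_retry_limit, int) or max_retry_limit < 0:
        max_retry_limit = 2
    if context_length is not None and (not isinstance(context_length, int) or context_length <= 0):
        context_length = None  # None means "use the model default"
    if not isinstance(markdown, bool):
        markdown = True
    return dict(max_iter=max_iter, max_retry_limit=max_retry_limit,
                context_length=context_length, markdown=markdown)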

Interaction Between Parameters

  1. Context Length and Max Iterations
    • Higher max_iter may require larger context_length
    • Each iteration adds to context usage
  2. Retry Limit and Execution Time
    • Higher max_retry_limit extends total execution time
    • Consider both when setting timeouts (see the worked example after this list)
  3. Markdown and Response Templates
    • markdown=True enables template formatting
    • Templates should use markdown syntax when enabled
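
The worked example below (illustrative numbers) combines these points: more iterations are paired with a larger context window, and the timeout leaves room for the worst-case retry waits:
# Worst case with max_retry_limit=5 and exponential backoff starting at 2.0 s:
# the waits alone total 2 + 4 + 8 + 16 + 32 = 62 seconds, so a 300-second
# max_execution_time still leaves roughly four minutes for the actual work.
agent = Agent(
    name="BalancedAgent",
    max_iter=25,              # more iterations ...
    context_length=100000,    # ... paired with a larger context window
    max_retry_limit=5,
    max_execution_time=300,
    llm_config={"retry_delay": 2.0, "exponential_backoff": True},
)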

Troubleshooting

Common Issues

  1. Agent exceeds iteration limit
    # Solution: Increase max_iter or optimize task
    agent.max_iter = 30
    
  2. Context window exceeded
    # Solution: Increase context_length or enable truncation
    agent.context_length = 100000
    agent.respect_context_window = True
    
  3. Frequent retry failures
    # Solution: Increase retry limit and add delays
    agent.max_retry_limit = 5
    agent.llm_config["retry_delay"] = 3.0
    

See Also