Module praisonaiagents.agent.agent

Classes

Agent

The main class representing an AI agent with a specific role, goal, and capabilities.

Parameters

  • name: str - Name of the agent
  • role: str - Role of the agent
  • goal: str - Goal the agent aims to achieve
  • backstory: str - Background story of the agent
  • instructions: Optional[str] = None - Direct instructions that override role, goal, and backstory when provided
  • llm: str | Any | None = 'gpt-4o' - Language model to use
  • tools: List[Any] | None = None - List of tools available to the agent
  • function_calling_llm: Any | None = None - LLM for function calling
  • max_iter: int = 20 - Maximum iterations
  • max_rpm: int | None = None - Maximum requests per minute
  • max_execution_time: int | None = None - Maximum execution time
  • memory: bool = True - Enable memory
  • verbose: bool = True - Enable verbose output
  • allow_delegation: bool = False - Allow task delegation
  • step_callback: Any | None = None - Callback for each step
  • cache: bool = True - Enable caching
  • system_template: str | None = None - System prompt template
  • prompt_template: str | None = None - Prompt template
  • response_template: str | None = None - Response template
  • allow_code_execution: bool | None = False - Allow code execution
  • max_retry_limit: int = 2 - Maximum retry attempts
  • respect_context_window: bool = True - Respect context window size
  • code_execution_mode: Literal['safe', 'unsafe'] = 'safe' - Code execution mode
  • embedder_config: Dict[str, Any] | None = None - Embedder configuration
  • knowledge: List[str] | None = None - List of knowledge sources (file paths, URLs, or text)
  • knowledge_config: Dict[str, Any] | None = None - Configuration for knowledge processing
  • use_system_prompt: bool | None = True - Use system prompt
  • markdown: bool = True - Enable markdown
  • self_reflect: bool = True - Enable self reflection
  • max_reflect: int = 3 - Maximum reflections
  • min_reflect: int = 1 - Minimum reflections
  • reflect_llm: str | None = None - LLM for reflection
  • stream: bool = True - Enable streaming responses from the language model
  • guardrail: Optional[Union[Callable[['TaskOutput'], Tuple[bool, Any]], str]] = None - Validation for outputs
  • handoffs: Optional[List[Union['Agent', 'Handoff']]] = None - Agents for task delegation
  • base_url: Optional[str] = None - Base URL for custom LLM endpoints
  • reasoning_steps: int = 0 - Number of reasoning steps to extract
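
For example, a minimal agent built from the core parameters above (the model name and settings here are illustrative, not required):

from praisonaiagents import Agent

agent = Agent(
    name="Researcher",
    role="Research Analyst",
    goal="Summarize recent AI developments",
    backstory="A meticulous analyst who double-checks claims",
    llm="gpt-4o",       # default model; any supported model string works
    self_reflect=True,  # enable the reflection loop (max_reflect/min_reflect apply)
)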

Advanced Parameters

Additional configuration options:

  • instructions: str - Alternative to role/goal/backstory. Provide concise instructions for the agent's behavior and purpose.
  • stream: bool = True - Enable streaming responses from the agent. When True, responses are streamed in real time.
  • guardrail: Union[Callable, str] - Output validation for the agent. Can be a function or a natural-language description of the requirements (see the sketch after this list).
  • handoffs: List[Union[Agent, Handoff]] - List of agents this agent can hand off tasks to. Enables agent-to-agent delegation.
  • client: Any - Custom LLM client instance. Overrides the default client configuration.
  • llm_config: Dict[str, Any] - Advanced LLM configuration options, merged with the default configuration. For example:

    llm_config = {
        "temperature": 0.7,
        "max_tokens": 1000,
        "top_p": 0.9
    }

  • is_human: bool = False - Mark the agent as human-controlled. Useful for human-in-the-loop workflows.
  • on_merge_chunks: Callable - Callback function for processing streamed chunks. Called when chunks are merged.
  • knowledge: Union[List[str], Any] - Knowledge base for the agent. Can be file paths, URLs, or a Knowledge instance.
  • knowledge_sources: List[str] - Named knowledge sources to use. References pre-configured knowledge collections.
  • show_reasoning: bool = False - Output reasoning steps. When True, the agent explains its thought process.
  • base_url: str - Custom API endpoint URL. Overrides the default provider endpoint.
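
A minimal sketch of a callable guardrail. The (TaskOutput) -> Tuple[bool, Any] signature comes from the Parameters list above; that TaskOutput exposes the response text as output.raw is an assumption to verify against your installed version:

def must_cite_source(output):
    # Return (True, output) to accept, (False, reason) to reject and retry
    if "source:" in output.raw.lower():  # output.raw assumed to hold the text
        return True, output
    return False, "Every answer must cite a source."

agent = Agent(
    name="Fact Checker",
    instructions="Answer questions and cite your sources",
    guardrail=must_cite_source  # or a natural-language string describing the rule
)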

Advanced Examples

Agent with Instructions

agent = Agent(
    name="Assistant",
    instructions="You are a helpful AI assistant that provides concise, accurate answers."
)

Agent with Handoffs

billing_agent = Agent(name="Billing", instructions="Handle billing inquiries")
tech_agent = Agent(name="Tech Support", instructions="Solve technical issues")

main_agent = Agent(
    name="Customer Service",
    instructions="Route customers to the right department",
    handoffs=[billing_agent, tech_agent]
)

Agent with Knowledge Base

agent = Agent(
    name="Research Assistant",
    instructions="Answer questions using the knowledge base",
    knowledge=["research_papers/", "data.pdf"],
    knowledge_config={
        "vector_store": {
            "provider": "chroma",
            "config": {"collection_name": "research"}
        }
    }
)

Agent with Streaming

agent = Agent(
    name="Story Writer",
    instructions="Write creative stories",
    stream=True,
    on_merge_chunks=lambda chunks: print(f"Generated: {len(chunks)} chunks")
)

# Streaming response
for chunk in agent.start("Write a short story"):
    print(chunk, end="", flush=True)

Agent with Custom LLM Configuration

agent = Agent(
    name="Precise Assistant",
    instructions="Provide exact, deterministic answers",
    llm_config={
        "temperature": 0,
        "max_tokens": 500,
        "seed": 42
    },
    base_url="https://custom-llm-endpoint.com/v1"
)

Methods

  • chat(self, prompt, temperature=0.2, tools=None, output_json=None) - Chat with the agent
  • achat(self, prompt, temperature=0.2, tools=None, output_json=None) - Async version of the chat method
  • clean_json_output(self, output: str) → str - Clean and extract JSON from response text
  • clear_history(self) - Clear the chat history
  • execute_tool(self, function_name, arguments) - Execute a tool dynamically by function name and arguments
  • _achat_completion(self, response, tools) - Async version of the internal _chat_completion method
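
For example, a sketch of structured output via chat. The schema is illustrative, and output_json accepting a Pydantic model class is an assumption to verify against your version:

from pydantic import BaseModel
from praisonaiagents import Agent

class Summary(BaseModel):
    title: str
    key_points: list[str]

agent = Agent(name="Summarizer", instructions="Summarize text concisely")

result = agent.chat(
    prompt="Summarize the history of transformers in NLP",
    output_json=Summary  # assumed: a Pydantic model class for structured output
)
print(result)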

Async Support

The Agent class provides async support through the following methods:
  • achat: Async version of the chat method for non-blocking communication
  • _achat_completion: Internal async method for handling chat completions
Example usage:

import asyncio
from praisonaiagents import Agent

async def main():
    agent = Agent(
        name="AsyncAgent",
        role="Async Specialist",
        goal="Handle async operations",
        backstory="Expert in async processing"
    )

    # Use async chat (tools and output_json are optional)
    result = await agent.achat(
        prompt="Your prompt here",
        tools=your_tools,        # your list of tool callables
        output_json=your_schema  # your structured-output schema
    )
    print(result)

asyncio.run(main())