Module praisonaiagents.agent.agent
Classes
Agent
The main class representing an AI agent with a specific role, goal, and capabilities.

Parameters

name: str
- Name of the agent
role: str
- Role of the agent
goal: str
- Goal the agent aims to achieve
backstory: str
- Background story of the agent
instructions: Optional[str] = None
- Direct instructions that override role, goal, and backstory when provided
llm: str | Any | None = 'gpt-4o'
- Language model to use
tools: List[Any] | None = None
- List of tools available to the agent
function_calling_llm: Any | None = None
- LLM for function calling
max_iter: int = 20
- Maximum iterations
max_rpm: int | None = None
- Maximum requests per minute
max_execution_time: int | None = None
- Maximum execution time
memory: bool = True
- Enable memory
verbose: bool = True
- Enable verbose output
allow_delegation: bool = False
- Allow task delegation
step_callback: Any | None = None
- Callback for each step
cache: bool = True
- Enable caching
system_template: str | None = None
- System prompt template
prompt_template: str | None = None
- Prompt template
response_template: str | None = None
- Response template
allow_code_execution: bool | None = False
- Allow code execution
max_retry_limit: int = 2
- Maximum retry attempts
respect_context_window: bool = True
- Respect context window size
code_execution_mode: Literal['safe', 'unsafe'] = 'safe'
- Code execution mode
embedder_config: Dict[str, Any] | None = None
- Embedder configuration
knowledge: List[str] | None = None
- List of knowledge sources (file paths, URLs, or text)
knowledge_config: Dict[str, Any] | None = None
- Configuration for knowledge processing
use_system_prompt: bool | None = True
- Use system prompt
markdown: bool = True
- Enable markdown formatting
self_reflect: bool = True
- Enable self-reflection
max_reflect: int = 3
- Maximum reflections
min_reflect: int = 1
- Minimum reflections
reflect_llm: str | None = None
- LLM for reflection
stream: bool = True
- Enable streaming responses from the language model
guardrail: Optional[Union[Callable[['TaskOutput'], Tuple[bool, Any]], str]] = None
- Validation for outputs
handoffs: Optional[List[Union['Agent', 'Handoff']]] = None
- Agents for task delegation
base_url: Optional[str] = None
- Base URL for custom LLM endpoints
reasoning_steps: int = 0
- Number of reasoning steps to extract
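The parameters above can be combined into a basic agent. A minimal construction sketch (the model name is the documented default; running `chat` requires a configured API key):

```python
from praisonaiagents import Agent

# Construct an agent from the core identity parameters.
researcher = Agent(
    name="Researcher",
    role="Senior Research Analyst",
    goal="Uncover recent developments in AI agents",
    backstory="An expert analyst at a technology think tank.",
    llm="gpt-4o",       # default model
    verbose=True,
    self_reflect=True,  # reflect on answers (max_reflect=3 by default)
)

print(researcher.chat("Summarize the latest trends in AI agents."))
```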
Advanced Parameters
Additional Configuration Options
- Alternative to role/goal/backstory. Provide concise instructions for the agent's behaviour and purpose.
- Enable streaming responses from the agent. When True, responses are streamed in real time.
- Output validation for the agent. Can be a function or a natural-language description of requirements.
- List of agents this agent can hand off tasks to. Enables agent-to-agent delegation.
- Custom LLM client instance. Overrides the default client configuration.
- Advanced LLM configuration options. Merged with the default configuration.
- Mark agent as human-controlled. Useful for human-in-the-loop workflows.
- Callback function for processing streamed chunks. Called when chunks are merged.
- Knowledge base for the agent. Can be file paths, URLs, or a Knowledge instance.
- Named knowledge sources to use. References pre-configured knowledge collections.
- Output reasoning steps. When True, the agent explains its thought process.
- Custom API endpoint URL. Overrides default provider endpoints.
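The guardrail option above accepts any callable returning `(bool, Any)`. A validator can therefore be a plain function; the attribute access below (`.raw` for the generated text) is an assumption, since the documented signature only specifies `Callable[['TaskOutput'], Tuple[bool, Any]]`:

```python
from typing import Any, Tuple

def validate_word_count(task_output) -> Tuple[bool, Any]:
    """Guardrail callable: return (passed, result-or-reason).

    Assumes the generated text is available as `.raw`,
    falling back to str() otherwise.
    """
    text = getattr(task_output, "raw", str(task_output))
    if len(text.split()) < 5:
        return False, "Response too short; need at least 5 words."
    return True, text

# The callable (or a natural-language rule string) is then passed
# to the agent:
# agent = Agent(name="Writer", role="Writer", goal="Write summaries",
#               backstory="A careful writer.",
#               guardrail=validate_word_count)
```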
Advanced Examples
Agent with Instructions
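With `instructions`, the role/goal/backstory triple can be replaced by a single directive. A minimal sketch:

```python
from praisonaiagents import Agent

# `instructions` overrides role, goal, and backstory when provided.
assistant = Agent(
    name="Helper",
    instructions="You are a concise assistant. Answer in two sentences or fewer.",
)

print(assistant.chat("What is an AI agent?"))
```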
Agent with Handoffs
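Handoffs let one agent delegate to others. A sketch with two specialist agents and a triage agent that can route to either:

```python
from praisonaiagents import Agent

billing = Agent(
    name="Billing",
    role="Billing Specialist",
    goal="Resolve billing questions",
    backstory="Handles invoices and payment issues.",
)
support = Agent(
    name="Support",
    role="Technical Support",
    goal="Resolve technical issues",
    backstory="Diagnoses and fixes product problems.",
)

# The triage agent can delegate to either specialist via `handoffs`.
triage = Agent(
    name="Triage",
    role="Triage Agent",
    goal="Route each request to the right specialist",
    backstory="Front-line router for customer requests.",
    handoffs=[billing, support],
)
```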
Agent with Knowledge Base
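The `knowledge` parameter accepts file paths, URLs, or raw text. The paths, URL, and `embedder_config` contents below are placeholder assumptions:

```python
from praisonaiagents import Agent

analyst = Agent(
    name="DocsAgent",
    role="Documentation Assistant",
    goal="Answer questions from the product docs",
    backstory="Knows the product documentation inside out.",
    knowledge=["docs/getting-started.md", "https://example.com/faq.html"],
    embedder_config={"provider": "openai"},  # hypothetical embedder settings
)
```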
Agent with Streaming
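Streaming is controlled by the `stream` flag (enabled by default). How chunks are consumed depends on the client; this sketch only shows enabling the flag:

```python
from praisonaiagents import Agent

streamer = Agent(
    name="Streamer",
    role="Narrator",
    goal="Produce long-form answers",
    backstory="A storyteller that streams output as it is generated.",
    stream=True,  # default; with streaming on, tokens arrive incrementally
)
```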
Agent with Custom LLM Configuration
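For a self-hosted or OpenAI-compatible endpoint, combine `llm` with `base_url`. The model string and URL below are placeholder assumptions:

```python
from praisonaiagents import Agent

local = Agent(
    name="LocalAgent",
    role="General Assistant",
    goal="Answer questions using a local model",
    backstory="Runs against a locally hosted LLM.",
    llm="ollama/llama3",                # provider/model string (assumption)
    base_url="http://localhost:11434",  # custom endpoint
)
```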
Methods
chat(self, prompt, temperature=0.2, tools=None, output_json=None)
- Chat with the agent
achat(self, prompt, temperature=0.2, tools=None, output_json=None)
- Async version of the chat method
clean_json_output(self, output: str) → str
- Clean and extract JSON from response text
clear_history(self)
- Clear chat history
execute_tool(self, function_name, arguments)
- Execute a tool dynamically based on the function name and arguments
_achat_completion(self, response, tools)
- Async version of the _chat_completion method
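The exact `clean_json_output` implementation is internal; a minimal, hypothetical sketch of the behaviour it describes (stripping markdown fences around a JSON payload) looks like:

```python
import json
import re

def clean_json_output(output: str) -> str:
    """Strip markdown code fences and surrounding prose, returning bare JSON.

    Hypothetical sketch; the library's internal method may differ.
    """
    # Prefer the contents of a ```json fenced block if one is present.
    fenced = re.search(r"```(?:json)?\s*(.*?)```", output, re.DOTALL)
    if fenced:
        return fenced.group(1).strip()
    return output.strip()

raw = 'Here is the result:\n```json\n{"status": "ok"}\n```'
cleaned = clean_json_output(raw)
json.loads(cleaned)  # parses without error
```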
Async Support
The Agent class provides async support through the following methods:

achat
: Async version of the chat method for non-blocking communication
_achat_completion
: Internal async method for handling chat completions
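A sketch of non-blocking use via `achat` inside an asyncio event loop (running it requires a configured API key):

```python
import asyncio

from praisonaiagents import Agent

agent = Agent(
    name="AsyncHelper",
    role="Assistant",
    goal="Answer questions without blocking the event loop",
    backstory="Used inside an async web application.",
)

async def main():
    # achat mirrors chat() but awaits the completion instead of blocking.
    answer = await agent.achat("Explain non-blocking I/O in one sentence.")
    print(answer)

asyncio.run(main())
```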