Documentation for the praisonaiagents.agent.agent module
Parameters:
name: str
- Name of the agent
role: str
- Role of the agent
goal: str
- Goal the agent aims to achieve
backstory: str
- Background story of the agent
instructions: Optional[str] = None
- Direct instructions that override role, goal, and backstory when provided
llm: str | Any | None = 'gpt-4o'
- Language model to use
tools: List[Any] | None = None
- List of tools available to the agent
function_calling_llm: Any | None = None
- LLM for function calling
max_iter: int = 20
- Maximum iterations
max_rpm: int | None = None
- Maximum requests per minute
max_execution_time: int | None = None
- Maximum execution time
memory: bool = True
- Enable memory
verbose: bool = True
- Enable verbose output
allow_delegation: bool = False
- Allow task delegation
step_callback: Any | None = None
- Callback for each step
cache: bool = True
- Enable caching
system_template: str | None = None
- System prompt template
prompt_template: str | None = None
- Prompt template
response_template: str | None = None
- Response template
allow_code_execution: bool | None = False
- Allow code execution
max_retry_limit: int = 2
- Maximum retry attempts
respect_context_window: bool = True
- Respect context window size
code_execution_mode: Literal['safe', 'unsafe'] = 'safe'
- Code execution mode
embedder_config: Dict[str, Any] | None = None
- Embedder configuration
knowledge: List[str] | None = None
- List of knowledge sources (file paths, URLs, or text)
knowledge_config: Dict[str, Any] | None = None
- Configuration for knowledge processing
use_system_prompt: bool | None = True
- Use system prompt
markdown: bool = True
- Enable markdown
self_reflect: bool = True
- Enable self reflection
max_reflect: int = 3
- Maximum reflections
min_reflect: int = 1
- Minimum reflections
reflect_llm: str | None = None
- LLM for reflection
stream: bool = True
- Enable streaming responses from the language model
guardrail: Optional[Union[Callable[['TaskOutput'], Tuple[bool, Any]], str]] = None
- Validation for outputs
handoffs: Optional[List[Union['Agent', 'Handoff']]] = None
- Agents for task delegation
base_url: Optional[str] = None
- Base URL for custom LLM endpoints
reasoning_steps: int = 0
- Number of reasoning steps to extract

Methods:
chat(self, prompt, temperature=0.2, tools=None, output_json=None)
- Chat with the agent
achat(self, prompt, temperature=0.2, tools=None, output_json=None)
- Async version of the chat method for non-blocking communication
clean_json_output(self, output: str) → str
- Clean and extract JSON from response text
clear_history(self)
- Clear chat history
execute_tool(self, function_name, arguments)
- Execute a tool dynamically based on the function name and arguments
_achat_completion(self, response, tools)
- Internal async version of the _chat_completion method for handling chat completions
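The guardrail parameter accepts either a string description or a callable that receives a TaskOutput and returns a (success, result) tuple. A minimal sketch of such a validator, using a stand-in TaskOutput class (the real class lives in praisonaiagents and carries more fields):

```python
from dataclasses import dataclass
from typing import Any, Tuple


@dataclass
class TaskOutput:
    """Stand-in for the library's TaskOutput; only the raw text is modeled here."""
    raw: str


def non_empty_guardrail(output: TaskOutput) -> Tuple[bool, Any]:
    """Follow the documented contract: return (success, result-or-error-message)."""
    if not output.raw.strip():
        return False, "Output was empty; please retry."
    return True, output.raw


ok, value = non_empty_guardrail(TaskOutput(raw="Paris"))
```

A failing guardrail would typically cause the agent to retry (bounded by max_retry_limit).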
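The exact implementation of clean_json_output is not shown here, but extracting JSON from a model response typically means stripping markdown code fences and trimming surrounding prose. A hypothetical sketch of that approach (not the library's code):

```python
import json
import re


def clean_json_output(output: str) -> str:
    """Illustrative sketch: pull a JSON payload out of an LLM response string."""
    text = output.strip()
    # If the JSON is wrapped in a ```json ... ``` (or bare ```) fence, unwrap it.
    fence = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    if fence:
        text = fence.group(1).strip()
    # Trim any prose before the first brace/bracket and after the last one.
    starts = [i for i in (text.find("{"), text.find("[")) if i != -1]
    end = max(text.rfind("}"), text.rfind("]"))
    if starts and end != -1:
        text = text[min(starts):end + 1]
    return text


raw = 'Sure! Here you go:\n```json\n{"city": "Paris", "ok": true}\n```'
parsed = json.loads(clean_json_output(raw))
```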
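execute_tool dispatches by function name to one of the agent's registered tools. A simplified, standalone sketch of that kind of dynamic dispatch (the library's own method also handles tool objects and error reporting, which are omitted here):

```python
from typing import Any, Callable, Dict, List


def execute_tool(tools: List[Callable[..., Any]], function_name: str,
                 arguments: Dict[str, Any]) -> Any:
    """Look up a tool callable by name and invoke it with keyword arguments."""
    for tool in tools:
        if getattr(tool, "__name__", None) == function_name:
            return tool(**arguments)
    raise ValueError(f"Tool {function_name!r} is not registered")


def get_weather(city: str) -> str:
    # Hypothetical example tool; real tools would call an API or run logic.
    return f"Sunny in {city}"


result = execute_tool([get_weather], "get_weather", {"city": "Paris"})
```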