Tools are functions that agents can use to interact with external systems and perform actions. They are essential for creating agents that can do more than just process text.
Code
1. Install PraisonAI

Install the core package:

```bash
pip install praisonaiagents duckduckgo-search
```
2. Configure Environment

```bash
export OPENAI_API_KEY=your_openai_key
```
Generate your OpenAI API key from the OpenAI dashboard. To use other LLM providers such as Ollama, Anthropic, Groq, or Google, refer to the Models documentation.
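As a minimal sketch of switching providers, the model can typically be chosen when the agent is created; the `llm` value below is illustrative, and the exact provider strings are listed in the Models documentation:

```python
from praisonaiagents import Agent

# Illustrative only: the llm model string depends on your provider
# (see the Models documentation for the exact identifiers).
agent = Agent(
    instructions="Perform internet searches to collect relevant information.",
    llm="gpt-4o-mini"
)
```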
3. Create Agent with Tool

Create app.py:

```python
from praisonaiagents import Agent
from duckduckgo_search import DDGS

# 1. Define the tool
def internet_search_tool(query: str):
    results = []
    ddgs = DDGS()
    for result in ddgs.text(keywords=query, max_results=5):
        results.append({
            "title": result.get("title", ""),
            "url": result.get("href", ""),
            "snippet": result.get("body", "")
        })
    return results

# 2. Assign the tool to an agent
search_agent = Agent(
    instructions="Perform internet searches to collect relevant information.",
    tools=[internet_search_tool]  # <--- Tool assignment
)

# 3. Start the agent
search_agent.start("Search about AI job trends in 2025")
```
4. Start Agents

Execute your script:

```bash
python app.py
```
No Code

1. Install PraisonAI

Install the core package and the duckduckgo_search package:

```bash
pip install praisonai duckduckgo_search
```
2. Create Custom Tool

Adding extra tools or features requires a small amount of code, which can be generated with ChatGPT or any other LLM.

Create a new file tools.py with the following content:

```python
from duckduckgo_search import DDGS
from typing import List, Dict

# 1. Tool
def internet_search_tool(query: str) -> List[Dict]:
    """Perform an internet search."""
    results = []
    ddgs = DDGS()
    for result in ddgs.text(keywords=query, max_results=5):
        results.append({
            "title": result.get("title", ""),
            "url": result.get("href", ""),
            "snippet": result.get("body", "")
        })
    return results
```
3. Create Agent

Create a new file agents.yaml with the following content:

```yaml
framework: praisonai
topic: create movie script about cat in mars
agents: # Canonical: use 'agents' instead of 'roles'
  scriptwriter:
    instructions: Expert in dialogue and script structure, translating concepts into scripts. # Canonical: use 'instructions' instead of 'backstory'
    goal: Write a movie script about a cat in Mars
    role: Scriptwriter
    tools:
      - internet_search_tool # <-- Tool assigned to Agent here
    tasks:
      scriptwriting_task:
        description: Turn the story concept into a production-ready movie script, including dialogue and scene details.
        expected_output: Final movie script with dialogue and scene details.
```
In code, the same tool can also drive a multi-agent workflow with explicit Task and Agents objects. Create app.py with the following content:

```python
from praisonaiagents import Agent, Task, Agents
from duckduckgo_search import DDGS

# 1. Tool implementation
def internet_search_tool(query: str):
    results = []
    ddgs = DDGS()
    for result in ddgs.text(keywords=query, max_results=5):
        results.append({
            "title": result.get("title", ""),
            "url": result.get("href", ""),
            "snippet": result.get("body", "")
        })
    return results

# 2. Assign the tool to an agent
data_agent = Agent(
    name="DataCollector",
    role="Search Specialist",
    goal="Perform internet searches to collect relevant information.",
    backstory="Expert in finding and organising internet data.",
    tools=[internet_search_tool],
    self_reflect=False
)

# 3. Task definition
collect_task = Task(
    description="Perform an internet search using the query: 'AI job trends in 2024'. Return results as a list of title, URL, and snippet.",
    expected_output="List of search results with titles, URLs, and snippets.",
    agent=data_agent,
    name="collect_data",
)

# 4. Start the agents
agents = Agents(
    agents=[data_agent],
    tasks=[collect_task],
    process="sequential"
)
agents.start()
```
Execute your script:

```bash
python app.py
```
MCP (Model Context Protocol) lets agents use external tools through a standardized protocol. This is the recommended way to add powerful tools to your agents.
```python
from praisonaiagents import Agent, MCP

# Use MCP tools with environment variables
agent = Agent(
    instructions="Search the web for information",
    tools=MCP(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-brave-search"],
        env={"BRAVE_API_KEY": "your-api-key"}
    )
)

agent.start("Search for AI trends in 2025")
```
Fast Context provides rapid parallel code search for AI agents - 10-20x faster than traditional methods:
```python
from praisonaiagents import Agent

# Enable Fast Context for code search
agent = Agent(
    instructions="You are a code assistant",
    fast_context=True,
    fast_context_path="/path/to/codebase"
)

# Delegate code search to Fast Context
context = agent.delegate_to_fast_context("find authentication handlers")
```
Following these best practices will help you create robust, efficient, and secure tools in PraisonAI.
Design Principles
Single Responsibility
Each tool should have one clear purpose and do it well. Avoid creating tools that try to do too many things.
```python
import numpy as np

# Good: one clear responsibility
def process_image(image: np.ndarray) -> np.ndarray:
    ...  # process and return the image

# Avoid: too many responsibilities in one tool
def process_and_save_and_upload(image):
    pass
```
Clear Interfaces
Define explicit input/output types and maintain consistent parameter naming.
```python
from typing import Any, Dict, List

def search_tool(
    query: str,
    max_results: int = 10
) -> List[Dict[str, Any]]:
    """Search for information with clear parameters."""
    ...
```
Documentation
Always include detailed docstrings and type hints.
```python
from typing import Dict

def analyze_text(
    text: str,
    language: str = "en"
) -> Dict[str, float]:
    """
    Analyze text sentiment and emotions.

    Args:
        text: Input text to analyze
        language: ISO language code

    Returns:
        Dict with sentiment scores
    """
    ...
```
Performance Optimization
Efficient Processing
Optimize resource usage and processing time.
```python
# Use generators for large datasets
def process_large_data():
    for chunk in data_generator():
        yield process_chunk(chunk)
```
Resource Management
Properly handle resource allocation and cleanup.
```python
import asyncio
from typing import List

import aiohttp

# The async context manager handles session setup and cleanup automatically
async def process_safely():
    async with aiohttp.ClientSession() as session:
        await process_data(session)

# The same pattern scales to batched requests
async def fetch_data(urls: List[str]):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_url(session, url) for url in urls]
        return await asyncio.gather(*tasks)
```
Security Best Practices
Input Validation
Always validate and sanitize inputs to prevent security vulnerabilities.
```python
def process_user_input(data: str) -> str:
    if not isinstance(data, str):
        raise ValueError("Input must be string")
    return sanitize_input(data.strip())
```
Rate Limiting
Implement rate limiting for API calls to prevent abuse.
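One simple way to do this is to throttle the tool function itself. The decorator below is a minimal sketch, not part of PraisonAI; the `rate_limited` helper and `call_external_api` tool are illustrative names, and it enforces a minimum interval between calls rather than a full token bucket:

```python
import time
import threading
from functools import wraps

def rate_limited(calls_per_second: float):
    """Allow at most `calls_per_second` invocations of the wrapped function."""
    min_interval = 1.0 / calls_per_second
    lock = threading.Lock()
    last_call = [0.0]

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with lock:
                # Sleep just long enough to respect the minimum interval
                wait = min_interval - (time.monotonic() - last_call[0])
                if wait > 0:
                    time.sleep(wait)
                last_call[0] = time.monotonic()
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical tool: limited to one outbound call per second
@rate_limited(calls_per_second=1.0)
def call_external_api(query: str) -> dict:
    ...
```

For production tools calling shared APIs, a server-side or library-based limiter (for example, respecting the provider's documented quota headers) is usually preferable to an in-process sleep.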