Prerequisites
- Python 3.10 or higher
- PraisonAI Agents package installed
- Gemini API key
- Gemini 2.0+ model (e.g., gemini/gemini-2.0-flash)
Overview
Google Gemini provides three powerful internal tools that are natively supported by the model without requiring external implementations. These tools can be used directly through PraisonAI’s tool system.
Available Internal Tools
- Google Search: Real-time web search with automatic result grounding
- URL Context: Fetch and analyze content from specific URLs
- Code Execution: Execute Python code snippets within the conversation
Quick Start
1. Install Dependencies
Install PraisonAI with LLM support:
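Assuming the PyPI package name used across the PraisonAI docs, with the `[llm]` extra pulling in LiteLLM support:

```shell
pip install "praisonaiagents[llm]"
```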
2. Set API Key
Set your Gemini API key:
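The key is read from the `GEMINI_API_KEY` environment variable (the placeholder below is illustrative):

```shell
# Replace the placeholder with your actual key from Google AI Studio
export GEMINI_API_KEY="your-api-key-here"
```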
3. Create Agent with Internal Tools
Use internal tools in your agent:
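A minimal sketch: internal tools are passed as plain dicts in the `tools` list. The instructions and prompt text are illustrative, and the call is guarded so the script is import-safe without an API key:

```python
import os

# Gemini internal tools are declared as dicts, not Python callables
tools = [{"googleSearch": {}}, {"urlContext": {}}, {"codeExecution": {}}]

if os.environ.get("GEMINI_API_KEY"):  # skipped when no key is configured
    from praisonaiagents import Agent  # requires: pip install "praisonaiagents[llm]"

    agent = Agent(
        instructions="You are a helpful research assistant.",
        llm="gemini/gemini-2.0-flash",
        tools=tools,
    )
    print(agent.start("Summarize today's top AI news."))
```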
Individual Tool Examples
Google Search Tool
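A hedged sketch of a search-grounded agent (the prompt and instructions are illustrative):

```python
import os

search_tool = {"googleSearch": {}}  # enables Gemini's native search grounding

if os.environ.get("GEMINI_API_KEY"):  # needs a real key to actually run
    from praisonaiagents import Agent

    agent = Agent(
        instructions="Answer using up-to-date web information.",
        llm="gemini/gemini-2.0-flash",
        tools=[search_tool],
    )
    print(agent.start("What is the latest stable Python release?"))
```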
Use Google Search for real-time information retrieval.
URL Context Tool
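A sketch of the URL context tool, which lets Gemini fetch and read URLs mentioned in the prompt (the URL shown is just an example):

```python
import os

url_tool = {"urlContext": {}}  # Gemini fetches URLs found in the prompt

if os.environ.get("GEMINI_API_KEY"):  # skipped without a configured key
    from praisonaiagents import Agent

    agent = Agent(
        instructions="Summarize the pages the user links to.",
        llm="gemini/gemini-2.0-flash",
        tools=[url_tool],
    )
    print(agent.start("Summarize https://ai.google.dev/gemini-api/docs"))
```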
Analyze content from specific web pages.
Code Execution Tool
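A sketch of the code execution tool; the generated Python runs in Gemini's sandbox, not on your machine (prompt text is illustrative):

```python
import os

code_tool = {"codeExecution": {}}  # Gemini runs generated Python in its own sandbox

if os.environ.get("GEMINI_API_KEY"):  # guarded: requires a real key
    from praisonaiagents import Agent

    agent = Agent(
        instructions="Use code execution for any numeric work.",
        llm="gemini/gemini-2.0-flash",
        tools=[code_tool],
    )
    print(agent.start("Compute the standard deviation of [3, 7, 7, 19]."))
```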
Execute Python code for calculations and data analysis.
Combined Tools Example
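Any subset of the internal tools can share one `tools` list; a sketch combining search with code execution (the task wording is illustrative):

```python
import os

# Any subset of the three internal tools can be combined
tools = [{"googleSearch": {}}, {"codeExecution": {}}]

if os.environ.get("GEMINI_API_KEY"):  # skipped without a key
    from praisonaiagents import Agent

    agent = Agent(
        instructions="Research figures online, then verify the arithmetic with code.",
        llm="gemini/gemini-2.0-flash",
        tools=tools,
    )
    print(agent.start("Find France's GDP for 2020 and 2023 and compute the growth rate."))
```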
Use multiple internal tools together for complex tasks.
Multi-Agent System with Internal Tools
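A sketch assuming the `PraisonAIAgents` orchestrator class from the same package; each agent is given only the internal tool it needs:

```python
import os

# Each agent gets only the internal tool it needs
researcher_tools = [{"googleSearch": {}}]
analyst_tools = [{"codeExecution": {}}]

if os.environ.get("GEMINI_API_KEY"):  # skipped without a configured key
    from praisonaiagents import Agent, PraisonAIAgents

    researcher = Agent(
        instructions="Gather recent facts about the topic.",
        llm="gemini/gemini-2.0-flash",
        tools=researcher_tools,
    )
    analyst = Agent(
        instructions="Check the numbers in the research with Python code.",
        llm="gemini/gemini-2.0-flash",
        tools=analyst_tools,
    )
    PraisonAIAgents(agents=[researcher, analyst]).start()
```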
Create a multi-agent system where different agents use different internal tools.
Mixing Internal and External Tools
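Internal tool dicts and ordinary Python callables can sit in the same `tools` list; `get_stock_price` below is a hypothetical external tool invented for illustration:

```python
import os

def get_stock_price(symbol: str) -> str:
    """Hypothetical external tool: replace with a real data source."""
    return f"{symbol}: 123.45"

# Internal tool dicts and ordinary Python callables share one list
tools = [{"googleSearch": {}}, get_stock_price]

if os.environ.get("GEMINI_API_KEY"):  # guarded: needs a real key
    from praisonaiagents import Agent

    agent = Agent(
        instructions="Use search for news and get_stock_price for quotes.",
        llm="gemini/gemini-2.0-flash",
        tools=tools,
    )
    print(agent.start("What is AAPL trading at, and is there any related news?"))
```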
Combine Gemini’s internal tools with custom external tools.
How It Works
Tool Definition Syntax
Gemini internal tools use a special dictionary syntax rather than Python functions.
Integration Flow
- Tool Definition: Define tools using the special internal tool syntax
- Pass-Through: PraisonAI passes these tools directly to LiteLLM
- Execution: LiteLLM sends them to Gemini as internal tool configurations
- Results: Gemini executes the tools natively and returns integrated responses
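The dictionary syntax and the pass-through step above can be sketched in plain Python; `is_internal_tool` is an illustrative helper written here, not PraisonAI's actual code:

```python
# The three supported internal tools, keyed by their Gemini config name.
# The empty dict is the tool's configuration; most use cases need no options.
GOOGLE_SEARCH = {"googleSearch": {}}
URL_CONTEXT = {"urlContext": {}}
CODE_EXECUTION = {"codeExecution": {}}

def is_internal_tool(tool: object) -> bool:
    """Sketch of the pass-through check: dict-shaped tools are forwarded
    to LiteLLM untouched, while callables get schema-wrapped instead."""
    return isinstance(tool, dict) and any(
        key in tool for key in ("googleSearch", "urlContext", "codeExecution")
    )

print(is_internal_tool(GOOGLE_SEARCH))  # → True
print(is_internal_tool(print))          # → False
```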
Benefits of Internal Tools
Native Integration
- No external API calls required
- Seamless integration with Gemini’s capabilities
- Optimized for performance
Automatic Grounding
- Search results are automatically integrated into responses
- Context-aware information retrieval
- Source attribution built-in
Security
- Code execution is sandboxed within Gemini’s environment
- No local code execution risks
- Controlled access to resources
No Rate Limits
- No separate API rate limits
- Included in Gemini API quota
- Simplified billing
Best Practices
Model Selection
Use Gemini 2.0+ models for internal tools:
- gemini/gemini-2.0-flash (recommended)
- gemini/gemini-2.0-flash-thinking-exp
- Other Gemini 2.0+ models
Error Handling
Always handle potential errors:
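One pattern is to wrap the call in a try/except, since provider errors (bad key, unsupported model, regional blocks) surface as exceptions; `safe_start` is a name invented here, not a PraisonAI API:

```python
def safe_start(agent, prompt: str):
    """Illustrative wrapper: returns the response, or None on failure."""
    try:
        return agent.start(prompt)
    except Exception as exc:  # provider errors are raised from agent.start()
        print(f"Gemini request failed: {exc}")
        return None
```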
Debugging
Enable verbose mode for debugging:
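A sketch assuming the Agent constructor accepts a `verbose` flag (guarded so it is import-safe without an API key):

```python
import os

# Assumption: praisonaiagents' Agent accepts a `verbose` flag that logs
# tool calls and LLM traffic for debugging
agent_kwargs = dict(
    instructions="You are a helpful assistant.",
    llm="gemini/gemini-2.0-flash",
    tools=[{"googleSearch": {}}],
    verbose=True,
)

if os.environ.get("GEMINI_API_KEY"):  # skipped without a configured key
    from praisonaiagents import Agent
    Agent(**agent_kwargs).start("Test query")
```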
Troubleshooting
- API Key Issues
- Model Support
- Regional Restrictions
Problem: API key not recognized
Solution: Ensure the environment variable is set correctly:
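For example, re-export the key and confirm the current shell can see it (the placeholder value is illustrative):

```shell
# Replace the placeholder with your actual key
export GEMINI_API_KEY="your-api-key-here"
# Confirm the variable is visible to the current shell
echo "$GEMINI_API_KEY"
```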
References
- Gemini API: Google Search Grounding
- Gemini API: URL Context
- Gemini API: Code Execution
- LiteLLM Gemini Provider Documentation