Gemini CLI Integration
PraisonAI provides integration with Google’s Gemini CLI for AI-powered code analysis, generation, and refactoring tasks.
Installation
# Install Gemini CLI
npm install -g @google/gemini-cli
# Verify installation
gemini --version
Quick Start
from praisonai.integrations import GeminiCLIIntegration
# Create integration
gemini = GeminiCLIIntegration(
    workspace="/path/to/project",
    model="gemini-2.5-pro"
)
# Execute a coding task
result = await gemini.execute("Analyze this codebase and suggest improvements")
print(result)
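execute() is a coroutine, so it must be awaited from inside an async function. A minimal runnable wrapper, sketched here with nothing beyond the standard library's asyncio:
import asyncio
from praisonai.integrations import GeminiCLIIntegration

async def main():
    gemini = GeminiCLIIntegration(
        workspace="/path/to/project",
        model="gemini-2.5-pro"
    )
    # Execute a coding task and print the result
    result = await gemini.execute("Analyze this codebase and suggest improvements")
    print(result)

asyncio.run(main())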
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| workspace | str | "." | Working directory for CLI execution |
| timeout | int | 300 | Timeout in seconds |
| output_format | str | "json" | Output format: "json", "text", "stream-json" |
| model | str | "gemini-2.5-pro" | Gemini model to use |
| include_directories | list | None | Additional directories to include in context |
| sandbox | bool | False | Run in sandbox mode |
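As an illustration, here is a sketch that combines several of these options in one constructor call (the values are placeholders, not recommendations):
from praisonai.integrations import GeminiCLIIntegration

gemini = GeminiCLIIntegration(
    workspace="/path/to/project",        # working directory for CLI execution
    model="gemini-2.5-pro",              # Gemini model to use
    timeout=600,                         # allow longer runs for large codebases
    output_format="json",                # "json", "text", or "stream-json"
    include_directories=["../lib"],      # extra directories added to context
    sandbox=False                        # sandbox mode off (the default)
)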
Examples
Basic Execution
from praisonai.integrations import GeminiCLIIntegration
gemini = GeminiCLIIntegration(workspace="/project")
result = await gemini.execute("Explain the architecture of this codebase")
print(result)
Model Selection
# Use Gemini 2.5 Flash for faster responses
gemini = GeminiCLIIntegration(
    workspace="/project",
    model="gemini-2.5-flash"
)
result = await gemini.execute("Quick code review")
Multi-Directory Context
gemini = GeminiCLIIntegration(
    workspace="/project/src",
    include_directories=["../lib", "../docs", "../tests"]
)
result = await gemini.execute("Analyze the entire project structure")
With Usage Stats
gemini = GeminiCLIIntegration(workspace="/project")
# Get result with usage statistics
result, stats = await gemini.execute_with_stats("Analyze main.py")
print(f"Result: {result}")
print(f"Stats: {stats}")
# Stats includes token usage, latency, tool calls, etc.
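The exact shape of stats isn't pinned down by this snippet; assuming it mirrors the stats object in the JSON output schema shown below, field access might look like this sketch:
# Hypothetical field access, assuming stats follows the JSON schema documented later
for model_name, model_stats in stats.get("models", {}).items():
    print(f"{model_name}: {model_stats['tokens']['total']} tokens, "
          f"{model_stats['api']['totalLatencyMs']} ms")
print(f"Tool calls: {stats.get('tools', {}).get('totalCalls', 0)}")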
Streaming Output
gemini = GeminiCLIIntegration(workspace="/project")
async for event in gemini.stream("Generate comprehensive documentation"):
    event_type = event.get("type")
    content = event.get("content", "")
    print(f"[{event_type}] {content}")
Use as Agent Tool
from praisonai import Agent
from praisonai.integrations import GeminiCLIIntegration
gemini = GeminiCLIIntegration(
    workspace="/project",
    model="gemini-2.5-pro"
)
# Create tool
tool = gemini.as_tool()
# Use with agent
agent = Agent(
    name="Code Analyst",
    role="Software Architect",
    goal="Analyze and improve code architecture",
    tools=[tool]
)
result = agent.start("Review the codebase and suggest improvements")
Environment Variables
# API Key (required)
export GEMINI_API_KEY=your-key
# or
export GOOGLE_API_KEY=your-key
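Because the key is required, it can help to fail fast when neither variable is set; a small sketch using only the standard library:
import os

if not (os.environ.get("GEMINI_API_KEY") or os.environ.get("GOOGLE_API_KEY")):
    raise RuntimeError("Set GEMINI_API_KEY or GOOGLE_API_KEY before using the integration")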
CLI Flags Used
The integration uses the following Gemini CLI flags:
| Flag | Description |
|---|---|
| -p | Print mode (headless) |
| -m | Model selection |
| --output-format json | JSON output for parsing |
| --include-directories | Include additional directories |
| --sandbox | Run in sandbox mode |
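Roughly, those flags correspond to a headless invocation like the sketch below, assembled with subprocess purely for illustration; the integration's actual internals may differ, and passing the prompt as the argument to -p is an assumption here:
import subprocess

cmd = [
    "gemini",
    "-p", "Analyze the entire project structure",  # print mode (headless); prompt as argument (assumption)
    "-m", "gemini-2.5-pro",                        # model selection
    "--output-format", "json",                     # JSON output for parsing
    "--include-directories", "../lib",             # include additional directories
]
completed = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
print(completed.stdout)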
JSON Output Schema
The JSON output includes:
{
  "response": "The main AI-generated content",
  "stats": {
    "models": {
      "gemini-2.5-pro": {
        "api": {
          "totalRequests": 2,
          "totalErrors": 0,
          "totalLatencyMs": 5053
        },
        "tokens": {
          "prompt": 24939,
          "candidates": 20,
          "total": 25113,
          "cached": 21263
        }
      }
    },
    "tools": {
      "totalCalls": 1,
      "totalSuccess": 1,
      "totalFail": 0
    },
    "files": {
      "totalLinesAdded": 0,
      "totalLinesRemoved": 0
    }
  }
}
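For reference, a short sketch of reading the headline numbers out of a payload with this shape (raw_json is a placeholder name for the CLI's JSON output string):
import json

payload = json.loads(raw_json)
print(payload["response"])
for model_name, model_stats in payload["stats"]["models"].items():
    tokens = model_stats["tokens"]
    print(f"{model_name}: {tokens['total']} tokens ({tokens['cached']} cached)")
print(f"Lines added: {payload['stats']['files']['totalLinesAdded']}")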
Error Handling
from praisonai.integrations import GeminiCLIIntegration
gemini = GeminiCLIIntegration(timeout=120)
try:
    result = await gemini.execute("Complex analysis task")
except TimeoutError:
    print("Task timed out")
except Exception as e:
    print(f"Error: {e}")
Best Practices
- Use gemini-2.5-flash for quick tasks
- Use gemini-2.5-pro for complex analysis
- Include relevant directories for better context
- Use execute_with_stats() to monitor usage
- Set appropriate timeouts for large codebases