Gemini CLI Integration

PraisonAI provides integration with Google’s Gemini CLI for AI-powered code analysis, generation, and refactoring tasks.

Installation

# Install Gemini CLI
npm install -g @google/gemini-cli

# Verify installation
gemini --version

Quick Start

import asyncio

from praisonai.integrations import GeminiCLIIntegration

async def main():
    # Create the integration
    gemini = GeminiCLIIntegration(
        workspace="/path/to/project",
        model="gemini-2.5-pro"
    )

    # Execute a coding task
    result = await gemini.execute("Analyze this codebase and suggest improvements")
    print(result)

asyncio.run(main())

For brevity, the examples below omit this asyncio boilerplate; run any snippet that uses await inside an async function as shown here.

Configuration Options

Option               Type  Default           Description
workspace            str   "."               Working directory for CLI execution
timeout              int   300               Timeout in seconds
output_format        str   "json"            Output format: "json", "text", or "stream-json"
model                str   "gemini-2.5-pro"  Gemini model to use
include_directories  list  None              Additional directories to include in context
sandbox              bool  False             Run in sandbox mode
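
For reference, here is a constructor call that exercises every option (values are illustrative, not recommendations):

gemini = GeminiCLIIntegration(
    workspace="/path/to/project",    # working directory for CLI execution
    timeout=600,                     # abort the task after 10 minutes
    output_format="json",            # "json", "text", or "stream-json"
    model="gemini-2.5-pro",          # Gemini model to use
    include_directories=["../lib"],  # extra directories for context
    sandbox=True                     # run the CLI in sandbox mode
)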

Examples

Basic Execution

from praisonai.integrations import GeminiCLIIntegration

gemini = GeminiCLIIntegration(workspace="/project")

result = await gemini.execute("Explain the architecture of this codebase")
print(result)

Model Selection

# Use Gemini 2.5 Flash for faster responses
gemini = GeminiCLIIntegration(
    workspace="/project",
    model="gemini-2.5-flash"
)

result = await gemini.execute("Quick code review")

Multi-Directory Context

gemini = GeminiCLIIntegration(
    workspace="/project/src",
    include_directories=["../lib", "../docs", "../tests"]
)

result = await gemini.execute("Analyze the entire project structure")

With Usage Stats

gemini = GeminiCLIIntegration(workspace="/project")

# Get result with usage statistics
result, stats = await gemini.execute_with_stats("Analyze main.py")

print(f"Result: {result}")
print(f"Stats: {stats}")
# Stats includes token usage, latency, tool calls, etc.

Streaming Output

gemini = GeminiCLIIntegration(workspace="/project")

async for event in gemini.stream("Generate comprehensive documentation"):
    event_type = event.get("type")
    content = event.get("content", "")
    print(f"[{event_type}] {content}")

As Agent Tool

from praisonai import Agent
from praisonai.integrations import GeminiCLIIntegration

gemini = GeminiCLIIntegration(
    workspace="/project",
    model="gemini-2.5-pro"
)

# Create tool
tool = gemini.as_tool()

# Use with agent
agent = Agent(
    name="Code Analyst",
    role="Software Architect",
    goal="Analyze and improve code architecture",
    tools=[tool]
)

result = agent.start("Review the codebase and suggest improvements")

Environment Variables

# API Key (required)
export GEMINI_API_KEY=your-key
# or
export GOOGLE_API_KEY=your-key
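
Equivalently, you can set the key from Python before constructing the integration (this assumes the integration passes the process environment through to the CLI it launches):

import os

# Assumption: the spawned Gemini CLI reads GEMINI_API_KEY from the environment
os.environ["GEMINI_API_KEY"] = "your-key"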

CLI Flags Used

The integration uses the following Gemini CLI flags:
Flag                   Description
-p                     Print mode (headless)
-m                     Model selection
--output-format json   JSON output for parsing
--include-directories  Include additional directories
--sandbox              Run in sandbox mode
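
Put together, a headless invocation assembled by the integration looks roughly like this (illustrative; the exact argument formatting, such as how multiple directories are joined, may differ):

gemini -p "Analyze the entire project structure" \
  -m gemini-2.5-pro \
  --output-format json \
  --include-directories ../lib,../docs \
  --sandbox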

JSON Output Schema

The JSON output includes:
{
  "response": "The main AI-generated content",
  "stats": {
    "models": {
      "gemini-2.5-pro": {
        "api": {
          "totalRequests": 2,
          "totalErrors": 0,
          "totalLatencyMs": 5053
        },
        "tokens": {
          "prompt": 24939,
          "candidates": 20,
          "total": 25113,
          "cached": 21263
        }
      }
    },
    "tools": {
      "totalCalls": 1,
      "totalSuccess": 1,
      "totalFail": 0
    },
    "files": {
      "totalLinesAdded": 0,
      "totalLinesRemoved": 0
    }
  }
}
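
Assuming the stats dictionary returned by execute_with_stats() mirrors the stats object above (an assumption worth verifying against your installed version), the headline numbers can be read like so:

gemini = GeminiCLIIntegration(workspace="/project")
result, stats = await gemini.execute_with_stats("Analyze main.py")

model_stats = stats["models"]["gemini-2.5-pro"]
print("Total tokens:", model_stats["tokens"]["total"])        # prompt + candidates
print("Latency (ms):", model_stats["api"]["totalLatencyMs"])
print("Tool calls:", stats["tools"]["totalCalls"])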

Error Handling

from praisonai.integrations import GeminiCLIIntegration

gemini = GeminiCLIIntegration(timeout=120)

try:
    result = await gemini.execute("Complex analysis task")
except TimeoutError:
    print("Task timed out")
except Exception as e:
    print(f"Error: {e}")

Best Practices

  1. Use gemini-2.5-flash for quick tasks
  2. Use gemini-2.5-pro for complex analysis
  3. Include relevant directories for better context
  4. Use execute_with_stats() to monitor usage
  5. Set appropriate timeouts for large codebases (see the combined sketch below)
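
A minimal sketch combining points 2, 4, and 5 (the model choice, timeout value, and task prompt are illustrative):

# Complex analysis of a large codebase: use the pro model and a longer timeout
gemini = GeminiCLIIntegration(
    workspace="/large/project",
    model="gemini-2.5-pro",
    timeout=600
)

result, stats = await gemini.execute_with_stats("Audit the module boundaries")
print(stats)  # check token usage and latency before scaling the task up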