
Documentation Index

Fetch the complete documentation index at: https://docs.praison.ai/llms.txt

Use this file to discover all available pages before exploring further.

PraisonAI Code helps you interact with your whole codebase using the power of AI.

Different User Interfaces:

  • UI: Multi-Agent Systems Interface (https://docs.praison.ai/ui/ui)
  • Chat: Chat with 100+ LLMs, single AI Agent (https://docs.praison.ai/ui/chat)
  • Code: Chat with entire Codebase, single AI Agent (https://docs.praison.ai/ui/code)


Install PraisonAI Code

pip install "praisonai[code]"
export OPENAI_API_KEY=xxxxxxxx

AICoder Component Dependencies

The AICoder UI component requires all of its dependencies to be installed, because every import in the module is unconditional.

Required dependencies:
pip install litellm tavily crawl4ai

Environment variables (optional):
export TAVILY_API_KEY=your-tavily-key  # If using web search

Error behavior: if any dependency is missing, importing the AICoder component raises a ModuleNotFoundError.
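Because the imports are unconditional, you can check up front which dependencies are missing before launching the UI. A minimal sketch (module names taken from the install command above):

```python
import importlib.util

def missing_dependencies(modules):
    """Return the subset of module names that are not importable."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Modules required by the AICoder component, per the install command above.
required = ["litellm", "tavily", "crawl4ai"]
print(missing_dependencies(required))
```

An empty list means all required modules are importable; anything printed is what `pip install` still needs to provide.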
praisonai code
  1. You will be asked for a username and password the first time; the default for both is admin.
  2. Set the model name to gpt-4o-mini in the settings.

Other Models

  • Use 100+ LLMs via LiteLLM
  • Includes Gemini 1.5, with a context length of up to 2 million tokens

To Use Gemini 1.5

  • export GEMINI_API_KEY=xxxxxxxxx
  • praisonai code
  • Set the model name to gemini/gemini-1.5-flash in the settings

Ignore Files

Using .praisonignore

  • Create a .praisonignore file in the root folder of the project
  • Add files to ignore
.*
*.pyc
__pycache__
.git
.gitignore
.vscode
.idea
.DS_Store
.lock
.pyc
.env
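The entries above behave like shell-style glob patterns. A rough sketch of how such an ignore list filters file names, using fnmatch for illustration (PraisonAI's actual matching logic may differ):

```python
import fnmatch

# A subset of the .praisonignore patterns above.
IGNORE = [".*", "*.pyc", "__pycache__", ".git", ".env"]

def is_ignored(name, patterns=IGNORE):
    """True if a file or directory name matches any ignore pattern."""
    return any(fnmatch.fnmatch(name, p) for p in patterns)

print(is_ignored("module.pyc"))  # True: matches *.pyc
print(is_ignored("main.py"))     # False: matches no pattern
```

Note that the leading ".*" pattern covers every dotfile, which is why .env and .git are matched even without their dedicated entries.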

Using settings.yaml

(.praisonignore is preferred)
  • Create a settings.yaml file in the root folder of the project
  • Add the variables below, listing the files to ignore
code:
  ignore_files:
  - ".*"
  - "*.pyc"
  - "__pycache__"
  - ".git"
  - ".gitignore"
  - ".vscode"
  - ".idea"
  - ".DS_Store"
  - ".lock"
  - ".pyc"
  - ".env"

Using .env File

  • Create a .env file in the root folder of the project
  • Add the variable below, listing the files to ignore
PRAISONAI_IGNORE_FILES=".*,*.pyc,__pycache__,.git,.gitignore,.vscode"

Using Environment Variables in the Terminal

export PRAISONAI_IGNORE_FILES=".*,*.pyc,__pycache__,.git,.gitignore,.vscode"
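The variable holds a comma-separated list, so it splits naturally into individual patterns. A minimal sketch of how it might be read (the parsing here is illustrative, not PraisonAI's actual code):

```python
import os

# Same value as the export above.
os.environ["PRAISONAI_IGNORE_FILES"] = ".*,*.pyc,__pycache__,.git,.gitignore,.vscode"

# Split on commas, dropping whitespace and empty entries.
patterns = [p.strip()
            for p in os.environ.get("PRAISONAI_IGNORE_FILES", "").split(",")
            if p.strip()]
print(patterns)
```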

Include Files .praisoninclude

  • Add the files you wish to include in the context
  • This adds the files/folders listed in .praisoninclude to the original context (the files in the folder, minus .gitignore and .praisonignore exclusions)
  • Create a .praisoninclude file in the root folder of the project
  • Add the files to include
projectfiles
docs

Include ONLY these Files .praisoncontext (Context)

  • Add the files you wish to include in the context
  • This includes ONLY the files/folders listed in .praisoncontext in the context
  • Create a .praisoncontext file in the root folder of the project
  • Add the files to include
projectfiles
docs

Set Max Tokens

Note: by default, max tokens is set to 900,000
export PRAISONAI_MAX_TOKENS=1000000
or
  • Create a .env file in the root folder of the project
  • Add the variable below with the required max tokens
  • PRAISONAI_MAX_TOKENS=1000000
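The override-or-default behavior can be sketched in a few lines (illustrative only; this is not PraisonAI's internal code):

```python
import os

# Default stated above.
DEFAULT_MAX_TOKENS = 900_000

def max_tokens_limit(env=os.environ):
    """Read PRAISONAI_MAX_TOKENS if set, otherwise fall back to the default."""
    return int(env.get("PRAISONAI_MAX_TOKENS", DEFAULT_MAX_TOKENS))

print(max_tokens_limit())
```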
    

Default DB Location

~/.praisonai/database.sqlite
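Since this is a plain SQLite file, you can inspect it with Python's sqlite3 module. A sketch that lists its tables, assuming the default path above (the schema itself is not documented here):

```python
import os
import sqlite3

# Default PraisonAI database location, as stated above.
db_path = os.path.expanduser("~/.praisonai/database.sqlite")

# Illustrative only: open read-only and list the tables if the file exists.
if os.path.exists(db_path):
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    print(tables)
    conn.close()
```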

Key Features

PraisonAI Code now includes internet search capabilities using Crawl4AI and Tavily. This feature allows you to retrieve up-to-date information and code snippets during your coding sessions, enhancing your ability to find relevant programming information and examples. To use this feature:
  1. Ask a question or request information about a specific coding topic
  2. The AI will use internet search to find the most relevant and current information
  3. You’ll receive code snippets, documentation references, or explanations based on the latest available resources

Vision Language Model (VLM) Support

While primarily designed for code interactions, PraisonAI Code also supports Vision Language Model capabilities. This feature can be particularly useful when dealing with visual aspects of programming, such as UI design, data visualization, or understanding code structure through diagrams. To use this feature:
  1. Upload an image related to your coding query (e.g., a screenshot of a UI, a flowchart, or a code snippet image)
  2. Ask questions or request analysis based on the uploaded image
  3. The VLM will process the image and provide insights or answers based on its visual content, helping you understand or implement the visual concepts in your code
These new features significantly expand the capabilities of PraisonAI Code, allowing for more comprehensive and up-to-date coding assistance.

External Agents

The PraisonAI Code interface now includes sidebar toggles for external AI coding CLIs. These toggles appear automatically when the corresponding CLI tools are installed and available on your system PATH.

Available External Agents

  • Claude Code Toggle: Enable Claude Code CLI integration for advanced file editing and code analysis
  • Gemini CLI Toggle: Enable Google Gemini CLI for code analysis and search capabilities
  • Codex CLI Toggle: Enable OpenAI Codex CLI for code refactoring and optimization
  • Cursor CLI Toggle: Enable Cursor CLI for IDE-style development tasks

How It Works

  1. Auto-Detection: Toggles only appear for CLIs that are installed and available via shutil.which()
  2. Sidebar Integration: Toggle switches appear in the Code UI sidebar settings
  3. Persistence: Toggle states persist across sessions via Chainlit settings
  4. Dynamic Instructions: Enabled agents modify the assistant’s system instructions
  5. Workspace Context: External agents operate against PRAISONAI_CODE_REPO_PATH
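The auto-detection step above can be sketched with shutil.which. The executable names below are assumptions for illustration; the actual names PraisonAI probes for may differ:

```python
import shutil

# Hypothetical executable names for the four external agent CLIs.
AGENT_CLIS = {"claude": "claude", "gemini": "gemini",
              "codex": "codex", "cursor": "cursor"}

def detect_agents(clis=AGENT_CLIS):
    """Return only the agents whose CLI is found on PATH by shutil.which."""
    return {agent: path for agent, cli in clis.items()
            if (path := shutil.which(cli)) is not None}

print(detect_agents())
```

Agents absent from the returned dict simply get no toggle in the sidebar.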

Backward Compatibility

The new toggles replace the previous single “Enable Claude Code” switch. Legacy settings are automatically migrated:
  • claude_code_enabled setting → claude_enabled
  • PRAISONAI_CLAUDECODE_ENABLED environment variable → claude_enabled
For complete documentation on external agents UI toggles across all PraisonAI interfaces, see External Agents in UI.
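The legacy migration amounts to a key rename plus an environment-variable fallback. A sketch under those assumptions (the actual migration code is internal to PraisonAI):

```python
import os

def migrate_settings(settings):
    """Map the legacy single Claude Code switch onto the new claude_enabled key."""
    migrated = dict(settings)
    if "claude_code_enabled" in migrated and "claude_enabled" not in migrated:
        migrated["claude_enabled"] = migrated.pop("claude_code_enabled")
    # The legacy environment variable also maps to claude_enabled.
    env = os.environ.get("PRAISONAI_CLAUDECODE_ENABLED")
    if env is not None and "claude_enabled" not in migrated:
        migrated["claude_enabled"] = env.lower() in ("1", "true", "yes")
    return migrated

print(migrate_settings({"claude_code_enabled": True}))
```

An already-migrated settings dict passes through unchanged, so the function is safe to run on every startup.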

Local Docker Development with Live Reload

To facilitate local development with live reload, you can use Docker. Follow the steps below:
  1. Create a Dockerfile.dev:
    FROM python:3.11-slim
    
    WORKDIR /app
    
    COPY . .
    
    RUN pip install flask praisonai==2.2.25 watchdog
    
    EXPOSE 5555
    
    ENV FLASK_ENV=development
    
    CMD ["flask", "run", "--host=0.0.0.0"]
    
  2. Create a docker-compose.yml:
    version: '3.8'
    
    services:
      app:
        build:
          context: .
          dockerfile: Dockerfile.dev
        volumes:
          - .:/app
        ports:
          - "5555:5555"
        environment:
          FLASK_ENV: development
        # --debug enables Flask's reloader, which restarts the server
        # whenever a file in the mounted volume changes, so no separate
        # file-watcher container is needed.
        command: flask run --host=0.0.0.0 --debug
    
  3. Run Docker Compose:
    docker-compose up
    
This setup will allow you to develop locally with live reload, making it easier to test and iterate on your code.