Overview

Codex CLI is OpenAI’s AI-powered coding assistant that can run commands, edit files, and perform complex coding tasks. PraisonAI integrates with Codex CLI to use it as an external agent.

Installation

# Install via npm
npm install -g @openai/codex

# Or build from source
git clone https://github.com/openai/codex
cd codex/codex-cli
pnpm install && pnpm build

Authentication

Codex uses ChatGPT authentication:

# Login with ChatGPT account
codex login

Or set an OpenAI API key:

export OPENAI_API_KEY=your-api-key

Basic Usage with PraisonAI

# Use Codex as external agent
praisonai "Fix the bug in auth.py" --external-agent codex

# With verbose output
praisonai "Refactor this module" --external-agent codex --verbose

CLI Options Reference

Core Options

| Option | Description |
| --- | --- |
| `[PROMPT]` | Optional user prompt to start the session |
| `-m, --model <MODEL>` | Model the agent should use |
| `-h, --help` | Print help |
| `-V, --version` | Print version |

Configuration

| Option | Description |
| --- | --- |
| `-c, --config <key=value>` | Override config from ~/.codex/config.toml |
| `-p, --profile <PROFILE>` | Configuration profile from config.toml |
| `--enable <FEATURE>` | Enable a feature (repeatable) |
| `--disable <FEATURE>` | Disable a feature (repeatable) |

Sandbox Modes

| Option | Description |
| --- | --- |
| `-s, --sandbox <MODE>` | Sandbox policy for shell commands |

Sandbox mode values:

- `read-only` - Read-only access
- `workspace-write` - Write access to the workspace
- `danger-full-access` - Full system access (dangerous)

Approval Policies

| Option | Description |
| --- | --- |
| `-a, --ask-for-approval <POLICY>` | When to require human approval |
| `--full-auto` | Low-friction sandboxed automatic execution |
| `--dangerously-bypass-approvals-and-sandbox` | Skip all approval prompts and sandboxing (extremely dangerous) |

Approval policy values:

- `untrusted` - Only run trusted commands without approval
- `on-failure` - Ask for approval only when a command fails
- `on-request` - The model decides when to ask
- `never` - Never ask for approval
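When driving Codex from a script, the sandbox and approval flags above are usually combined on a single command line. A minimal sketch of assembling and validating such an invocation (the flag names come from the tables above; the `build_codex_cmd` helper itself is hypothetical, not part of Codex or PraisonAI):

```python
# Sketch: assemble a `codex exec` command line from a sandbox mode and an
# approval policy. The flag names come from the tables above; this helper
# and its validation are illustrative only.
SANDBOX_MODES = {"read-only", "workspace-write", "danger-full-access"}
APPROVAL_POLICIES = {"untrusted", "on-failure", "on-request", "never"}

def build_codex_cmd(prompt, sandbox="read-only", approval="on-request"):
    if sandbox not in SANDBOX_MODES:
        raise ValueError(f"unknown sandbox mode: {sandbox!r}")
    if approval not in APPROVAL_POLICIES:
        raise ValueError(f"unknown approval policy: {approval!r}")
    return ["codex", "exec", "--sandbox", sandbox,
            "--ask-for-approval", approval, prompt]

print(build_codex_cmd("Analyze this codebase"))
# ['codex', 'exec', '--sandbox', 'read-only', '--ask-for-approval', 'on-request', 'Analyze this codebase']
```

Rejecting unknown values up front keeps a typo from silently falling through to the CLI's own error handling.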

Working Directory

| Option | Description |
| --- | --- |
| `-C, --cd <DIR>` | Working directory for the agent |
| `--add-dir <DIR>` | Additional writable directories |

Input Options

| Option | Description |
| --- | --- |
| `-i, --image <FILE>` | Image(s) to attach to the initial prompt |
| `--search` | Enable the web search tool |

Local Models

| Option | Description |
| --- | --- |
| `--oss` | Use a local open source model provider |
| `--local-provider <PROVIDER>` | Specify the local provider (lmstudio or ollama) |

Commands

| Command | Description |
| --- | --- |
| `codex exec` | Run Codex non-interactively |
| `codex review` | Run a code review non-interactively |
| `codex login` | Manage login |
| `codex logout` | Remove authentication credentials |
| `codex mcp` | Run as an MCP server and manage MCP servers |
| `codex mcp-server` | Run the Codex MCP server (stdio) |
| `codex completion` | Generate shell completion scripts |
| `codex sandbox` | Run commands within the sandbox |
| `codex apply` | Apply the latest diff with `git apply` |
| `codex resume` | Resume a previous interactive session |
| `codex cloud` | Browse tasks from Codex Cloud |
| `codex features` | Inspect feature flags |

Examples

Basic Query

# Simple question
praisonai "What files are in this directory?" --external-agent codex

# Code analysis
praisonai "Analyze the code quality" --external-agent codex

Non-Interactive Execution

# Run non-interactively
codex exec "Fix all linting errors"

# With specific working directory
codex exec -C /path/to/project "Update dependencies"
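`codex exec` is also easy to drive from a script. A rough sketch using only the standard library (it assumes the `codex` binary is on `PATH`; `build_exec_cmd` and `run_codex_exec` are illustrative names, not part of any package):

```python
import shutil
import subprocess

def build_exec_cmd(prompt, cwd=None):
    """Build the argv for a non-interactive `codex exec` run."""
    cmd = ["codex", "exec"]
    if cwd is not None:
        cmd += ["-C", cwd]  # working directory for the agent
    cmd.append(prompt)
    return cmd

def run_codex_exec(prompt, cwd=None):
    """Run `codex exec` and return stdout, or None if codex is not installed."""
    if shutil.which("codex") is None:
        return None
    result = subprocess.run(build_exec_cmd(prompt, cwd),
                            capture_output=True, text=True, check=True)
    return result.stdout
```

Keeping command construction separate from execution makes the argv easy to log or unit-test without actually invoking the agent.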

Full Auto Mode

# Automatic execution with workspace write access
codex exec --full-auto "Refactor the authentication module"

# Equivalent to:
codex exec -a on-failure --sandbox workspace-write "Refactor the authentication module"

Sandbox Modes

# Read-only mode (safest)
codex exec -s read-only "Analyze this codebase"

# Workspace write mode
codex exec -s workspace-write "Fix all bugs"

# Full access (dangerous)
codex exec -s danger-full-access "Install dependencies and run tests"

Code Review

# Run code review
codex review

# Review specific changes
codex review --diff HEAD~5

With Images

# Attach screenshot for context
codex exec -i screenshot.png "Fix the UI bug shown in this image"
# Enable web search
codex exec --search "Find the latest best practices for React hooks"

Local Models

# Use local LM Studio
codex --oss --local-provider lmstudio "Explain this code"

# Use local Ollama
codex --oss --local-provider ollama "Refactor this function"

Python Integration

from praisonai.integrations import CodexCLIIntegration

# Create integration
codex = CodexCLIIntegration(
    workspace="/path/to/project",
    full_auto=True,
    sandbox="workspace-write"
)

# Execute a task
result = await codex.execute("Fix the authentication bug")
print(result)

# Execute with JSON output
codex_json = CodexCLIIntegration(json_output=True)
result = await codex_json.execute("List all functions")
print(result)

# Stream output
async for event in codex.stream("Add error handling"):
    print(event)

Configuration File

Codex reads its configuration from ~/.codex/config.toml:

# Default model
model = "gpt-5.2-codex"

# Sandbox permissions
sandbox_permissions = ["disk-full-read-access"]

# Shell environment policy
[shell_environment_policy]
inherit = "all"

Individual values can be overridden on the CLI:

codex -c model="o3" "Complex reasoning task"
codex -c 'sandbox_permissions=["disk-full-read-access"]' "Read all files"
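Override values can also be assembled programmatically before shelling out. A sketch (the `config_override_args` helper is hypothetical; it relies on the fact that JSON arrays of strings are also valid TOML arrays):

```python
import json

def config_override_args(overrides):
    """Turn {key: value} into repeated `-c key=value` CLI arguments.

    Bare strings are passed through as-is; lists are JSON-encoded,
    which for arrays of strings matches TOML array syntax.
    """
    args = []
    for key, value in overrides.items():
        literal = value if isinstance(value, str) else json.dumps(value)
        args += ["-c", f"{key}={literal}"]
    return args

print(config_override_args({"model": "o3"}))
# ['-c', 'model=o3']
```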

Environment Variables

| Variable | Description |
| --- | --- |
| `OPENAI_API_KEY` | OpenAI API key |
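A missing key only surfaces once Codex tries to call the API, so it can be worth checking up front in automation. A small sketch (`require_api_key` is a hypothetical helper, not part of Codex or PraisonAI):

```python
import os

def require_api_key(env=None):
    """Return OPENAI_API_KEY, failing fast with a clear message if unset."""
    env = os.environ if env is None else env  # injectable for testing
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; run `codex login` or export the key first"
        )
    return key
```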

Output Format

Codex prints a session header (version, working directory, model, provider, approval policy, sandbox mode) followed by the conversation and token usage:
OpenAI Codex v0.75.0 (research preview)
--------
workdir: /path/to/project
model: gpt-5.2-codex
provider: openai
approval: never
sandbox: read-only
--------
user
Your prompt here

thinking
**Analysis of the request**

codex
Response from Codex

tokens used
209
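When this transcript is captured programmatically, the trailing token count can be pulled out with a few lines of parsing. A sketch based on the sample above (`tokens_used` is a hypothetical helper; the exact layout may differ between Codex versions):

```python
def tokens_used(transcript):
    """Pull the token count from a Codex transcript like the sample above.

    Assumes the count sits on the line after a `tokens used` header;
    returns None when no such section is found.
    """
    lines = transcript.splitlines()
    for i, line in enumerate(lines[:-1]):
        if line.strip() == "tokens used":
            try:
                return int(lines[i + 1].strip())
            except ValueError:
                return None
    return None

sample = "codex\nResponse from Codex\n\ntokens used\n209\n"
print(tokens_used(sample))  # 209
```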