The Examples Runner provides a CLI command to discover, execute, and report on Python examples in your repository. It’s designed for CI/CD pipelines and local development, and because it only runs when explicitly invoked, it adds no overhead to normal usage.

Quick Start

# Run all examples in the default examples/ directory
praisonai examples run

# List discovered examples without running
praisonai examples list

# Run with custom path and timeout
praisonai examples run --path ./my-examples --timeout 120

Commands

Run Examples

Execute examples sequentially with live output streaming and report generation.
praisonai examples run [OPTIONS]
Options:
| Option | Short | Description | Default |
|---|---|---|---|
| `--path` | `-p` | Path to examples directory | `./examples` |
| `--include` | `-i` | Include patterns (glob), repeatable | All `.py` files |
| `--exclude` | `-e` | Exclude patterns (glob), repeatable | None |
| `--timeout` | `-t` | Per-example timeout in seconds | `60` |
| `--fail-fast` | `-x` | Stop on first failure | `false` |
| `--no-stream` | | Don’t stream output to terminal | `false` |
| `--report-dir` | `-r` | Directory for reports | `./reports/examples/<timestamp>` |
| `--no-json` | | Skip JSON report generation | `false` |
| `--no-md` | | Skip Markdown report generation | `false` |
| `--require-env` | | Required env vars (skip all if missing) | None |
| `--quiet` | `-q` | Minimal output | `false` |
Examples:
# Run only context examples
praisonai examples run --include "context/*"

# Exclude WoW examples and set 2-minute timeout
praisonai examples run --exclude "*_wow.py" --timeout 120

# Run with fail-fast for CI
praisonai examples run --fail-fast --no-stream

# Require API key (skip all if missing)
praisonai examples run --require-env OPENAI_API_KEY

List Examples

Discover and list examples without executing them.
praisonai examples list [OPTIONS]
Options:
| Option | Short | Description |
|---|---|---|
| `--path` | `-p` | Path to examples directory |
| `--include` | `-i` | Include patterns (glob) |
| `--exclude` | `-e` | Exclude patterns (glob) |
| `--metadata` | `-m` | Show parsed metadata for each example |
Example:
# List with metadata
praisonai examples list --metadata

# Output:
#   1. context/01_basic.py [timeout=120, env=OPENAI_API_KEY]
#   2. context/02_advanced.py [skip]
#   3. db/sqlite_example.py

Show Example Info

Display detailed metadata for a specific example.
praisonai examples info <example-path>
Example:
praisonai examples info ./examples/context/01_basic.py

# Output:
# Example: 01_basic.py
# Path: ./examples/context/01_basic.py
#
# Metadata:
#   Skip: False
#   Timeout: 120
#   Required Env: OPENAI_API_KEY
#   XFail: no
#   Interactive: False

Example Metadata Directives

Control example behavior using comment directives in the first 30 lines of your example files:
#!/usr/bin/env python3
# praisonai: skip=true
# praisonai: timeout=120
# praisonai: require_env=OPENAI_API_KEY,ANTHROPIC_API_KEY
# praisonai: xfail=known_flaky_network

"""Your example code here."""

Available Directives

| Directive | Description | Example |
|---|---|---|
| `skip=true` | Skip this example | `# praisonai: skip=true` |
| `timeout=N` | Override timeout (seconds) | `# praisonai: timeout=300` |
| `require_env=KEY1,KEY2` | Required environment variables | `# praisonai: require_env=OPENAI_API_KEY` |
| `xfail=reason` | Expected failure (won’t count as failure) | `# praisonai: xfail=known_issue` |
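
The directive format is simple enough to parse with a short scan of the file header. Below is a minimal sketch of such a parser (an illustrative helper, not the library’s internal implementation) that collects `# praisonai: key=value` lines from the first 30 lines of a file:

import re
from pathlib import Path

# Matches directive lines like "# praisonai: timeout=120".
# (Hypothetical helper for illustration, not the runner's actual parser.)
DIRECTIVE_RE = re.compile(r"^#\s*praisonai:\s*(\w+)\s*=\s*(.+)$")

def parse_directives(path: Path, max_lines: int = 30) -> dict:
    """Collect praisonai comment directives from the top of an example file."""
    directives = {}
    with path.open(encoding="utf-8") as f:
        for lineno, line in enumerate(f):
            if lineno >= max_lines:
                break
            match = DIRECTIVE_RE.match(line.strip())
            if match:
                directives[match.group(1)] = match.group(2).strip()
    return directives

# parse_directives(Path("examples/context/01_basic.py"))
# -> e.g. {'timeout': '120', 'require_env': 'OPENAI_API_KEY'}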

Auto-Detection

The runner automatically detects and skips:
  • Interactive examples: Files containing input() calls
  • Private files: Files starting with _ or __
  • Cache directories: __pycache__, .pytest_cache, etc.
  • Virtual environments: venv, .venv, env, .env
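
These rules can be approximated with simple path and content checks. A rough sketch of the idea (assumed heuristics mirroring the list above, not the runner’s actual code):

from pathlib import Path

# Directory names the runner is documented to skip.
SKIP_DIRS = {"__pycache__", ".pytest_cache", "venv", ".venv", "env", ".env"}

def should_skip(path: Path, source: str) -> bool:
    """Approximate the documented auto-detection rules."""
    if any(part in SKIP_DIRS for part in path.parts):
        return True  # cache directories and virtual environments
    if path.name.startswith("_"):
        return True  # private files (covers both _ and __ prefixes)
    if "input(" in source:
        return True  # interactive examples that would block a CI run
    return False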

Reports

Report Directory Structure

reports/examples/20260109_110000/
├── report.json      # Machine-readable JSON report
├── report.md        # Human-readable Markdown summary
└── logs/
    ├── example1.stdout.log
    ├── example1.stderr.log
    ├── example2.stdout.log
    └── example2.stderr.log
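
Because each run writes into a fresh timestamped directory, the latest run can be located by sorting directory names. For example (assuming the default `reports/examples` layout shown above):

from pathlib import Path

report_root = Path("reports/examples")
# Timestamp-named directories (YYYYMMDD_HHMMSS) sort chronologically as strings.
latest = max(p for p in report_root.iterdir() if p.is_dir())
print("Latest run:", latest.name)
for log in sorted((latest / "logs").glob("*.stderr.log")):
    print("stderr captured:", log.name)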

JSON Report Schema

{
  "metadata": {
    "timestamp": "2026-01-09T11:00:00Z",
    "platform": "Darwin-24.0.0-arm64",
    "python_version": "3.12.0",
    "praisonai_version": "3.0.2",
    "git_commit": "abc123",
    "cli_args": ["--timeout=60"],
    "totals": {
      "passed": 10,
      "failed": 2,
      "skipped": 5,
      "timeout": 1,
      "xfail": 0
    }
  },
  "examples": [
    {
      "path": "context/01_basic.py",
      "slug": "context__01_basic",
      "status": "passed",
      "exit_code": 0,
      "duration_seconds": 2.5,
      "start_time": "2026-01-09T11:00:00Z",
      "end_time": "2026-01-09T11:00:02Z",
      "stdout_path": "logs/context__01_basic.stdout.log",
      "stderr_path": "logs/context__01_basic.stderr.log"
    }
  ]
}
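
The JSON report is convenient for CI gating or custom dashboards. A small consumer sketch that reads a `report.json` and summarizes failures:

import json
from pathlib import Path

report = json.loads(Path("report.json").read_text())
totals = report["metadata"]["totals"]
print(f"passed={totals['passed']} failed={totals['failed']} skipped={totals['skipped']}")

# Point reviewers at the captured stderr for each failing or timed-out example.
for example in report["examples"]:
    if example["status"] in ("failed", "timeout"):
        print(f"FAIL {example['path']} ({example['duration_seconds']}s) -> {example['stderr_path']}")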

Markdown Report

The Markdown report includes:
  • Run metadata (timestamp, platform, versions)
  • Summary table with pass/fail/skip counts
  • Results table with status and duration
  • Detailed failure section with error summaries

Exit Codes

| Code | Meaning |
|---|---|
| `0` | All examples passed (or only skipped) |
| `1` | One or more examples failed or timed out |
| `2` | Discovery or configuration error |
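
These codes make the runner straightforward to wrap in scripts. For instance, branching on the return code from Python:

import subprocess

result = subprocess.run(["praisonai", "examples", "run", "--no-stream"])
if result.returncode == 0:
    print("All examples passed (or were skipped).")
elif result.returncode == 1:
    print("One or more examples failed or timed out.")
else:  # exit code 2
    print("Discovery or configuration error.")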

CI/CD Integration

GitHub Actions

name: Examples
on: [push, pull_request]

jobs:
  examples:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      
      - name: Install dependencies
        run: pip install praisonai
      
      - name: Run examples
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          praisonai examples run \
            --timeout 120 \
            --fail-fast \
            --no-stream \
            --report-dir ./reports
      
      - name: Upload reports
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: example-reports
          path: reports/

GitLab CI

examples:
  stage: test
  script:
    - pip install praisonai
    - praisonai examples run --fail-fast --no-stream --report-dir ./reports
  artifacts:
    when: always
    paths:
      - reports/
    expire_in: 1 week

Best Practices

  1. Use metadata directives to document example requirements
  2. Set appropriate timeouts for long-running examples
  3. Use require_env to skip examples when API keys are missing
  4. Mark flaky tests with xfail to prevent CI failures
  5. Run with --fail-fast in CI for faster feedback
  6. Archive reports for debugging failed runs
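
Several of these practices can be combined in a single example file. A hypothetical header that documents its own requirements with the directives described earlier:

#!/usr/bin/env python3
# Long-running example: allow 5 minutes instead of the 60-second default.
# praisonai: timeout=300
# Skipped cleanly on machines without the API key.
# praisonai: require_env=OPENAI_API_KEY
# Known flaky upstream; recorded as xfail rather than failing CI.
# praisonai: xfail=upstream_rate_limits

"""Hypothetical long-running example that calls an external API."""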

Programmatic Usage

from pathlib import Path

from praisonai.cli.features.examples import (
    ExampleDiscovery,
    ExampleRunner,
    ExamplesExecutor,
    ReportGenerator,
)

# Discover examples
discovery = ExampleDiscovery(
    root=Path("./examples"),
    include_patterns=["context/*"],
    exclude_patterns=["*_wow.py"],
)
examples = discovery.discover()

# Run a single example
runner = ExampleRunner(timeout=60)
result = runner.run(examples[0])
print(f"{result.path}: {result.status}")

# Run all examples with reporting
executor = ExamplesExecutor(
    path=Path("./examples"),
    timeout=60,
    report_dir=Path("./reports"),
)
report = executor.run()
print(f"Passed: {report.totals['passed']}")