Multi-Provider Agent CLI

The CLI provides commands for running agents with different LLM providers, testing connectivity, and managing provider configurations.

Available Commands

Command                    Description
-------------------------  -------------------------------------
llm providers              List available AI SDK providers
llm test <provider>        Test connectivity to a provider
llm validate <provider>    Validate provider configuration
llm run "<prompt>"         Run a prompt with a model
llm models                 List common models by provider
llm config                 Show resolved configuration
llm trace                  Demo attribution headers
llm tools                  Show tool calling documentation
llm json                   Show structured output documentation

List Providers

List all supported AI SDK providers and their status:
praisonai-ts llm providers
Output:
AI SDK Providers
================
✅ AI SDK: installed

Available Providers:
  ✅ openai               🔑
  ⚠️ anthropic            Missing ANTHROPIC_API_KEY
  ⚠️ google               Missing GOOGLE_API_KEY
  ✅ groq                  🔑
  ...

JSON Output

praisonai-ts llm providers --json
{
  "success": true,
  "data": {
    "ai_sdk_available": true,
    "ai_sdk_version": "6.0.3",
    "providers": [
      {
        "id": "openai",
        "package": "@ai-sdk/openai",
        "envKey": "OPENAI_API_KEY",
        "hasApiKey": true,
        "status": "ready"
      }
    ],
    "total": 13
  }
}

Test Provider

Test connectivity to a specific provider:
praisonai-ts llm test openai
Output:
Testing openai/gpt-4o-mini...
✅ Connection successful! (1538ms)
  Provider: openai
  Model: gpt-4o-mini
  Response: OK

With Custom Model

praisonai-ts llm test anthropic --model claude-3-haiku-20240307

JSON Output

praisonai-ts llm test openai --json
{
  "success": true,
  "data": {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "status": "success",
    "duration_ms": 1538,
    "response": "OK"
  }
}

Validate Configuration

Validate a provider's setup, including API keys and installed packages:
praisonai-ts llm validate openai
Output:
Validation: openai
==================
  ✅ AI SDK Installed: AI SDK is installed
  ✅ Provider Supported: Provider 'openai' is supported
  ✅ Provider Package: @ai-sdk/openai is installed
  ✅ API Key: OPENAI_API_KEY is set

✅ All checks passed!

JSON Output

praisonai-ts llm validate anthropic --json
{
  "success": true,
  "data": {
    "provider": "anthropic",
    "valid": true,
    "checks": [
      { "name": "AI SDK Installed", "passed": true },
      { "name": "Provider Supported", "passed": true },
      { "name": "Provider Package", "passed": true },
      { "name": "API Key", "passed": true }
    ]
  }
}

Run Prompts

Execute prompts with any supported model:

Basic Usage

praisonai-ts llm run "What is 2+2?"

With Model Selection

praisonai-ts llm run "Explain quantum computing" --model openai/gpt-4o
praisonai-ts llm run "Write a haiku" --model anthropic/claude-3-5-sonnet-latest

Streaming

praisonai-ts llm run "Write a story about a robot" --stream

JSON Output

praisonai-ts llm run "Hello" --json
{
  "success": true,
  "data": {
    "model": "openai/gpt-4o-mini",
    "text": "Hello! How can I help you today?",
    "finishReason": "stop",
    "duration_ms": 856
  }
}

With Timeout

praisonai-ts llm run "Complex task" --timeout 120000

Verbose Mode

praisonai-ts llm run "Hello" --verbose
Shows additional details like token usage and timing.
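
For comparison, a non-streaming run corresponds to a generateText call, whose result carries the text, finish reason, and token usage reported in verbose mode. A rough sketch, assuming @ai-sdk/openai (illustrative only, not the CLI's internal code):

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function main() {
  const start = Date.now();

  const result = await generateText({
    model: openai('gpt-4o-mini'),
    prompt: 'Hello',
  });

  console.log(result.text);
  console.log('finishReason:', result.finishReason);
  console.log('usage:', result.usage);            // token counts
  console.log('duration_ms:', Date.now() - start);
}

main();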

List Models

Show common models for each provider:
praisonai-ts llm models
Output:
Common Models by Provider
=========================
Note: This is not exhaustive. Check provider docs for all models.

  openai:
    - gpt-4o
    - gpt-4o-mini
    - gpt-4-turbo
    - o1
    - o1-mini

  anthropic:
    - claude-3-5-sonnet-latest
    - claude-3-5-haiku-latest
    - claude-3-opus-latest

  google:
    - gemini-2.0-flash
    - gemini-1.5-pro
    - gemini-1.5-flash

Show Configuration

Display the resolved configuration with secrets redacted:
praisonai-ts llm config
Output:
AI SDK Configuration
====================
  Defaults:
    timeout: 60000ms
    maxRetries: 2
    maxOutputTokens: 4096
    redactLogs: true

  Provider Aliases:
    oai → openai
    claude → anthropic
    gemini → google

  Environment (redacted):
    OPENAI_API_KEY: ****GW4A
    ANTHROPIC_API_KEY: ****x0QAA
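
The redacted values keep only the last few characters of each key. A tiny illustrative sketch of that style of masking (hypothetical helper; the CLI's exact redaction rule may differ):

// Mask a secret, keeping only its last four characters (hypothetical helper).
function redact(secret: string): string {
  return secret.length <= 4 ? '****' : `****${secret.slice(-4)}`;
}

console.log(redact(process.env.OPENAI_API_KEY ?? '')); // e.g. "****GW4A"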

JSON Output

praisonai-ts llm config --json

Attribution Trace Demo

Demonstrate multi-agent attribution headers:
praisonai-ts llm trace
Output:
Attribution Trace Demo
======================
Running with model: openai/gpt-4o-mini

  Attribution Context:
    agentId: agent-mjspa0il
    runId: run-drkw6p18
    traceId: trace-zgu2awc4

  Headers injected:
    X-Agent-Id: agent-mjspa0il
    X-Run-Id: run-drkw6p18
    X-Trace-Id: trace-zgu2awc4

✅ Response: Trace OK (885ms)

Attribution headers are automatically injected into LLM requests.
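
One way to attach headers like these is through the provider factory's headers option, which the AI SDK provider packages accept. A hedged sketch of the idea (the CLI's actual injection mechanism may differ):

import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

// Example attribution context; the CLI generates fresh IDs per run.
const attribution = {
  agentId: 'agent-mjspa0il',
  runId: 'run-drkw6p18',
  traceId: 'trace-zgu2awc4',
};

// Extra headers passed here are sent with every request made through this provider.
const openai = createOpenAI({
  headers: {
    'X-Agent-Id': attribution.agentId,
    'X-Run-Id': attribution.runId,
    'X-Trace-Id': attribution.traceId,
  },
});

async function main() {
  const { text } = await generateText({
    model: openai('gpt-4o-mini'),
    prompt: 'Reply with exactly: Trace OK',
  });
  console.log(text);
}

main();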

JSON Output

praisonai-ts llm trace --json
{
  "success": true,
  "data": {
    "attribution": {
      "agentId": "agent-mjspa0il",
      "runId": "run-drkw6p18",
      "traceId": "trace-zgu2awc4"
    },
    "headers": {
      "X-Agent-Id": "agent-mjspa0il",
      "X-Run-Id": "run-drkw6p18",
      "X-Trace-Id": "trace-zgu2awc4"
    },
    "response": "Trace OK",
    "duration_ms": 885
  }
}

Tool Calling Documentation

Show how to use tool calling with the AI SDK:
praisonai-ts llm tools
Displays example code for defining and using tools with Zod schemas.
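
For orientation, a minimal sketch of the kind of tool definition that documentation covers, using the AI SDK's tool helper with a Zod schema. The schema property is inputSchema in recent AI SDK versions and parameters in older ones, and the tool itself is hypothetical, so treat this as illustrative:

import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

async function main() {
  const result = await generateText({
    model: openai('gpt-4o-mini'),
    prompt: 'What is the weather in Paris?',
    tools: {
      // Hypothetical tool; the model decides whether to call it.
      getWeather: tool({
        description: 'Get the current weather for a city',
        inputSchema: z.object({ city: z.string() }), // `parameters` in older AI SDK versions
        execute: async ({ city }) => ({ city, tempC: 21 }),
      }),
    },
  });

  console.log(result.text);
}

main();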

Structured Output Documentation

Show how to generate structured JSON output:
praisonai-ts llm json
Displays example code for using generateObject with Zod schemas.
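
For orientation, a minimal sketch of generateObject with a Zod schema, assuming @ai-sdk/openai and zod are installed (illustrative only):

import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

async function main() {
  // Ask the model for output that conforms to the Zod schema.
  const { object } = await generateObject({
    model: openai('gpt-4o-mini'),
    schema: z.object({
      title: z.string(),
      tags: z.array(z.string()),
    }),
    prompt: 'Summarize this CLI feature as a title plus a few tags.',
  });

  console.log(object); // typed as { title: string; tags: string[] }
}

main();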

Global Options

All commands support these options:
Option                     Description
-------------------------  --------------------------------
--json                     Output in JSON format
--verbose                  Show detailed output
--model <provider/model>   Specify model
--timeout <ms>             Request timeout in milliseconds

Environment Variables

Configure providers via environment variables:
# Required for each provider you want to use
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_API_KEY=AIza...
export GROQ_API_KEY=gsk_...
export MISTRAL_API_KEY=...
export COHERE_API_KEY=...

Exit Codes

Code  Meaning
----  --------------------
0     Success
1     General error
2     Invalid arguments
3     Authentication error
4     Provider not found
5     Network error
6     Timeout

Examples

Test All Configured Providers

for provider in openai anthropic google; do
  echo "Testing $provider..."
  praisonai-ts llm test $provider --json
done

Compare Models

PROMPT="Explain AI in one sentence"

echo "OpenAI:"
praisonai-ts llm run "$PROMPT" --model openai/gpt-4o-mini

echo "Anthropic:"
praisonai-ts llm run "$PROMPT" --model anthropic/claude-3-haiku-20240307

echo "Google:"
praisonai-ts llm run "$PROMPT" --model google/gemini-1.5-flash

Streaming with Different Providers

# OpenAI streaming
praisonai-ts llm run "Write a poem" --model openai/gpt-4o-mini --stream

# Anthropic streaming
praisonai-ts llm run "Write a poem" --model anthropic/claude-3-5-sonnet-latest --stream

Troubleshooting

Provider Not Found

praisonai-ts llm validate <provider>
If the provider package is missing, install it:
npm install @ai-sdk/<provider>

Authentication Error

Ensure your API key is set:
export OPENAI_API_KEY=sk-...
praisonai-ts llm test openai

Timeout Issues

Increase timeout for slow requests:
praisonai-ts llm run "Complex task" --timeout 120000

Next Steps