Structured Output CLI

Generate type-safe structured JSON output using the PraisonAI CLI.

Commands

Generate Structured Output

# Basic structured output with inline schema
praisonai-ts llm json "Extract person info from: John is 30 years old" \
  --schema '{"type":"object","properties":{"name":{"type":"string"},"age":{"type":"number"}}}'

# Using a schema file
praisonai-ts llm json "Analyze this text" --schema-file ./schemas/analysis.json

# With specific model
praisonai-ts llm json "Extract data" --model openai/gpt-4o -m ./schema.json

Schema File Format

Create a JSON schema file:
{
  "type": "object",
  "properties": {
    "title": { "type": "string" },
    "summary": { "type": "string" },
    "tags": {
      "type": "array",
      "items": { "type": "string" }
    },
    "sentiment": {
      "type": "string",
      "enum": ["positive", "negative", "neutral"]
    }
  },
  "required": ["title", "summary"]
}
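
A hypothetical end-to-end use of this schema file, assuming it is saved as ./schemas/analysis.json (the path used above) and that --json prints the bare JSON object, as in the output examples below:

# Extract just the tags from the structured result
praisonai-ts llm json "Summarize this article about renewable energy" \
  --schema-file ./schemas/analysis.json --json | jq '.tags'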

Options

Option          Short   Description
--model         -m      Model to use (e.g., openai/gpt-4o-mini)
--schema                Inline JSON schema
--schema-file   -f      Path to JSON schema file
--temperature   -t      Temperature (0-1, default: 0.1)
--max-tokens            Maximum output tokens
--timeout               Request timeout in ms
--json                  Output raw JSON
--verbose       -v      Verbose output
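
For reference, a hypothetical invocation combining several of these options (the prompt and schema path are illustrative):

praisonai-ts llm json "Summarize: quarterly revenue grew 12% year over year" \
  --model openai/gpt-4o-mini \
  --schema-file ./schemas/analysis.json \
  --temperature 0.1 \
  --max-tokens 500 \
  --json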

Examples

Data Extraction

$ praisonai-ts llm json "Extract: Apple Inc CEO Tim Cook announced new products" \
  --schema '{"type":"object","properties":{"company":{"type":"string"},"person":{"type":"string"},"role":{"type":"string"},"action":{"type":"string"}}}'
Output:
{
  "company": "Apple Inc",
  "person": "Tim Cook",
  "role": "CEO",
  "action": "announced new products"
}

Classification

$ praisonai-ts llm json "Classify: You won a million dollars! Click here!" \
  --schema '{"type":"object","properties":{"category":{"type":"string","enum":["spam","ham"]},"confidence":{"type":"number"},"reason":{"type":"string"}}}'
Output:
{
  "category": "spam",
  "confidence": 0.95,
  "reason": "Contains typical spam indicators: prize claim, urgency, call to action"
}

Sentiment Analysis

$ praisonai-ts llm json "Analyze sentiment: I love this product! Best purchase ever." \
  --schema-file ./schemas/sentiment.json
Where sentiment.json contains:
{
  "type": "object",
  "properties": {
    "sentiment": { "type": "string", "enum": ["positive", "negative", "neutral"] },
    "score": { "type": "number", "minimum": -1, "maximum": 1 },
    "keywords": { "type": "array", "items": { "type": "string" } }
  }
}

Complex Nested Output

$ praisonai-ts llm json "Parse this order: 2 pizzas ($15 each), 1 salad ($8), delivery to 123 Main St" \
  --schema-file ./schemas/order.json --json
Where order.json contains:
{
  "type": "object",
  "properties": {
    "items": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "name": { "type": "string" },
          "quantity": { "type": "number" },
          "price": { "type": "number" }
        }
      }
    },
    "total": { "type": "number" },
    "delivery": {
      "type": "object",
      "properties": {
        "address": { "type": "string" }
      }
    }
  }
}
Output:
{
  "items": [
    { "name": "pizza", "quantity": 2, "price": 15 },
    { "name": "salad", "quantity": 1, "price": 8 }
  ],
  "total": 38,
  "delivery": {
    "address": "123 Main St"
  }
}

Using Different Providers

# OpenAI
praisonai-ts llm json "Extract data" --model openai/gpt-4o --schema '...'

# Anthropic
praisonai-ts llm json "Extract data" --model anthropic/claude-3-sonnet --schema '...'

# Google
praisonai-ts llm json "Extract data" --model google/gemini-pro --schema '...'

Piping and Scripting

Pipe Input

# Pipe text content
cat document.txt | praisonai-ts llm json --schema-file ./schema.json

# Process multiple files
for f in *.txt; do
  praisonai-ts llm json "$(cat $f)" --schema-file ./schema.json --json >> results.jsonl
done

Use in Scripts

#!/bin/bash
# extract-entities.sh

SCHEMA='{"type":"object","properties":{"entities":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string"},"type":{"type":"string"}}}}}}'

result=$(praisonai-ts llm json "$1" --schema "$SCHEMA" --json)
echo "$result" | jq '.entities'

JSON Lines Output

# Process batch and output JSONL
while read -r line; do
  praisonai-ts llm json "$line" --schema-file ./schema.json --json
done < inputs.txt > outputs.jsonl

Error Handling

# Check exit code
praisonai-ts llm json "Extract data" --schema '...'
if [ $? -ne 0 ]; then
  echo "Extraction failed"
fi

# Capture errors
result=$(praisonai-ts llm json "Extract data" --schema '...' --json 2>&1)
if echo "$result" | jq -e '.success == false' > /dev/null; then
  echo "Error: $(echo "$result" | jq -r '.error.message')"
fi

Environment Variables

# Required: API key for provider
export OPENAI_API_KEY=sk-...

# Optional: Default model
export PRAISONAI_MODEL=openai/gpt-4o-mini

# Optional: Default temperature for structured output
export PRAISONAI_STRUCTURED_TEMP=0.1

Exit Codes

Code   Description
0      Success
1      General error
2      Invalid arguments
3      Schema validation error
4      API error
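
A hypothetical script fragment branching on these exit codes (the prompt and schema path are illustrative):

praisonai-ts llm json "Extract data" --schema-file ./schema.json --json
case $? in
  0) echo "Success" ;;
  2) echo "Invalid arguments: check flags and schema path" >&2 ;;
  3) echo "Schema validation error: check the schema file" >&2 ;;
  4) echo "API error: check API key and network" >&2 ;;
  *) echo "General error" >&2 ;;
esac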

Best Practices

  1. Use schema files for complex schemas
  2. Set low temperature (0.1) for consistent output
  3. Validate output with jq or similar tools (see the sketch after this list)
  4. Handle errors in scripts
  5. Use --json flag for machine-readable output
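
As an illustration of points 3-5, a minimal sketch that validates the structured output with jq before using it; the prompt, schema path, and required keys are assumptions based on the analysis schema shown earlier:

#!/bin/bash
# Hypothetical validation step: confirm the output parses as JSON
# and contains the fields marked as required in the schema.
result=$(praisonai-ts llm json "Summarize this release note" \
  --schema-file ./schemas/analysis.json --json)

# jq -e exits non-zero if the expression is false or the input is not valid JSON
if echo "$result" | jq -e 'has("title") and has("summary")' > /dev/null; then
  echo "$result" | jq .
else
  echo "Output failed validation" >&2
  exit 1
fi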