# Documentation Index

Fetch the complete documentation index at: https://docs.praison.ai/llms.txt. Use this file to discover all available pages before exploring further.
# LLM Guardrail CLI Commands

The `praisonai-ts` CLI provides the `guardrail` command for validating content against natural-language criteria.
## Basic Usage

```bash
# Check content against guardrails
praisonai-ts guardrail check "Your content here"

# Check with custom criteria
praisonai-ts guardrail check "Content to validate" --criteria "Must be professional"

# Get JSON output
praisonai-ts guardrail check "Hello world" --json
```
Example output:

```json
{
  "success": true,
  "data": {
    "status": "passed",
    "score": 0.95,
    "message": "Content passes all criteria",
    "reasoning": "The content is appropriate and safe"
  }
}
```
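The `--json` output is convenient for scripting. Below is a minimal TypeScript sketch of consuming it, assuming the response keeps exactly the envelope shape shown above (the `GuardrailResponse` interface and `didPass` helper are our own names, not part of the library):

```typescript
// Mirrors the example JSON envelope above; field names are taken from it.
interface GuardrailResponse {
  success: boolean;
  data: {
    status: 'passed' | 'failed' | 'warning';
    score: number;
    message: string;
    reasoning: string;
  };
}

// Returns true only when the CLI call succeeded AND the content passed.
function didPass(raw: string): boolean {
  const res = JSON.parse(raw) as GuardrailResponse;
  return res.success && res.data.status === 'passed';
}

// Using the example output shown above:
const sample = JSON.stringify({
  success: true,
  data: {
    status: 'passed',
    score: 0.95,
    message: 'Content passes all criteria',
    reasoning: 'The content is appropriate and safe',
  },
});
console.log(didPass(sample)); // true
```

In a shell pipeline, the raw string would typically come from capturing the stdout of `praisonai-ts guardrail check ... --json`.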
## Status Values

| Status | Description |
|---|---|
| `passed` | Content meets all criteria |
| `failed` | Content does not meet the criteria |
| `warning` | Content partially meets the criteria |
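Scripts usually need to branch on these three values. One way to do that in TypeScript, mapping each status to a shell-style exit code (the code mapping is our own convention for illustration, not something the CLI defines):

```typescript
type GuardrailStatus = 'passed' | 'failed' | 'warning';

// Our own convention: 0 = pass, 1 = fail, 2 = partial/warning.
// The exhaustive switch makes TypeScript flag any future status value.
function exitCodeFor(status: GuardrailStatus): number {
  switch (status) {
    case 'passed':
      return 0;
    case 'failed':
      return 1;
    case 'warning':
      return 2;
  }
}

console.log(exitCodeFor('warning')); // 2
```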
## SDK Usage

For programmatic guardrail usage:

```typescript
import { LLMGuardrail } from 'praisonai';

const guard = new LLMGuardrail({
  name: 'safety',
  criteria: 'Content must be safe and appropriate',
  threshold: 0.8
});

const result = await guard.check('Hello world');
console.log(result.status);    // 'passed', 'failed', or 'warning'
console.log(result.score);     // a number between 0 and 1
console.log(result.reasoning);
```
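To build intuition for how `threshold` might interact with `score`, here is a purely illustrative sketch. The mapping below is an assumption for demonstration only (scores at or above the threshold pass, scores slightly below it warn, the rest fail); the actual logic is internal to `LLMGuardrail` and may differ:

```typescript
// ASSUMPTION: this mapping is invented for illustration and is NOT
// the library's real implementation. It shows one plausible way a
// 0-1 score and a threshold could produce the three status values.
function statusFromScore(
  score: number,
  threshold: number,
): 'passed' | 'failed' | 'warning' {
  if (score >= threshold) return 'passed';
  if (score >= threshold - 0.2) return 'warning'; // near-miss band (assumed)
  return 'failed';
}

console.log(statusFromScore(0.95, 0.8)); // 'passed'
console.log(statusFromScore(0.7, 0.8));  // 'warning'
console.log(statusFromScore(0.3, 0.8));  // 'failed'
```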
For more details, see the LLM Guardrail SDK documentation.