# Documentation Index

Fetch the complete documentation index at: https://docs.praison.ai/llms.txt

Use this file to discover all available pages before exploring further.
# Guardrails CLI Commands

The `praisonai-ts` CLI provides the `guardrail` command for content validation.
## Check Content

```bash
# Check content against guardrails
praisonai-ts guardrail check "Your content here"

# Check with custom criteria
praisonai-ts guardrail check "Content" --criteria "Must be professional"

# Get JSON output
praisonai-ts guardrail check "Hello world" --json
```
Example output:

```json
{
  "success": true,
  "data": {
    "status": "passed",
    "score": 0.95,
    "message": "Content passes validation"
  }
}
```
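When consuming the `--json` output programmatically, you can parse it into a typed structure. The sketch below is a minimal, hedged example: the `GuardrailCheck` interface is a hypothetical type modeled only on the sample payload shown above, not an official SDK type.

```typescript
// Hypothetical: parse the JSON emitted by `praisonai-ts guardrail check --json`.
// The sample payload mirrors the example output shown above; the interface
// is an assumption based on that sample, not an official type.
interface GuardrailCheck {
  success: boolean;
  data: { status: string; score: number; message: string };
}

const raw = `{
  "success": true,
  "data": {
    "status": "passed",
    "score": 0.95,
    "message": "Content passes validation"
  }
}`;

const check: GuardrailCheck = JSON.parse(raw);
if (check.success && check.data.status === 'passed') {
  console.log(`Guardrail passed with score ${check.data.score}`);
}
```

In a real pipeline the `raw` string would come from the CLI's stdout (for example via Node's `child_process`), but the parsing logic stays the same.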
## SDK Usage

For programmatic guardrail usage:

```typescript
import { LLMGuardrail, builtinGuardrails } from 'praisonai';

// LLM-based guardrail
const guard = new LLMGuardrail({
  name: 'safety',
  criteria: 'Content must be safe and appropriate'
});
const result = await guard.check('Hello world');

// Built-in guardrails
const maxLength = builtinGuardrails.maxLength(100);
const lengthResult = await maxLength.run('Hello');
```
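To illustrate the factory-then-checker pattern that `builtinGuardrails.maxLength(100)` follows, here is a self-contained, hypothetical re-implementation. It is a sketch only: the real `builtinGuardrails.maxLength` may behave differently, and the result fields (`status`, `message`) are assumptions modeled on the CLI's JSON output rather than the SDK's actual return type.

```typescript
// Hypothetical re-implementation of a maxLength-style guardrail, showing the
// factory -> checker contract. Field names (status, message) are assumptions
// modeled on the CLI's JSON output, not the real SDK result type.
type GuardrailResult = { status: 'passed' | 'failed'; message: string };

function maxLength(limit: number) {
  return {
    // run() mirrors the SDK call shape `maxLength.run('Hello')` shown above
    run: async (content: string): Promise<GuardrailResult> =>
      content.length <= limit
        ? { status: 'passed', message: `Length ${content.length} is within ${limit}` }
        : { status: 'failed', message: `Length ${content.length} exceeds ${limit}` },
  };
}

// Usage: same call shape as the SDK example
const checker = maxLength(100);
checker.run('Hello').then((r) => console.log(r.status)); // prints "passed"
```

The factory captures the configuration (the length limit) in a closure, so each checker carries its own settings and can be reused across many inputs.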
For more details, see the Guardrails SDK documentation.