# Guardrails

Guardrails provide content validation and safety checks for agent outputs.

## Quick Start
```typescript
import { createLLMGuardrail } from 'praisonai';

const guardrail = createLLMGuardrail({
  name: 'safety-check',
  criteria: 'Content should be safe, appropriate, and helpful',
  llm: 'openai/gpt-4o-mini'
});

const result = await guardrail.check('Hello world');
console.log(result.status); // 'passed', 'failed', or 'warning'
console.log(result.score);
```
## Configuration

```typescript
interface LLMGuardrailConfig {
  name: string;
  criteria: string;
  llm?: string;
  threshold?: number; // Default: 0.7
  verbose?: boolean;
}
```
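As a sketch of how the documented defaults might be filled in before use — `applyDefaults` is a hypothetical helper, not part of the praisonai API, and the default model shown is an assumption taken from the Quick Start example:

```typescript
interface LLMGuardrailConfig {
  name: string;
  criteria: string;
  llm?: string;
  threshold?: number;
  verbose?: boolean;
}

// Hypothetical helper: apply the documented threshold default (0.7);
// the llm default is an assumption, not documented behavior.
function applyDefaults(config: LLMGuardrailConfig) {
  return {
    ...config,
    llm: config.llm ?? 'openai/gpt-4o-mini',
    threshold: config.threshold ?? 0.7,
    verbose: config.verbose ?? false,
  };
}

const cfg = applyDefaults({ name: 'safety-check', criteria: 'Content should be safe' });
console.log(cfg.threshold); // 0.7
```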
## Response Format

```typescript
interface LLMGuardrailResult {
  status: 'passed' | 'failed' | 'warning';
  score: number; // 0 to 1
  message?: string;
  reasoning?: string;
}
```
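One plausible way a caller might consume this result — blocking only on `'failed'` and surfacing `'warning'` without rejecting the content. The `isBlocked` helper is illustrative, not part of the library:

```typescript
interface LLMGuardrailResult {
  status: 'passed' | 'failed' | 'warning';
  score: number;
  message?: string;
  reasoning?: string;
}

// Hypothetical policy: only a 'failed' status blocks the content;
// a 'warning' is logged but the content is allowed through.
function isBlocked(result: LLMGuardrailResult): boolean {
  if (result.status === 'warning') {
    console.warn(result.message ?? 'guardrail warning');
  }
  return result.status === 'failed';
}
```

Whether warnings should block is an application-level choice; stricter callers can treat anything other than `'passed'` as a rejection.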
## Example

```typescript
import { createLLMGuardrail } from 'praisonai';

// Create guardrail
const guardrail = createLLMGuardrail({
  name: 'professional-check',
  criteria: 'Content must be professional and appropriate for business use',
  threshold: 0.8
});

// Check content
const result = await guardrail.check('This is a professional message.');

if (result.status === 'passed') {
  console.log('Content approved');
} else {
  console.log('Content rejected:', result.reasoning);
}
```
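To vet several pieces of content at once, a small wrapper over the documented `check` method can collect the ones that pass. The `Checker` interface and `filterApproved` helper below are illustrative, not part of praisonai; `Checker` just structurally matches the `check` surface shown above:

```typescript
interface GuardrailResultLike {
  status: 'passed' | 'failed' | 'warning';
  score: number;
  reasoning?: string;
}

// Structural type matching the documented guardrail.check() surface.
interface Checker {
  check(content: string): Promise<GuardrailResultLike>;
}

// Hypothetical helper: check all inputs concurrently and keep
// only the ones the guardrail passes.
async function filterApproved(guardrail: Checker, contents: string[]): Promise<string[]> {
  const results = await Promise.all(contents.map((c) => guardrail.check(c)));
  return contents.filter((_, i) => results[i].status === 'passed');
}
```

Because `check` involves an LLM call, `Promise.all` keeps the batch latency close to a single check rather than the sum of all of them.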
## CLI Usage

```bash
praisonai-ts guardrail check "Content to validate"
praisonai-ts guardrail check "Text" --criteria "Must be professional" --json
```
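In scripts, the `--json` output can be parsed with `jq`. This sketch assumes the JSON shape matches the `LLMGuardrailResult` interface above; the captured output below is a fabricated-looking placeholder for illustration only:

```shell
# Hypothetical captured --json output (shape assumed to match LLMGuardrailResult):
result='{"status":"passed","score":0.92}'

# Extract the status field for use in a shell conditional.
echo "$result" | jq -r '.status'
```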

