Documentation Index
Fetch the complete documentation index at: https://docs.praison.ai/llms.txt
Use this file to discover all available pages before exploring further.
Report Generator
Generate business reports from data
CLI Quickstart (Beginner)
```bash
# Install
pip install praisonai praisonai-tools

# Run the tool
praisonai recipe run ai-report-generator \
  --input '{"input": "your-input-here"}' \
  --json

# With output directory
praisonai recipe run ai-report-generator \
  --input-file config.json \
  --out-dir ./output
```
Output:
```json
{
  "ok": true,
  "run_id": "run_abc123",
  "recipe": "ai-report-generator",
  "output": {},
  "artifacts": [],
  "warnings": [],
  "error": null
}
```
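With `--json`, the CLI prints this machine-readable envelope on stdout, so scripts can branch on the `ok` field. A minimal parsing sketch (shown against the sample payload above rather than a live CLI call):

```python
import json

# Sample envelope as printed by `praisonai recipe run ... --json` (see above).
raw = '''{
  "ok": true,
  "run_id": "run_abc123",
  "recipe": "ai-report-generator",
  "output": {},
  "artifacts": [],
  "warnings": [],
  "error": null
}'''

result = json.loads(raw)
if result["ok"]:
    print(f"Run {result['run_id']} succeeded with {len(result['artifacts'])} artifact(s)")
else:
    print(f"Run failed: {result['error']}")
```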
Use in Your App (Embedded SDK)
```python
from praisonai.recipe import run, run_stream

# Basic usage
result = run(
    "ai-report-generator",
    input={
        "input": "your-input-here"
    }
)

print(f"Success: {result.ok}")
print(f"Output: {result.output}")

# With streaming (if supported)
for event in run_stream("ai-report-generator", input={}):
    print(event)
```
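A small helper for acting on a completed run, assuming the result object exposes the fields of the output schema (ok, artifacts, warnings, error); a stand-in object is used here so the sketch runs without the SDK installed.

```python
from types import SimpleNamespace

# Summarize a completed run. Field names (ok, artifacts, warnings, error) are
# assumed to mirror the output schema; adjust if the SDK result differs.
def summarize(result) -> str:
    if not result.ok:
        return f"failed: {result.error}"
    lines = [f"succeeded with {len(result.artifacts)} artifact(s)"]
    lines += [f"warning: {w}" for w in result.warnings]
    return "; ".join(lines)

# Stand-in for a real run(...) result so the sketch is self-contained.
demo = SimpleNamespace(ok=True, output={}, artifacts=[], warnings=[], error=None)
print(summarize(demo))  # succeeded with 0 artifact(s)
```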
Use as a Server (HTTP Sidecar)
Start Server
```bash
praisonai serve recipe --port 8080
```
Invoke via curl
```bash
curl -X POST http://localhost:8080/v1/recipes/run \
  -H "Content-Type: application/json" \
  -d '{
    "recipe": "ai-report-generator",
    "input": {"input": "your-input-here"}
  }'
```
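The same call from Python with no extra dependencies, using the standard library's urllib; the endpoint path mirrors the curl example above (a sketch only, not executed against a live sidecar here):

```python
import json
import urllib.request

# Sidecar endpoint, matching the curl example (server started with
# `praisonai serve recipe --port 8080`).
ENDPOINT = "http://localhost:8080/v1/recipes/run"

def run_recipe(recipe: str, payload: dict) -> dict:
    """POST a recipe run to the local sidecar and return the parsed JSON envelope."""
    body = json.dumps({"recipe": recipe, "input": payload}).encode()
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Requires a running sidecar:
# result = run_recipe("ai-report-generator", {"input": "your-input-here"})
```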
With Authentication (Remote Runner)
```bash
export PRAISONAI_ENDPOINTS_URL=https://api.praisonai.com
export PRAISONAI_ENDPOINTS_API_KEY=your-key

curl -X POST $PRAISONAI_ENDPOINTS_URL/v1/recipes/run \
  -H "X-API-Key: $PRAISONAI_ENDPOINTS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"recipe": "ai-report-generator", "input": {}}'
```
Input Schema

```json
{
  "type": "object",
  "properties": {
    "input": {"type": "string", "description": "Primary input"},
    "options": {"type": "object", "description": "Additional options"}
  },
  "required": ["input"]
}
```
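A lightweight way to pre-check a payload against this schema before invoking the recipe. This hand-rolled validator covers only the required keys and basic types above; a full JSON Schema library would be more thorough.

```python
# Input schema as a Python dict (copied from above).
schema = {
    "type": "object",
    "properties": {
        "input": {"type": "string", "description": "Primary input"},
        "options": {"type": "object", "description": "Additional options"},
    },
    "required": ["input"],
}

def validate(payload: dict) -> list:
    """Minimal check against the schema above (not a full JSON Schema validator)."""
    errors = [f"missing required field: {k}" for k in schema["required"] if k not in payload]
    types = {"string": str, "object": dict}
    for key, spec in schema["properties"].items():
        if key in payload and not isinstance(payload[key], types[spec["type"]]):
            errors.append(f"{key}: expected {spec['type']}")
    return errors

print(validate({"input": "quarterly sales data"}))  # []
print(validate({"options": {}}))  # ['missing required field: input']
```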
Output Schema
```json
{
  "ok": true,
  "run_id": "string",
  "recipe": "ai-report-generator",
  "output": {},
  "artifacts": [{"path": "string", "type": "string"}],
  "warnings": [],
  "error": null
}
```
Integration Models
Model 1: Embedded SDK
```python
from praisonai.recipe import run, run_stream

result = run("ai-report-generator", input={"input": "data"})
```
Model 2: CLI Invocation
```bash
praisonai recipe run ai-report-generator --input-file config.json --json
```
Model 3: Local HTTP Sidecar
```bash
praisonai serve recipe --port 8080
curl -X POST http://localhost:8080/v1/recipes/run -d '...'
```
Model 4: Remote Managed Runner
```python
import os

os.environ["PRAISONAI_ENDPOINTS_URL"] = "https://api.praisonai.com"
os.environ["PRAISONAI_ENDPOINTS_API_KEY"] = "your-key"
```
Model 5: Event-Driven
```python
queue.publish("recipes.run", {
    "recipe": "ai-report-generator",
    "input": {},
    "callback_url": "https://your-app.com/webhook"
})
```
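On the receiving end, the callback_url gets a POST when the run finishes. A minimal handler sketch, assuming the callback body follows the output schema (the exact callback payload contract is not specified here):

```python
import json

# Minimal webhook handler for the completion callback. The payload shape is
# assumed to mirror the output schema; verify against your runner's contract.
def handle_callback(body: bytes) -> str:
    result = json.loads(body)
    if result.get("ok"):
        return f"run {result.get('run_id')} done, {len(result.get('artifacts', []))} artifact(s)"
    return f"run failed: {result.get('error')}"

print(handle_callback(b'{"ok": true, "run_id": "run_abc123", "artifacts": []}'))
```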
Model 6: Plugin Mode
```python
from praisonaiagents import Agent

agent = Agent(name="Worker", tools=["ai-report-generator"])
```
Operational Notes
Dependencies
```bash
pip install praisonai-tools
# Requires: pandas, LLM
```
Performance Tips
- Use --stream for progress on long operations
- Enable caching for repeated inputs
- Use batch mode for multiple items
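Client-side caching for repeated inputs can be as simple as memoizing on the serialized input. A generic sketch (whether and how the runner caches internally is configuration-dependent, so the actual run call is stubbed out here):

```python
import functools
import json

# Memoize on the canonical serialized input. The real call would invoke
# praisonai.recipe.run(recipe, input=json.loads(input_json)); it is stubbed
# here so the sketch runs stand-alone.
@functools.lru_cache(maxsize=128)
def cached_run(recipe: str, input_json: str):
    return {"recipe": recipe, "input": json.loads(input_json)}

key = json.dumps({"input": "data"}, sort_keys=True)  # canonical form for cache hits
r1 = cached_run("ai-report-generator", key)
r2 = cached_run("ai-report-generator", key)  # served from cache
```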
Security Notes
- File paths are sandboxed
- Secrets are redacted in logs
- PII handling follows GDPR guidelines
Troubleshooting
| Issue | Solution |
|---|---|
| Missing dependency | Install required extras |
| Timeout | Increase --timeout-sec |
| Invalid input | Check schema requirements |
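For transient timeouts, raising --timeout-sec is the documented fix; wrapping the invocation in a small retry helper can also smooth over one-off failures (a generic sketch, not a built-in praisonai feature):

```python
import time

# Generic exponential-backoff retry around a flaky invocation.
def with_retries(fn, attempts=3, base_delay=1.0):
    for i in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))

# Demo stand-in: fails once, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise TimeoutError
    return "ok"

result = with_retries(flaky, base_delay=0.01)
print(result)  # ok
```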