Job workflows use `type: job` for deterministic automation with optional AI-powered steps.
## Quick Start
deploy.yaml
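A minimal sketch of what such a file could look like. Only `type: job` and the `run:` key are documented on this page; the `steps:` list, `name:` fields, and commands are illustrative assumptions:

```yaml
type: job               # marks this file as a job workflow

steps:                  # assumed layout: an ordered list of named steps
  - name: build
    run: make build     # deterministic shell step
  - name: deploy
    run: ./scripts/deploy.sh
```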
> [!TIP]
> A YAML file is a job workflow when it has `type: job` at the root. Without it, PraisonAI treats it as an agent workflow.
## How It Works

A job workflow is a sequence of steps. Each step is either deterministic (no LLM involved) or agent-centric (LLM-powered).

## Step Types

### Deterministic Steps (No LLM)

| Key | Type | What it does |
|---|---|---|
| `run:` | Shell | Execute a shell command via subprocess |
| `python:` | Script | Run a Python script file |
| `script:` | Inline | Execute inline Python code |
| `action:` | Action | Run a named action (YAML-defined, file-based, or built-in) |
### Agent-Centric Steps (LLM-Powered)

| Key | Type | What it does |
|---|---|---|
| `agent:` | AI Agent | Execute a single AI agent with `Agent.chat()` |
| `judge:` | Quality Gate | Evaluate content with `Judge.evaluate()` against a threshold |
| `approve:` | Approval Gate | Request human or auto approval before continuing |
## Deterministic Steps

### Shell Steps

Shell steps run the command through `subprocess.run()` with `shell=True`. A non-zero exit code marks the step as failed.
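For example (the `name:`/list layout of a step is an assumption; `run:` is the documented key):

```yaml
- name: run-tests
  run: pytest -q    # executed via subprocess.run(..., shell=True); non-zero exit fails the step
```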
### Python Script Steps

Script steps are launched with `sys.executable`, so they run under the current Python interpreter.
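A hedged sketch (step layout and script path are illustrative; `python:` is the documented key):

```yaml
- name: collect-metrics
  python: scripts/collect_metrics.py   # launched with sys.executable
```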
### Inline Python Steps

Inline code is executed with `exec()` in an isolated namespace with these variables in scope:

| Variable | Type | Description |
|---|---|---|
| `flags` | dict | Parsed CLI flags |
| `vars` | dict | Resolved workflow variables |
| `env` | dict | Copy of `os.environ` |
| `cwd` | str | Workflow’s working directory |
| `result` | — | Set this to produce step output |
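The isolation described above can be sketched in plain Python. This is an illustrative model, not PraisonAI’s actual implementation; `run_inline_step` is a hypothetical helper:

```python
import os

def run_inline_step(code, flags, vars_, cwd):
    """Execute inline step code in an isolated namespace (illustrative sketch)."""
    namespace = {
        "flags": flags,           # parsed CLI flags
        "vars": vars_,            # resolved workflow variables ("vars_" avoids the builtin)
        "env": dict(os.environ),  # a copy, so the step cannot mutate the real environment
        "cwd": cwd,               # workflow's working directory
        "result": None,           # the step assigns this to produce output
    }
    exec(code, namespace)
    return namespace["result"]

output = run_inline_step(
    'result = f"deploying to {flags[\'target\']} from {cwd}"',
    flags={"target": "staging"},
    vars_={},
    cwd="/workflows/deploy",
)
print(output)  # → deploying to staging from /workflows/deploy
```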
### Action Steps

Action steps run a named action. Built-in actions include:

- `bump-version` — bumps `version = "X.Y.Z"` in a file.
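For instance (step layout assumed; `action:` and `bump-version` come from this page):

```yaml
- name: bump
  action: bump-version    # built-in action that bumps version = "X.Y.Z" in a file
```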
## Agent-Centric Steps

### Agent Step (`agent:`)

Execute an AI agent inline using `praisonaiagents.Agent`:
| Field | Default | Description |
|---|---|---|
| `role` | `"Assistant"` | Agent’s role description |
| `instructions` | `""` | System instructions for the agent |
| `prompt` | Uses `instructions` | The prompt sent to `Agent.chat()` |
| `model` | `"gpt-4o-mini"` | LLM model to use |
| `tools` | `[]` | Tool names to resolve and attach |
| `name` | Step name | Agent display name |
- `output_file:` — automatically saves agent output to a file
- `prompt` supports variable resolution: `${{ env.X }}`, `{{ flags.X }}`
- Tools are resolved from the `praisonaiagents.tools` registry
- Simple string shorthand: `agent: "Write a greeting"` (uses defaults)
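Putting the fields together, an agent step might look like this (step layout and specific values are illustrative; the field names are from the table above):

```yaml
- name: release-notes
  agent:
    role: "Release Manager"
    instructions: "Summarize changes for end users."
    prompt: "Write release notes for the latest changes."
    model: gpt-4o-mini
    output_file: RELEASE_NOTES.md   # agent output saved here automatically
```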
### Judge Step (`judge:`)
Quality gate that evaluates content and passes/fails based on a threshold:
| Field | Default | Description |
|---|---|---|
| `input_file` | — | File to evaluate (path relative to workflow) |
| `input` | — | Inline text to evaluate (alternative to `input_file`) |
| `criteria` | `"Output is high quality"` | Evaluation criteria |
| `threshold` | `7.0` | Minimum score (0–10) to pass |
| `on_fail` | `"stop"` | What to do on failure |
| `model` | `"gpt-4o-mini"` | LLM model for evaluation |
`on_fail` options:

| Value | Behavior |
|---|---|
| `stop` | Halt the workflow (default) |
| `warn` | Log a warning, continue to the next step |
| `retry` | Retry the previous step (future) |
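A sketch combining these fields (step layout and file name assumed; the field names are from the tables above):

```yaml
- name: review-notes
  judge:
    input_file: RELEASE_NOTES.md
    criteria: "Notes are clear, accurate, and user-facing"
    threshold: 8.0    # minimum 0-10 score to pass
    on_fail: warn     # stop | warn | retry
```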
### Approve Step (`approve:`)
Human or automatic approval gate:
| Field | Default | Description |
|---|---|---|
| `description` | Step name | What is being approved |
| `risk_level` | `"medium"` | Risk level: `low`, `medium`, `high` |
| `auto_approve` | `false` | Skip human approval (`true` for CI/CD) |
With `auto_approve: false`, the workflow pauses and prompts in the console. Use flag expressions for dynamic control:
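For example (step layout assumed; whether `auto_approve` accepts a flag-expression string is an assumption based on the variable-resolution syntax on this page):

```yaml
- name: production-gate
  approve:
    description: "Deploy to production"
    risk_level: high
    auto_approve: "{{ flags.yes }}"   # assumed: resolves to true when --yes is passed
```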
## YAML-Defined Actions

Define reusable actions inline, including agent-powered ones. An action body supports the same step keys: `run:` (shell), `script:` (inline Python), `python:` (script file), and `agent:` (AI agent).
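A hedged sketch, assuming reusable actions live under a top-level `actions:` map keyed by name (that key and layout are not confirmed by this page):

```yaml
actions:
  lint:
    run: ruff check .       # shell-based action
  summarize-lint:
    agent:                  # agent-powered action
      instructions: "Summarize the lint findings."
```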
## Variables

### Workflow Variables

### Environment Variables
### Variable Resolution

| Syntax | Resolves to | Example |
|---|---|---|
| `${{ env.VAR }}` | Environment variable | `${{ env.PYPI_TOKEN }}` |
| `{{ flags.name }}` | CLI flag value | `{{ flags.major }}` → `True` |

> [!NOTE]
> Flag names with hyphens are converted to underscores: `--no-bump` → `flags.no_bump`.
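For example, both syntaxes can appear inside a step (step layout, commands, and the `version` flag are illustrative; the variable syntaxes are from the table above):

```yaml
- name: publish
  run: twine upload --password ${{ env.PYPI_TOKEN }} dist/*   # environment variable
- name: tag
  run: git tag v{{ flags.version }}                           # CLI flag value
```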
## Flags
## Conditional Steps

Steps can declare an `if:` expression, evaluated as Python with `flags` (dot access) and `env` (`os.environ`) in scope.
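For example (step layout assumed; `flags.major` is an illustrative flag):

```yaml
- name: bump-major
  action: bump-version
  if: flags.major and env.get("CI") is None   # Python expression over flags and env
```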
## Dry Run

## Execution Output

## Error Handling
## Full Example
release.yaml
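A hedged end-to-end sketch combining the step types from this page (the `steps:`/`name:` layout, flag names, and commands are illustrative; the step keys and fields are documented above):

```yaml
type: job

steps:
  - name: bump
    action: bump-version              # built-in: bumps version = "X.Y.Z"
    if: flags.bump                    # conditional on --bump

  - name: test
    run: pytest -q                    # deterministic shell step

  - name: notes
    agent:
      instructions: "Write user-facing release notes."
      output_file: RELEASE_NOTES.md

  - name: review
    judge:
      input_file: RELEASE_NOTES.md
      criteria: "Notes are clear and accurate"
      threshold: 8.0
      on_fail: warn

  - name: gate
    approve:
      description: "Publish to PyPI"
      risk_level: high
      auto_approve: "{{ flags.yes }}"

  - name: publish
    run: twine upload --password ${{ env.PYPI_TOKEN }} dist/*
```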
## Comparison: Job vs Agent Workflows

| | Job Workflows | Agent Workflows |
|---|---|---|
| Discriminator | `type: job` | No `type` field |
| Execution | Deterministic + optional AI | LLM-driven |
| Deterministic steps | ✅ `run`, `python`, `script`, `action` | ❌ |
| Agent steps | ✅ `agent`, `judge`, `approve` | ✅ Agent tasks |
| API key required | Only for agent steps | Always |
| Dry-run | ✅ `--dry-run` | ❌ |
| Conditionals | ✅ `if:` expressions | ❌ (LLM decides) |
| Use case | CI/CD, build, deploy, mixed pipelines | Research, content, analysis |

