Workflows

Create and execute reusable multi-step workflows with advanced features like context passing between steps, per-step agent configuration, and async execution. Define complex task sequences in markdown files and execute them programmatically.

Key Features

| Feature | Description |
|---|---|
| Context Passing | Automatically pass outputs from previous steps to subsequent steps |
| Per-Step Agents | Configure different agents with unique roles for each step |
| Per-Step Tools | Assign specific tools to each step |
| Async Execution | Execute workflows asynchronously with `aexecute()` |
| Variable Substitution | Use `{{previous_output}}` and `{{step_name_output}}` |
| Planning Mode | Enable planning mode at the workflow level |

Quick Start

from praisonaiagents import Agent, Workflow

# Create agents with specific roles
researcher = Agent(
    name="Researcher",
    role="Research Analyst",
    goal="Research and provide information about topics",
    instructions="You are a research analyst. Provide concise, factual information."
)

writer = Agent(
    name="Writer", 
    role="Content Writer",
    goal="Write engaging content based on research",
    instructions="You are a content writer. Write clear, engaging content."
)

# Create workflow with agents as steps
workflow = Workflow(steps=[researcher, writer])

# Run workflow - agents process sequentially
result = workflow.start("What are the key benefits of AI agents?")
print(result["output"])

Workflow Class API

The Workflow class provides a powerful programmatic API for creating workflows with functions, agents, and pattern helpers.

Basic Usage

from praisonaiagents import Workflow, WorkflowContext, StepResult
from praisonaiagents import route, parallel, loop, repeat

# Define step functions
def step1(ctx: WorkflowContext) -> StepResult:
    return StepResult(output="result")

def step2(ctx: WorkflowContext) -> StepResult:
    return StepResult(output=f"step2 saw: {ctx.previous_result}")

# Create workflow with callbacks using hooks= consolidated param
from praisonaiagents import WorkflowHooksConfig

workflow = Workflow(
    steps=[step1, step2],
    hooks=WorkflowHooksConfig(
        on_workflow_start=lambda w, i: print(f"Starting: {i}"),
        on_workflow_complete=lambda w, r: print(f"Done: {r['status']}"),
        on_step_start=lambda name, ctx: print(f"Step: {name}"),
        on_step_complete=lambda name, r: print(f"{name}: {r.output}"),
        on_step_error=lambda name, e: print(f"Error in {name}: {e}")
    )
)

# Run
result = workflow.start("input")

Workflow with Agents

Use Agent objects directly as workflow steps:
from praisonaiagents import Agent, Workflow

researcher = Agent(name="Researcher", role="Research expert", tools=[tavily_search])
writer = Agent(name="Writer", role="Content writer")
editor = Agent(name="Editor", role="Editor")

workflow = Workflow(steps=[researcher, writer, editor])
result = workflow.start("Research and write about AI")

Planning & Reasoning

from praisonaiagents import WorkflowPlanningConfig

workflow = Workflow(
    steps=[researcher, writer, editor],
    planning=WorkflowPlanningConfig(
        enabled=True,        # Create execution plan before running
        llm="gpt-4o",        # LLM for planning
        reasoning=True       # Enable chain-of-thought reasoning
    )
)
result = workflow.start("Research and write about AI trends")

Tools per Step

from praisonaiagents import Workflow, Task

workflow = Workflow(steps=[
    Task(
        name="research",
        action="Research {{topic}}",
        tools=[tavily_search, web_scraper]
    ),
    Task(
        name="write",
        action="Write article based on: {{previous_output}}",
        tools=[file_writer]
    )
])

Guardrails & Validation

from praisonaiagents import Workflow, Task

def validate_output(result):
    """Returns (is_valid, feedback_message)"""
    if "error" in result.output.lower():
        return (False, "Please fix the error and try again")
    return (True, None)

workflow = Workflow(steps=[
    Task(
        name="generator",
        handler=generate_content,
        guardrails=validate_output,  # Validation function
        max_retries=3               # Auto-retry on failure
    )
])
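Conceptually, a guardrail gates a step's output and feeds its message back into a retry. The following pure-Python sketch illustrates that loop; it is not the library's actual implementation, and the `run_with_guardrail` helper is hypothetical:

```python
def run_with_guardrail(handler, guardrail, max_retries=3, prompt="generate"):
    """Call handler, validate with guardrail, retry with feedback on failure."""
    feedback = None
    for _attempt in range(max_retries + 1):
        # On retry, the guardrail's feedback is appended so the step can self-correct
        output = handler(prompt if feedback is None else f"{prompt}\n{feedback}")
        is_valid, feedback = guardrail(output)
        if is_valid:
            return output
    raise RuntimeError(f"Guardrail failed after {max_retries} retries: {feedback}")

# Toy handler: fails on the first call, succeeds once it receives feedback
calls = {"n": 0}
def handler(prompt):
    calls["n"] += 1
    return "error in output" if calls["n"] == 1 else "clean output"

def guardrail(output):
    if "error" in output.lower():
        return (False, "Please fix the error and try again")
    return (True, None)

print(run_with_guardrail(handler, guardrail))  # clean output
```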

Output Modes

Control how workflow execution is displayed. Workflows use the same output presets as Agent for consistency.
from praisonaiagents import Workflow, Agent

# Shows tool calls and final output inline
workflow = Workflow(
    steps=[Agent(instructions="Research AI trends")],
    output="status"  # Shows: ▸ tool → result ✓
)
result = workflow.start("Research AI")

Available Output Presets:

| Preset | Description |
|---|---|
| silent | No output (default, fastest) |
| status | Tool calls + final output inline |
| trace | Timestamped execution trace |
| verbose | Full Rich panels |
| debug | Trace + metrics |
| json | JSONL output for piping |

Output to File

from praisonaiagents import Workflow, Task

workflow = Workflow(steps=[
    Task(
        name="generator",
        action="Generate report",
        output_file="output/{{name}}_report.txt"
    )
])

Memory Integration

from praisonaiagents import WorkflowMemoryConfig

workflow = Workflow(
    steps=[researcher, writer],
    memory=WorkflowMemoryConfig(
        backend="chroma",
        persist=True,
        collection="my_workflow"
    )
)

# First run
result1 = workflow.start("Research AI")

# Second run - remembers first run
result2 = workflow.start("Continue the research")

Async Execution

import asyncio
from praisonaiagents import Workflow

workflow = Workflow(steps=[step1, step2])

async def main():
    result = await workflow.astart("input")
    print(result)

asyncio.run(main())

Status Tracking

workflow = Workflow(steps=[step1, step2, step3])
result = workflow.start("input")

# Check workflow status
print(workflow.status)  # "not_started" | "running" | "completed"

# Check individual step statuses
print(workflow.step_statuses)  # {"step1": "completed", "step2": "completed", "step3": "skipped"}

Pattern Helpers Reference

| Pattern | Description | Example |
|---|---|---|
| `route()` | Decision-based branching | `route({"yes": [step_a], "no": [step_b]})` |
| `parallel()` | Concurrent execution | `parallel([step1, step2, step3])` |
| `loop()` | Iterate over list/CSV | `loop(handler, over="items")` |
| `repeat()` | Evaluator-optimizer | `repeat(gen, until=condition, max_iterations=5)` |
| `when()` | Conditional branching | `when(condition="{{score}} > 80", then_steps=[...])` |
| `include()` | Workflow composition | `include(workflow=sub_workflow)` |
from praisonaiagents import Workflow
from praisonaiagents import route, parallel, loop, repeat, when, include

workflow = Workflow(steps=[
    classifier,
    route({"tech": [tech_agent], "creative": [creative_agent]}),
    parallel([worker1, worker2, worker3]),
    loop(processor, over="items"),
    repeat(generator, until=lambda r: "done" in r, max_iterations=5),
    when(condition="{{score}} > 80", then_steps=[approve], else_steps=[reject]),
    include(workflow=sub_workflow)  # Or include(recipe="recipe-name")
])
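The routing semantics above can be sketched in plain Python: the previous step's output selects a branch, and the chosen branch's steps run in order. This is illustrative only; `run_route` is a hypothetical helper, not the library's `route()` implementation:

```python
def run_route(routes, previous_output, input_text):
    """Pick a branch by the previous step's output; fall back to 'default'."""
    steps = routes.get(previous_output.strip().lower(), routes.get("default", []))
    result = input_text
    for step in steps:  # run the chosen branch sequentially
        result = step(result)
    return result

# Stand-ins for tech_agent / creative_agent
def tech(x): return f"tech:{x}"
def creative(x): return f"creative:{x}"

routes = {"tech": [tech], "creative": [creative]}
print(run_route(routes, "tech", "query"))  # tech:query
```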

Robustness Features

Enable execution tracing and graceful degradation for production workflows:
from praisonaiagents import Workflow, Agent

# Enable execution trace for debugging
workflow = Workflow(
    steps=[agent1, agent2],
    history=True  # Track step execution
)

result = workflow.start("input")

# Get execution trace
history = workflow.get_history()
# Returns: [{"step": "agent1", "timestamp": "...", "success": True, "output": "..."}, ...]
> [!NOTE] `if_()` is deprecated in favor of `when()` for cleaner syntax.

YAML Workflow Configuration

Define workflows in YAML format with agents and all patterns:

```yaml
# .praison/workflows/research.yaml
name: Research Workflow
description: Research and write content with multiple patterns

agents:
  researcher:
    role: Research Expert
    goal: Find accurate information
    tools: [tavily_search, web_scraper]
  
  writer:
    role: Content Writer
    goal: Write engaging content
    tools: [file_writer]
  
  editor:
    role: Editor
    goal: Polish and refine content

  tech_expert:
    role: Technical Expert
    goal: Handle technical queries

  creative_writer:
    role: Creative Writer
    goal: Handle creative content

steps:
  # Step 1: Sequential agent execution
  - agent: researcher
    action: Research {{topic}}
    output_variable: research_data

  # Step 2: Routing based on content type
  - name: classifier
    action: Classify this content as technical or creative
    route:
      technical: [tech_handler]
      creative: [creative_handler]
      default: [general_handler]

  # Step 3: Parallel execution
  - name: parallel_research
    parallel:
      - agent: researcher
        action: Research market trends
      - agent: researcher
        action: Research competitors
      - agent: researcher
        action: Research customers

  # Step 4: Loop over items
  - agent: writer
    action: Write article about {{item}}
    loop_over: topics
    loop_var: item

  # Step 5: Repeat until condition (evaluator-optimizer)
  - agent: editor
    action: Review and improve the content
    repeat:
      until: "quality score > 8"
      max_iterations: 3

  # Step 6: Final output with file save
  - agent: writer
    action: Write final report based on {{previous_output}}
    output_file: output/{{topic}}_report.md

variables:
  topic: AI trends
  topics:
    - Machine Learning
    - Neural Networks
    - Natural Language Processing

# Workflow-level settings
planning: true
planning_llm: gpt-4o
verbose: true
memory_config:
  provider: chroma
  persist: true
```

Complete Python Example with All Patterns

from praisonaiagents import Agent, Workflow, Task, WorkflowContext, StepResult
from praisonaiagents import WorkflowPlanningConfig, WorkflowHooksConfig
from praisonaiagents import route, parallel, loop, repeat
from pydantic import BaseModel

# Define output schema
class Report(BaseModel):
    title: str
    content: str
    score: float

# Create agents
researcher = Agent(name="Researcher", role="Research expert", tools=[tavily_search])
writer = Agent(name="Writer", role="Content writer")
editor = Agent(name="Editor", role="Editor")
tech_agent = Agent(name="TechExpert", role="Technical expert")
creative_agent = Agent(name="Creative", role="Creative writer")

# Define handler functions
def classifier(ctx: WorkflowContext) -> StepResult:
    content = ctx.previous_result or ctx.input
    if "code" in content.lower() or "technical" in content.lower():
        return StepResult(output="technical")
    return StepResult(output="creative")

def validate_quality(result: StepResult) -> tuple[bool, str | None]:
    """Guardrail: Returns (is_valid, feedback)"""
    if len(result.output) < 100:
        return (False, "Content too short, please expand")
    return (True, None)

def process_item(ctx: WorkflowContext) -> StepResult:
    item = ctx.variables.get("item", "")
    return StepResult(output=f"Processed: {item}")

# Create comprehensive workflow
workflow = Workflow(
    steps=[
        # 1. Sequential agents
        researcher,
        writer,
        
        # 2. Routing based on classifier output
        classifier,
        route({
            "technical": [tech_agent],
            "creative": [creative_agent],
            "default": [writer]
        }),
        
        # 3. Parallel execution
        parallel([
            Task(name="market", action="Research market"),
            Task(name="competitors", action="Research competitors"),
            Task(name="customers", action="Research customers")
        ]),
        
        # 4. Loop over items
        loop(process_item, over="items"),
        
        # 5. Repeat until condition (evaluator-optimizer)
        repeat(
            Task(name="refine", handler=editor.chat),
            until=lambda ctx: "excellent" in ctx.previous_result.lower(),
            max_iterations=3
        ),
        
        # 6. Final step with guardrail and output options
        Task(
            name="final_report",
            handler=lambda ctx: StepResult(output=writer.chat(f"Write report: {ctx.previous_result}")),
            guardrails=validate_quality,
            execution={"max_retries": 2},
            output={"file": "output/report.md", "pydantic_model": Report}
        )
    ],
    
    # Workflow configuration
    variables={"items": ["AI", "ML", "NLP"]},
    planning=WorkflowPlanningConfig(enabled=True, llm="gpt-4o", reasoning=True),
    
    # Callbacks using hooks= consolidated param
    hooks=WorkflowHooksConfig(
        on_workflow_start=lambda w, i: print(f"🚀 Starting workflow: {i}"),
        on_step_start=lambda name, ctx: print(f"▶️ Step: {name}"),
        on_step_complete=lambda name, r: print(f"✅ {name}: {str(r.output)[:50]}..."),
        on_step_error=lambda name, e: print(f"❌ Error in {name}: {e}"),
        on_workflow_complete=lambda w, r: print(f"🎉 Workflow completed: {r['status']}")
    )
)

# Execute
result = workflow.start("Research and write about AI trends")

# Check status
print(f"Status: {workflow.status}")
print(f"Step statuses: {workflow.step_statuses}")
print(f"Final output: {result['output']}")

Workflow File Format

Workflows are defined in markdown files with YAML frontmatter:
---
name: Research Pipeline
description: Multi-agent research and writing workflow
default_llm: gpt-4o-mini
planning: true
planning_llm: gpt-4o
variables:
  topic: AI trends
---

## Step 1: Research
Research the topic thoroughly.

```agent
role: Researcher
goal: Find comprehensive information
instructions: Expert researcher with 10 years experience  # canonical: use 'instructions' instead of 'backstory'
```

```tools
tavily_search
web_browser
```

output_variable: research_data

```action
Search for information about {{topic}}
```

## Step 2: Analyze
Analyze the research findings.

```agent
role: Analyst
goal: Analyze data patterns
```

context_from: [Research]
retain_full_context: false

```action
Analyze: {{research_data}}
```

## Step 3: Write Report
Write the final report.

```agent
role: Writer
goal: Write engaging content
```

```action
Write a comprehensive report based on {{previous_output}}
```

Frontmatter Options

| Option | Type | Description |
|---|---|---|
| name | string | Workflow name |
| description | string | Workflow description |
| default_llm | string | Default LLM for all steps |
| planning | boolean | Enable planning mode |
| planning_llm | string | LLM for planning |
| variables | object | Default variables |

Step Options

| Option | Type | Description |
|---|---|---|
| context_from | list | Specific steps to include context from |
| retain_full_context | boolean | Include all previous outputs (default: true) |
| output_variable | string | Store output in a custom variable name |
| output_file | string | Save step output to a file |
| loop_over | string | Variable name to iterate over |
| loop_var | string | Variable name for the current item in the loop |

Pattern Blocks

Use code blocks to define workflow patterns:
Use code blocks to define workflow patterns:

| Block | Description |
|---|---|
| `` ```route `` | Define routing conditions |
| `` ```parallel `` | Define parallel execution steps |
| `` ```images `` | Define images for vision tasks |
| `` ```repeat `` | Define repeat/iteration settings |

Route Pattern Example

## Step 1: Classifier
Classify the request.

```action
Classify this request
```

```route
technical: [Tech Handler]
creative: [Creative Handler]
default: [General Handler]
```

Parallel Pattern Example

## Step 1: Research
Research in parallel.

```parallel
- Market Research
- Competitor Analysis
- Customer Survey
```

```action
Research the topic
```

Loop Pattern Example

## Step 1: Process Items
Process each item.

loop_over: items
loop_var: current_item

```action
Process {{current_item}}
```

Images Pattern Example

## Step 1: Analyze Image
Analyze the provided images.

```images
image1.jpg
image2.png
```

```action
Analyze these images
```

Output File Example

## Step 1: Generate Report
Generate and save a report.

output_file: output/report.txt

```action
Generate the report
```

Storage Structure

project/
├── .praison/
│   └── workflows/
│       ├── deploy.md        # Deployment workflow
│       ├── test.md          # Testing workflow
│       ├── review.md        # Code review workflow
│       └── release.md       # Release workflow

Variable Substitution

Use {{variable}} syntax for dynamic values:
from praisonaiagents import WorkflowManager

manager = WorkflowManager()

# Variables defined in workflow file are defaults
# Override at execution time
result = manager.execute(
    "deploy",
    default_agent=agent,
    variables={
        "environment": "staging",  # Override default
        "branch": "feature/new-ui",
        "version": "1.2.3"  # Additional variable
    }
)
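The merge-then-substitute behavior can be sketched in a few lines: execute-time variables override the file-level defaults, and `{{name}}` placeholders are replaced wherever they appear. A minimal sketch, not the library's implementation:

```python
import re

def substitute(template, defaults, overrides=None):
    """Merge defaults with overrides, then expand {{name}} placeholders."""
    variables = {**defaults, **(overrides or {})}
    # Unknown placeholders are left untouched rather than raising
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(variables.get(m.group(1), m.group(0))),
                  template)

defaults = {"environment": "production", "branch": "main"}
out = substitute("Deploy {{branch}} to {{environment}}",
                 defaults, {"environment": "staging"})
print(out)  # Deploy main to staging
```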

Context Passing

Workflow steps automatically pass context to subsequent steps. Use special variables to access previous outputs:
| Variable | Description |
|---|---|
| `{{previous_output}}` | Output from the immediately previous step |
| `{{step_name_output}}` | Output from a specific step (e.g., `{{research_output}}`) |

from praisonaiagents import Workflow, Task
from praisonaiagents import WorkflowManager, TaskContextConfig, TaskOutputConfig

workflow = Workflow(
    name="pipeline",
    steps=[
        Task(
            name="research",
            action="Research AI trends",
            output=TaskOutputConfig(variable="research_data")  # Store as custom variable
        ),
        Task(
            name="analyze",
            action="Analyze: {{research_data}}",  # Use custom variable
            context=TaskContextConfig(from_steps=["research"], retain_full=False)
        ),
        Task(
            name="write",
            action="Write based on {{previous_output}}"  # Use last step's output
        )
    ]
)

Context Control Options

| Option | Default | Description |
|---|---|---|
| context_from | All previous | List of step names to include context from |
| retain_full_context | True | Include all previous outputs vs only the specified ones |
| output_variable | {step_name}_output | Custom variable name for the step's output |

Conditional Steps

Add conditions to skip steps based on context:
## Step 3: Deploy to Staging
Only deploy to staging for non-production.

```condition
{{environment}} != production
```

```action
Deploy to staging environment.
```
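One plausible way such a condition block gets evaluated is: substitute the `{{variable}}` placeholders, then compare the two sides. The sketch below is an assumption about the semantics, not the library's actual evaluator:

```python
import re

def check_condition(condition, variables):
    """Resolve {{name}} placeholders, then evaluate a != / == comparison."""
    resolved = re.sub(r"\{\{(\w+)\}\}",
                      lambda m: str(variables.get(m.group(1), "")),
                      condition)
    if "!=" in resolved:
        left, right = (s.strip() for s in resolved.split("!=", 1))
        return left != right
    if "==" in resolved:
        left, right = (s.strip() for s in resolved.split("==", 1))
        return left == right
    return bool(resolved.strip())  # bare value: truthy if non-empty

print(check_condition("{{environment}} != production", {"environment": "staging"}))     # True
print(check_condition("{{environment}} != production", {"environment": "production"}))  # False
```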

Callbacks

Monitor workflow execution with callbacks:
from praisonaiagents import WorkflowManager

manager = WorkflowManager()

def on_step(step, index):
    print(f"Starting step {index + 1}: {step.name}")

def on_result(step, result):
    print(f"Completed {step.name}: {result[:100]}...")

result = manager.execute(
    "deploy",
    executor=lambda prompt: agent.chat(prompt),
    on_step=on_step,
    on_result=on_result
)

Error Handling

Configure how steps handle errors:
---
name: Resilient Workflow
---

## Step 1: Optional Cleanup
This step can fail without stopping the workflow.

```action
on_error: continue
max_retries: 2
```

Clean up temporary files.

## Step 2: Critical Build
This step must succeed.

```action
on_error: stop
```

Build the application.

| Error Mode | Behavior |
|---|---|
| stop | Stop the workflow on failure (default) |
| continue | Continue to the next step on failure |
| retry | Retry the step up to max_retries times |
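The three modes can be sketched as a plain execution loop. This is illustrative only; `run_steps` is a hypothetical helper, not the library's scheduler:

```python
def run_steps(steps):
    """steps: list of (func, on_error, max_retries) tuples, run in order."""
    results = []
    for func, on_error, max_retries in steps:
        attempts = max_retries + 1 if on_error == "retry" else 1
        last_error = None
        for _ in range(attempts):
            try:
                results.append(func())
                last_error = None
                break
            except Exception as e:
                last_error = e
        if last_error is not None:
            if on_error == "continue":
                results.append(None)  # record the failure, keep going
                continue
            raise last_error          # "stop": abort the workflow
    return results

flaky = {"n": 0}
def sometimes():
    flaky["n"] += 1
    if flaky["n"] < 2:
        raise RuntimeError("transient")
    return "ok"

def always_fails():
    raise RuntimeError("cleanup failed")

print(run_steps([
    (always_fails, "continue", 0),  # optional step: failure tolerated
    (sometimes, "retry", 2),        # retried until it succeeds
]))  # [None, 'ok']
```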

Async Execution

Use aexecute() for async workflow execution:
import asyncio
from praisonaiagents import Agent
from praisonaiagents import WorkflowManager

manager = WorkflowManager()

async def run_workflows():
    # Run multiple workflows concurrently
    results = await asyncio.gather(
        manager.aexecute("research", default_llm="gpt-4o-mini"),
        manager.aexecute("analysis", default_llm="gpt-4o-mini"),
    )
    return results

# With async executor
async def async_executor(prompt):
    # Your async logic here
    await asyncio.sleep(0.1)
    return f"Processed: {prompt}"

async def main():
    result = await manager.aexecute(
        "deploy",
        executor=async_executor,
        variables={"environment": "staging"}
    )
    print(result)

asyncio.run(main())

Execute Parameters

| Parameter | Type | Description |
|---|---|---|
| workflow_name | str | Name of the workflow to execute |
| executor | callable | Optional function to execute steps |
| default_agent | Agent | Default agent for steps without config |
| default_llm | str | Default LLM model |
| memory | Memory | Shared memory instance |
| planning | bool | Enable planning mode |
| stream | bool | Enable streaming output |
| verbose | int | Verbosity level (0-3) |
| variables | dict | Variables to substitute |
| on_step | callable | Callback before each step |
| on_result | callable | Callback after each step |

Programmatic API

from praisonaiagents import Workflow, Task
from praisonaiagents import WorkflowManager

manager = WorkflowManager(workspace_path="/path/to/project")

# Get a specific workflow
workflow = manager.get_workflow("deploy")
print(f"Workflow: {workflow.name}")
print(f"Steps: {[s.name for s in workflow.steps]}")

# Get statistics
stats = manager.get_stats()
print(f"Total workflows: {stats['total_workflows']}")
print(f"Total steps: {stats['total_steps']}")

# Reload workflows from disk
manager.reload()

Best Practices

- Configure different agents with specific roles for each step: a Researcher for gathering data, an Analyst for processing, and a Writer for output.
- Use context_from to limit which previous outputs are included; this reduces token usage and keeps agents focused on relevant information.
- Name your outputs with output_variable for clearer variable substitution in subsequent steps.
- Keep each step focused on one thing; break complex tasks into multiple steps for better error handling and visibility.
- Use aexecute() with asyncio.gather() to run multiple independent workflows concurrently.
- Use on_error: continue for optional steps and on_error: stop for critical steps that must succeed.

CLI Usage

Execute workflows directly from the command line:
# List workflows
praisonai workflow list

# Execute with tools and save
praisonai workflow run "Research Blog" --tools tavily --save

# With planning mode (AI creates sub-steps)
praisonai workflow run "Research Blog" --planning --verbose

# With variables
praisonai workflow run deploy --workflow-var environment=staging

CLI Options

| Flag | Description |
|---|---|
| `--workflow-var key=value` | Set a workflow variable |
| `--llm <model>` | LLM model |
| `--tools <tools>` | Tools (comma-separated) |
| `--planning` | Enable planning mode |
| `--memory` | Enable memory |
| `--save` | Save output to file |
| `--verbose` | Verbose output |

For full CLI documentation, see Workflow CLI.

Architecture Patterns

Understanding how Workflow relates to other PraisonAI patterns:

Class Hierarchy

| Class | Contains | Purpose |
|---|---|---|
| Agent | Self (LLM wrapper) | Core execution unit |
| Task | Reference to an agent | Work item for an agent |
| Agents | List of Agent | Multi-agent orchestrator |
| Workflow | List of steps | Declarative step orchestrator |

Pattern Comparison

| Concept | Agents Pattern | Workflow Pattern |
|---|---|---|
| Orchestrator | Agents | Workflow |
| Work Item | Task (contains agent ref) | Task (contains agent ref) |
| Executor | Agent | Agent (same!) |

Key Insight: Task plays the same role in Workflow as it does in Agents: it is a work item that references an agent, not an agent itself.
