# Workflows

Create and execute reusable multi-step workflows with advanced features like context passing between steps, per-step agent configuration, and async execution. Define complex task sequences in markdown files and execute them programmatically.

## Key Features

| Feature | Description |
|---------|-------------|
| Context Passing | Automatically pass outputs from previous steps to subsequent steps |
| Per-Step Agents | Configure different agents with unique roles for each step |
| Per-Step Tools | Assign specific tools to each step |
| Async Execution | Execute workflows asynchronously with `aexecute()` |
| Variable Substitution | Use `{{previous_output}}` and `{{step_name_output}}` |
| Planning Mode | Enable planning mode at the workflow level |

## Quick Start

```python
from praisonaiagents import Agent
from praisonaiagents.memory import WorkflowManager

agent = Agent(name="Assistant", llm="gpt-4o-mini")
manager = WorkflowManager()

# Execute with the default agent (no executor needed)
result = manager.execute(
    "deploy",
    default_agent=agent,
    variables={"environment": "production"}
)

if result["success"]:
    print("Workflow completed successfully!")
    for step_result in result["results"]:
        print(f"  {step_result['step']}: {step_result['status']}")
```

## Workflow File Format

Workflows are defined in markdown files with YAML frontmatter:
````markdown
---
name: Research Pipeline
description: Multi-agent research and writing workflow
default_llm: gpt-4o-mini
planning: true
planning_llm: gpt-4o
variables:
  topic: AI trends
---

## Step 1: Research
Research the topic thoroughly.

```agent
role: Researcher
goal: Find comprehensive information
backstory: Expert researcher with 10 years experience
output_variable: research_data
```

```tools
tavily_search
web_browser
```

Search for information about {{topic}}

## Step 2: Analyze
Analyze the research findings.

```agent
role: Analyst
goal: Analyze data patterns
context_from: [Research]
retain_full_context: false
```

Analyze: {{research_data}}

## Step 3: Write Report
Write the final report.

```agent
role: Writer
goal: Write engaging content
```

Write a comprehensive report based on {{previous_output}}
````

### Frontmatter Options

| Option | Type | Description |
|--------|------|-------------|
| `name` | string | Workflow name |
| `description` | string | Workflow description |
| `default_llm` | string | Default LLM for all steps |
| `planning` | boolean | Enable planning mode |
| `planning_llm` | string | LLM for planning |
| `variables` | object | Default variables |

### Step Options

| Option | Type | Description |
|--------|------|-------------|
| `context_from` | list | Specific steps to include context from |
| `retain_full_context` | boolean | Include all previous outputs (default: true) |
| `output_variable` | string | Store output in custom variable name |

## Storage Structure

```
project/
├── .praison/
│   └── workflows/
│       ├── deploy.md    # Deployment workflow
│       ├── test.md      # Testing workflow
│       ├── review.md    # Code review workflow
│       └── release.md   # Release workflow
```
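The layout above can be created by hand; workflows are plain markdown files, so adding one is just writing a file into `.praison/workflows/`:

```shell
mkdir -p .praison/workflows

# A minimal workflow file with frontmatter and a single step
cat > .praison/workflows/deploy.md <<'EOF'
---
name: Deploy
description: Example deployment workflow
---

## Step 1: Build
Build the application.
EOF
```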

## Variable Substitution

Use `{{variable}}` syntax for dynamic values:

```python
from praisonaiagents.memory import WorkflowManager

manager = WorkflowManager()

# Variables defined in workflow file are defaults
# Override at execution time
result = manager.execute(
    "deploy",
    default_agent=agent,
    variables={
        "environment": "staging",  # Override default
        "branch": "feature/new-ui",
        "version": "1.2.3"  # Additional variable
    }
)
```
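Substitution of `{{variable}}` placeholders can be pictured as a single template pass over the step text. A simplified illustration (`substitute_variables` is a hypothetical helper, not the library's implementation; note that unknown placeholders are left intact here):

```python
import re

def substitute_variables(text: str, variables: dict) -> str:
    # Replace each {{name}} with its value; leave unknown placeholders as-is.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        text,
    )

print(substitute_variables(
    "Deploy {{version}} to {{environment}}",
    {"environment": "staging", "version": "1.2.3"},
))  # Deploy 1.2.3 to staging
```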

## Context Passing

Workflow steps automatically pass context to subsequent steps. Use special variables to access previous outputs:

| Variable | Description |
|----------|-------------|
| `{{previous_output}}` | Output from the immediately previous step |
| `{{step_name_output}}` | Output from a specific step (e.g., `{{research_output}}`) |
```python
from praisonaiagents.memory import WorkflowManager, Workflow, WorkflowStep

workflow = Workflow(
    name="pipeline",
    steps=[
        WorkflowStep(
            name="research",
            action="Research AI trends",
            output_variable="research_data"  # Store as custom variable
        ),
        WorkflowStep(
            name="analyze",
            action="Analyze: {{research_data}}",  # Use custom variable
            context_from=["research"],  # Only include research context
            retain_full_context=False  # Don't include all previous outputs
        ),
        WorkflowStep(
            name="write",
            action="Write based on {{previous_output}}"  # Use last step's output
        )
    ]
)
```
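Conceptually, the context options act as a filter over the outputs accumulated so far. A simplified, illustrative sketch (`build_step_context` is a hypothetical helper, not the library's internals):

```python
def build_step_context(outputs, context_from=None, retain_full_context=True):
    """Select which previous step outputs a step sees.

    outputs: dict of step_name -> output, in execution order.
    """
    if context_from:
        names = [n for n in context_from if n in outputs]  # explicit selection
    elif retain_full_context:
        names = list(outputs)  # default: everything so far
    else:
        names = []
    return "\n\n".join(f"[{name}]\n{outputs[name]}" for name in names)

outputs = {"research": "AI trends data", "analyze": "Key patterns"}
print(build_step_context(outputs, context_from=["research"]))
# [research]
# AI trends data
```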

### Context Control Options

| Option | Default | Description |
|--------|---------|-------------|
| `context_from` | All previous | List of step names to include context from |
| `retain_full_context` | `True` | Include all previous outputs vs. only specified steps |
| `output_variable` | `{step_name}_output` | Custom variable name for the step output |

## Conditional Steps

Add conditions to skip steps based on context:

````markdown
## Step 3: Deploy to Staging
Only deploy to staging for non-production.

```condition
{{environment}} != production
```

Deploy to staging environment.
````
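Condition evaluation presumably amounts to substituting variables and then comparing strings. A simplified illustration (`evaluate_condition` is a hypothetical helper, not the library's API; it only handles `==` and `!=`):

```python
import re

def evaluate_condition(condition: str, variables: dict) -> bool:
    """Substitute {{name}} placeholders, then evaluate a simple comparison
    such as '{{environment}} != production'."""
    substituted = re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), "")),
        condition,
    )
    if "!=" in substituted:
        left, right = substituted.split("!=", 1)
        return left.strip() != right.strip()
    if "==" in substituted:
        left, right = substituted.split("==", 1)
        return left.strip() == right.strip()
    return bool(substituted.strip())  # non-empty string counts as true

print(evaluate_condition("{{environment}} != production", {"environment": "staging"}))  # True
```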

## Callbacks

Monitor workflow execution with callbacks:

```python
from praisonaiagents.memory import WorkflowManager

manager = WorkflowManager()

def on_step(step, index):
    print(f"Starting step {index + 1}: {step.name}")

def on_result(step, result):
    print(f"Completed {step.name}: {result[:100]}...")

result = manager.execute(
    "deploy",
    executor=lambda prompt: agent.chat(prompt),
    on_step=on_step,
    on_result=on_result
)
```

## Error Handling

Configure how steps handle errors:

````markdown
---
name: Resilient Workflow
---

## Step 1: Optional Cleanup
This step can fail without stopping the workflow.

```action
on_error: continue
max_retries: 2
```

Clean up temporary files.

## Step 2: Critical Build
This step must succeed.

```action
on_error: stop
```

Build the application.
````

| Error Mode | Behavior |
|------------|----------|
| `stop` | Stop workflow on failure (default) |
| `continue` | Continue to next step on failure |
| `retry` | Retry the step up to `max_retries` times |
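The three modes map to straightforward control flow around each step. An illustrative sketch (`run_with_retry` is a hypothetical helper, not the library's internals; `step_fn` is assumed to raise on failure):

```python
def run_with_retry(step_fn, on_error="stop", max_retries=0):
    """Run a step under one of the three error modes."""
    attempts = max_retries + 1 if on_error == "retry" else 1
    for attempt in range(attempts):
        try:
            return {"status": "completed", "output": step_fn()}
        except Exception as exc:
            if attempt < attempts - 1:
                continue  # 'retry': try again
            if on_error == "continue":
                return {"status": "failed", "error": str(exc)}
            raise  # 'stop' (or retries exhausted): propagate and halt the workflow
```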

## Async Execution

Use `aexecute()` for async workflow execution:

```python
import asyncio
from praisonaiagents import Agent
from praisonaiagents.memory import WorkflowManager

manager = WorkflowManager()

async def run_workflows():
    # Run multiple workflows concurrently
    results = await asyncio.gather(
        manager.aexecute("research", default_llm="gpt-4o-mini"),
        manager.aexecute("analysis", default_llm="gpt-4o-mini"),
    )
    return results

# With async executor
async def async_executor(prompt):
    # Your async logic here
    await asyncio.sleep(0.1)
    return f"Processed: {prompt}"

async def main():
    result = await manager.aexecute(
        "deploy",
        executor=async_executor,
        variables={"environment": "staging"}
    )
    print(result)

asyncio.run(main())
```

## Execute Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `workflow_name` | `str` | Name of the workflow to execute |
| `executor` | `callable` | Optional function to execute steps |
| `default_agent` | `Agent` | Default agent for steps without config |
| `default_llm` | `str` | Default LLM model |
| `memory` | `Memory` | Shared memory instance |
| `planning` | `bool` | Enable planning mode |
| `stream` | `bool` | Enable streaming output |
| `verbose` | `int` | Verbosity level (0-3) |
| `variables` | `dict` | Variables to substitute |
| `on_step` | `callable` | Callback before each step |
| `on_result` | `callable` | Callback after each step |

## Programmatic API

```python
from praisonaiagents.memory import WorkflowManager, Workflow, WorkflowStep

manager = WorkflowManager(workspace_path="/path/to/project")

# Get a specific workflow
workflow = manager.get_workflow("deploy")
print(f"Workflow: {workflow.name}")
print(f"Steps: {[s.name for s in workflow.steps]}")

# Get statistics
stats = manager.get_stats()
print(f"Total workflows: {stats['total_workflows']}")
print(f"Total steps: {stats['total_steps']}")

# Reload workflows from disk
manager.reload()
```

## Best Practices

- **Use per-step agents.** Configure agents with specific roles for each step: a Researcher for gathering data, an Analyst for processing, and a Writer for output.
- **Limit context deliberately.** Use `context_from` to restrict which previous outputs are included; this reduces token usage and keeps agents focused on relevant information.
- **Name your outputs.** Set `output_variable` for clearer variable substitution in subsequent steps.
- **Keep steps focused.** Each step should do one thing well; break complex tasks into multiple steps for better error handling and visibility.
- **Run independent workflows concurrently.** Use `aexecute()` with `asyncio.gather()`.
- **Match error handling to criticality.** Use `on_error: continue` for optional steps and `on_error: stop` for steps that must succeed.
