YAML Configuration Reference

Complete reference for all configuration options in agents.yaml and workflow.yaml files.
Both files are fully compatible! PraisonAI accepts both agents.yaml and workflow.yaml with the same features. The difference is primarily in naming conventions.

Quick Comparison

framework: praisonai
topic: "Research AI trends"

roles:
  researcher:
    role: Research Analyst
    backstory: "Expert researcher"
    goal: Research topics
    tools:
      - tavily_search
    tasks:
      research_task:
        description: "Research {{topic}}"
        expected_output: "Research report"

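For comparison, here is the same workflow expressed with the canonical workflow.yaml names. This equivalent is derived mechanically from the field mapping below (roles → agents, backstory → instructions, tasks → steps, description → action, topic → input); it is a sketch, not an official sample:

```yaml
# workflow.yaml — same workflow, canonical field names
framework: praisonai
input: "Research AI trends"

agents:
  researcher:
    role: Research Analyst
    instructions: "Expert researcher"
    goal: Research topics
    tools:
      - tavily_search

steps:
  - agent: researcher
    action: "Research {{input}}"
    expected_output: "Research report"
```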
Field Name Mapping

PraisonAI accepts both old and new field names. Use canonical names for new projects.
| Canonical (Recommended) | Alias (Also Works) | Purpose |
|---|---|---|
| agents | roles | Define agent personas |
| instructions | backstory | Agent behavior/persona |
| action | description | What the step does |
| steps | tasks (nested in roles) | Define work items |
| name | - | Workflow identifier |
| input | topic | Data passed INTO the workflow |
A-I-G-S Mnemonic - Easy to remember:
  • Agents - Who does the work
  • Instructions - How they behave
  • Goal - What they achieve
  • Steps - What they do

Root-Level Options

All options available at the root level of your YAML file.
| Field | Type | Default | Description |
|---|---|---|---|
| name | string | "Workflow" | Workflow identifier |
| description | string | "" | Workflow description |
| input | string | "" | Data passed INTO workflow (use {{input}} in steps) |
| topic | string | "" | Alias for input (legacy) |
| framework | string | "praisonai" | Framework: praisonai, crewai, autogen |
| process | string | "sequential" | Process type: sequential, hierarchical, workflow |
| manager_llm | string | - | LLM for hierarchical process manager |
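A minimal root-level configuration combining these fields might look like this (the name, description, and input values are illustrative):

```yaml
name: research-workflow
description: "Weekly AI trends research"
input: "Research AI trends"
framework: praisonai
process: sequential
```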
workflow:
  planning: true                    # Enable planning mode
  planning_llm: gpt-4o              # LLM for planning
  reasoning: true                   # Enable reasoning mode
  verbose: true                     # Verbose output
  default_llm: gpt-4o-mini          # Default LLM for all agents
  output: verbose                   # Output mode: silent, minimal, normal, verbose, debug
  memory_config:
    provider: chroma
    persist: true
| Field | Type | Default | Description |
|---|---|---|---|
| planning | bool | false | Enable planning mode |
| planning_llm | string | - | LLM for planning |
| reasoning | bool | false | Enable reasoning mode |
| verbose | bool | false | Verbose output |
| default_llm | string | "gpt-4o-mini" | Default LLM for agents |
| output | string | "normal" | Output mode preset |
| memory_config | object | - | Memory configuration |
Prevent token overflow errors with automatic context compaction.
# Simple enable
context: true

# Detailed configuration
context:
  auto_compact: true           # Enable auto-compaction
  compact_threshold: 0.8       # Trigger at 80% of context window
  strategy: smart              # smart | truncate | sliding_window | summarize | prune_tools
  tool_output_max: 10000       # Max tokens per tool output
| Field | Type | Default | Description |
|---|---|---|---|
| auto_compact / enabled | bool | false | Enable auto-compaction |
| compact_threshold / threshold | float | 0.8 | Trigger threshold (0-1) |
| strategy | string | "smart" | Compaction strategy |
| tool_output_max / max_tool_output_tokens | int | 10000 | Max tokens per tool |
Always enable context: true for workflows with search/crawl tools to prevent "context_length_exceeded" errors.
variables:
  topic: AI trends
  max_results: 5
  categories:
    - Machine Learning
    - NLP
    - Computer Vision
Use variables in steps with {{variable_name}} syntax.
| Pattern | Description |
|---|---|
| {{input}} | Workflow input |
| {{topic}} | Topic field value |
| {{previous_output}} | Previous step result |
| {{variable_name}} | Custom variable |
| {{item}} | Current loop item |
| {{item.field}} | Field in loop item |
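For example, a custom variable defined under variables: can be interpolated into a step's action (the agent name and action text are illustrative):

```yaml
variables:
  topic: AI trends
  max_results: 5

steps:
  - agent: researcher
    action: "Find {{max_results}} recent articles about {{topic}}"
```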
Define custom models for model routing.
models:
  cheap-fast:
    provider: openai
    complexity: [simple]
    cost_per_1k: 0.0001
    capabilities: [text]
    context_window: 16000
  
  premium:
    provider: anthropic
    complexity: [complex, very_complex]
    cost_per_1k: 0.015
    capabilities: [text, vision, function-calling]
    context_window: 200000
    supports_tools: true
    strengths: [reasoning, analysis]
| Field | Description |
|---|---|
| provider | openai, anthropic, google, openrouter |
| complexity | List: simple, moderate, complex, very_complex |
| cost_per_1k | Cost per 1,000 tokens in USD |
| capabilities | List: text, vision, function-calling |
| context_window | Max context window in tokens |
| supports_tools | Supports tool/function calling |
| strengths | List: reasoning, code-generation, etc. |
callbacks:
  on_workflow_start: log_start
  on_step_start: log_step_start
  on_step_complete: log_step_complete
  on_step_error: handle_error
  on_workflow_complete: log_complete
Callbacks are resolved from your tools.py file.

Agent Options

All options available for agent definitions.
| Field | Default | Description |
|---|---|---|
| role | - | Agent's job title |
| name | Agent ID | Display name |
| goal | "Complete the task" | Agent's objective |
| instructions | Generic | Agent behavior/persona |
| backstory | - | Alias for instructions |
| Field | Default | Description |
|---|---|---|
| llm | gpt-4o-mini | Model to use |
| function_calling_llm | Same as llm | Model for tool calls |
| reflect_llm | Same as llm | Model for self-reflection |
| system_template | - | Custom system prompt |
| prompt_template | - | Custom prompt template |
| response_template | - | Custom response template |
| Field | Default | Description |
|---|---|---|
| max_rpm | Unlimited | Max requests per minute |
| max_execution_time | 300 | Timeout in seconds |
| max_iter | 3 | Maximum iterations |
| min_reflect | 0 | Minimum reflection iterations |
| max_reflect | 3 | Maximum reflection iterations |
| cache | true | Enable response caching |
| Field | Default | Description |
|---|---|---|
| planning | false | Enable agent-level planning |
| reasoning | false | Enable reasoning mode |
| allow_delegation | false | Allow task delegation |
| verbose | false | Verbose output |
| tools | [] | List of tool names |
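Putting several of these options together, a single agent definition might look like the following sketch (all values are illustrative, and tavily_search stands in for any registered tool):

```yaml
agents:
  researcher:
    role: Research Analyst
    goal: Research topics thoroughly
    instructions: "You are an expert researcher."
    llm: gpt-4o-mini
    max_iter: 3
    max_execution_time: 300
    allow_delegation: false
    cache: true
    tools:
      - tavily_search
```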
Use the agent: field to specify specialized agent types:
agents:
  image_creator:
    agent: ImageAgent          # Specialized type
    role: Image Generator
    llm: dall-e-3
    style: natural
  
  narrator:
    agent: AudioAgent
    role: Audio Narrator
    llm: tts-1
    voice: alloy
  
  video_maker:
    agent: VideoAgent
    role: Video Creator
    llm: openai/sora-2
  
  document_reader:
    agent: OCRAgent
    role: Document Reader
    llm: mistral/mistral-ocr-latest
  
  researcher:
    agent: DeepResearchAgent
    role: Deep Researcher
    llm: o3-deep-research
| Agent Type | Purpose | Key Options |
|---|---|---|
| ImageAgent | Image generation | style, llm (dall-e-3) |
| AudioAgent | TTS/STT | voice, audio config |
| VideoAgent | Video generation | video config |
| OCRAgent | Text extraction | ocr config |
| DeepResearchAgent | Automated research | instructions |

Step Options

All options available for step definitions.
| Field | Required | Default | Description |
|---|---|---|---|
| agent | ✅* | - | Agent to execute (*not needed for patterns) |
| action | | {{input}} | What the step does |
| description | | - | Alias for action |
| name | | Auto-generated | Step identifier |
| expected_output | | - | Description of expected output |
| Field | Description |
|---|---|
| output_file | Save output to file path |
| create_directory | Create output directory if needed |
| output_json | JSON schema for structured output |
| output_pydantic | Pydantic model name from tools.py |
| output_variable | Store output in named variable |
steps:
  - agent: researcher
    action: "Find topics"
    output_json:
      type: array
      items:
        type: object
        properties:
          title: { type: string }
          url: { type: string }
    output_variable: topics
| Field | Description |
|---|---|
| context | List of dependent step names |
steps:
  - name: research_step
    agent: researcher
    action: "Research {{input}}"
  
  - name: writing_step
    agent: writer
    action: "Write based on: {{previous_output}}"
    context:
      - research_step    # Explicit dependency
| Field | Default | Description |
|---|---|---|
| async_execution | false | Run asynchronously |
| max_retries | 3 | Maximum retry attempts |
| guardrail | - | Guardrail function name |
| callback | - | Callback function name |
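A step combining these execution controls might look like this sketch; the guardrail and callback names are illustrative and would be resolved from your tools.py:

```yaml
steps:
  - agent: researcher
    action: "Research {{input}}"
    max_retries: 3
    async_execution: false
    guardrail: validate_research   # function name from tools.py
    callback: log_step_result      # function name from tools.py
```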

Workflow Patterns

Advanced workflow patterns available in both agents.yaml and workflow.yaml.

Parallel

Execute multiple agents concurrently

Route

Classify and route to specialized agents

Loop

Iterate over a list of items

Repeat

Repeat until condition is met

Include

Include modular recipes
steps:
  - name: parallel_research
    parallel:
      - agent: market_analyst
        action: "Research market trends"
      - agent: tech_analyst
        action: "Research technology"
  
  - agent: aggregator
    action: "Combine findings: {{previous_output}}"

Loop Options

| Field | Required | Default | Description |
|---|---|---|---|
| over | ✅* | - | Variable name to iterate |
| from_csv | | - | CSV file path to iterate |
| from_file | | - | File path to iterate lines |
| var_name | | "item" | Variable name for current item |
| parallel | | false | Execute iterations in parallel |
| max_workers | | - | Limit parallel workers |
| output_variable | | - | Store all outputs in variable |
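A loop step might be sketched as follows. The placement of the loop: key relative to agent/action is an assumption modeled on the parallel: pattern shown earlier; run praisonai workflow validate to confirm the exact shape:

```yaml
steps:
  - name: summarize_each
    loop:
      over: topics            # variable holding the list to iterate
      var_name: item          # current element, available as {{item}}
      parallel: true
      max_workers: 3
      output_variable: summaries
    agent: writer
    action: "Summarize {{item}}"
```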

Repeat Options

| Field | Default | Description |
|---|---|---|
| until | - | Condition string to match in output |
| max_iterations | 5 | Maximum iterations |
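A repeat step might be sketched like this; the nesting of the repeat: key is an assumption modeled on the other pattern examples, and the until string is illustrative:

```yaml
steps:
  - name: refine_draft
    repeat:
      until: "APPROVED"       # stop once this string appears in the output
      max_iterations: 5
    agent: editor
    action: "Improve the draft; reply APPROVED when satisfied: {{previous_output}}"
```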

Include Options

| Field | Default | Description |
|---|---|---|
| recipe | - | Recipe name or path |
| input | {{previous_output}} | Input for included recipe |
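An include step might look like this sketch (the recipe name is illustrative; the input line spells out the default explicitly):

```yaml
steps:
  - include:
      recipe: research-recipe        # recipe name or path
      input: "{{previous_output}}"   # default; shown for clarity
```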

Feature Compatibility Matrix

What works where:
| Feature | agents.yaml | workflow.yaml | Notes |
|---|---|---|---|
| Agent Definition | ✅ | ✅ | Use agents: (canonical) or roles: |
| Steps/Tasks | ✅ | ✅ | Use steps: (canonical) |
| Workflow Patterns | ✅ | ✅ | parallel, route, loop, repeat |
| Include Recipes | ✅ | ✅ | include: in steps |
| Variables | ✅ | ✅ | variables: section |
| Context Management | ✅ | ✅ | context: section |
| Planning Mode | ✅ | ✅ | workflow.planning: true |
| Reasoning Mode | ✅ | ✅ | workflow.reasoning: true |
| Memory Config | ✅ | ✅ | workflow.memory_config: |
| Custom Models | ✅ | ✅ | models: section |
| Callbacks | ✅ | ✅ | callbacks: section |
| Specialized Agents | ✅ | ✅ | agent: ImageAgent, etc. |
| Structured Output | ✅ | ✅ | output_json, output_pydantic |
Full Feature Parity! Both file formats support all features. The only difference is naming conventions.

What’s NOT Possible

These limitations apply to both agents.yaml and workflow.yaml:
| Limitation | Workaround |
|---|---|
| Nested loops | Use multi-step loop with sequential steps |
| Conditional branching mid-step | Use route: pattern instead |
| Dynamic agent creation | Pre-define all agents in agents: section |
| Cross-workflow state | Use include: with explicit input passing |
| Real-time streaming in loops | Streaming works per-step, not across loop |

Migration Guide

From agents.yaml to workflow.yaml

1. Rename container: roles: → agents:
2. Rename agent fields: backstory: → instructions:
3. Extract tasks to steps: move nested tasks: to top-level steps:
4. Rename step fields: description: → action:
5. Update input reference: topic: → input: (optional but recommended)

Example (before migration, agents.yaml):
framework: praisonai
topic: "Research AI"

roles:
  researcher:
    role: Analyst
    backstory: "Expert researcher"
    goal: Research
    tasks:
      research_task:
        description: "Research {{topic}}"
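Applying the five steps above, the same configuration in canonical workflow.yaml form would be (derived from the field mapping; a sketch, not an official sample):

```yaml
framework: praisonai
input: "Research AI"

agents:
  researcher:
    role: Analyst
    instructions: "Expert researcher"
    goal: Research

steps:
  - agent: researcher
    action: "Research {{input}}"
```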

Validation

Validate your YAML configuration:
praisonai workflow validate my-workflow.yaml
Output shows:
  • ✅ Valid fields
  • 💡 Suggestions for canonical names
  • ❌ Errors if invalid

Best Practices

Use Canonical Names

agents, instructions, action, steps, input

Enable Context Management

context: true for tool-heavy workflows

Define Expected Output

Always specify expected_output for clarity

Use Variables

Centralize reusable values in variables:
Run praisonai workflow validate <file.yaml> to check for issues and get suggestions for canonical field names.