Learn how to create AI agents that can efficiently handle repetitive tasks through automated loops.
A workflow optimization pattern where agents handle repetitive tasks through automated loops, processing multiple instances efficiently while maintaining consistency.
Set your OpenAI API key as an environment variable in your terminal:
```bash
export OPENAI_API_KEY=your_api_key_here
```
Create a file
Create a new file repetitive_agent.py with the basic setup:
```python
from praisonaiagents import Agent, Task, PraisonAIAgents

agent = Agent(
    instructions="You are a loop agent that creates a loop of tasks."
)

task = Task(
    description="Create the list of tasks to be looped through.",
    agent=agent,
    task_type="loop",
    input_file="tasks.csv"
)

agents = PraisonAIAgents(
    agents=[agent],
    tasks=[task],
    process="workflow",
    max_iter=30
)

agents.start()
```
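The task above reads its loop items from tasks.csv, so create that file next to the script. The contents below are a hypothetical example; each row becomes one iteration of the loop:

```csv
task
Write a haiku about the ocean
Summarize the benefits of unit testing
Draft a tweet announcing a new feature
```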
Start Agents
Type this in your terminal to run your agents:
```bash
python repetitive_agent.py
```
Requirements
Python 3.10 or higher
OpenAI API key (generate one from the OpenAI dashboard). To use other models instead, see the models guide.
Loop tasks can automatically process CSV and text files to create dynamic subtasks. This powerful feature enables batch processing of data without manual task creation.
```python
from praisonaiagents import Agent, Task, PraisonAIAgents

# Create a CSV file with customer issues
with open("customers.csv", "w") as f:
    f.write("name,issue\n")
    f.write("John,Billing problem with subscription\n")
    f.write("Jane,Technical issue with login\n")
    f.write("Sarah,Request for feature enhancement\n")

# Create specialized support agent
support_agent = Agent(
    name="Support Agent",
    role="Customer support specialist",
    goal="Resolve customer issues efficiently",
    backstory="Expert support agent with years of experience",
    llm="gpt-4o-mini"  # Specify the LLM model
)

# Loop task automatically creates subtasks from CSV
loop_task = Task(
    name="Process all customers",
    description="Handle each customer issue",
    expected_output="Resolution for the customer issue",
    agent=support_agent,
    task_type="loop",
    input_file="customers.csv"  # Automatically processes each row
)

# Use PraisonAIAgents with workflow process
agents = PraisonAIAgents(
    agents=[support_agent],
    tasks=[loop_task],
    process="workflow",  # Required for loop tasks
    max_iter=10  # Prevent infinite loops
)

# Start processing
results = agents.start()

# Each row will be processed as:
# - Subtask 1: "Handle each customer issue: John,Billing problem with subscription"
# - Subtask 2: "Handle each customer issue: Jane,Technical issue with login"
# - Subtask 3: "Handle each customer issue: Sarah,Request for feature enhancement"
```
Loop tasks can also process text files line by line:
```python
# Create a text file with URLs
with open("urls.txt", "w") as f:
    f.write("https://example.com\n")
    f.write("https://test.com\n")
    f.write("https://demo.com\n")

# Create URL analyzer agent
url_agent = Agent(
    name="URL Analyzer",
    role="Website analyzer",
    goal="Analyze websites for SEO and performance"
)

# Process each URL
url_task = Task(
    name="Analyze URLs",
    description="Analyze each website",
    expected_output="SEO and performance report",
    agent=url_agent,
    task_type="loop",
    input_file="urls.txt"  # Each line becomes a subtask
)
```
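The snippet above only defines the task; to execute it, wrap it in the same workflow used for the CSV example. A minimal sketch (the max_iter value is illustrative):

```python
# Run the URL loop with the same workflow pattern as the CSV example
agents = PraisonAIAgents(
    agents=[url_agent],
    tasks=[url_task],
    process="workflow",  # Required for loop tasks
    max_iter=10          # Illustrative cap to prevent runaway loops
)
results = agents.start()
```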
Process batches of tasks from CSV or other structured files:
```python
from praisonaiagents import Agent, Task, PraisonAIAgents

# Create agent for processing questions
qa_agent = Agent(
    name="QA Bot",
    role="Answer questions",
    goal="Provide accurate answers to user questions"
)

# Create loop task that processes questions from CSV
loop_task = Task(
    name="process_questions",
    description="Answer each question",
    expected_output="Answer for each question",
    agent=qa_agent,
    task_type="loop",
    input_file="questions.csv"  # Each row becomes a subtask
)

# Create workflow
agents = PraisonAIAgents(
    agents=[qa_agent],
    tasks=[loop_task],
    process="workflow"  # Use workflow for loop tasks
)

# Run the batch processing
result = agents.start()
```
The input CSV file should have headers that correspond to task parameters:
```csv
question,context,priority
"What is Python?","Programming language context","high"
"Explain machine learning","AI and ML context","medium"
"How does Docker work?","Container technology context","high"
```
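If you prefer to build the input file in code, Python's standard csv module handles the quoting for you. A minimal sketch using the rows above:

```python
import csv

# Generate questions.csv with the headers shown above
rows = [
    ("What is Python?", "Programming language context", "high"),
    ("Explain machine learning", "AI and ML context", "medium"),
    ("How does Docker work?", "Container technology context", "high"),
]
with open("questions.csv", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerow(["question", "context", "priority"])
    writer.writerows(rows)
```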
For larger batches, parallel processing can be configured on the workflow:

```python
# Configure parallel processing for better performance
agents = PraisonAIAgents(
    agents=[qa_agent],
    tasks=[loop_task],
    process="workflow",
    max_workers=5,  # Process 5 items in parallel
    batch_size=10   # Process in batches of 10
)
```
A custom callback can track progress as subtasks complete:

```python
from praisonaiagents.callbacks import Callback

class BatchProgressTracker(Callback):
    def __init__(self):
        self.processed = 0
        self.total = 0

    def on_task_start(self, task, **kwargs):
        if task.task_type == "loop" and self.total == 0:
            # Count rows in the input file, excluding the header line
            try:
                with open(task.input_file, 'r', encoding='utf-8') as f:
                    self.total = sum(1 for _ in f) - 1
            except FileNotFoundError:
                print(f"Warning: Input file not found at {task.input_file}. Progress will not be shown.")
                self.total = 0

    def on_subtask_complete(self, subtask, result, **kwargs):
        self.processed += 1
        if self.total:  # Guard against division by zero when the count is unknown
            print(f"Progress: {self.processed}/{self.total} ({self.processed/self.total*100:.1f}%)")

# Use progress tracker
agents = PraisonAIAgents(
    agents=[qa_agent],
    tasks=[loop_task],
    callbacks=[BatchProgressTracker()]
)
```
File Validation: Always validate input files before processing
```python
import os
import csv

def validate_input_file(filepath):
    if not os.path.exists(filepath):
        raise FileNotFoundError(f"Input file not found: {filepath}")
    with open(filepath, 'r') as f:
        reader = csv.reader(f)
        headers = next(reader, None)
        if not headers:
            raise ValueError("CSV file is empty or has no headers")
    return True
```
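For example, you might guard the run with it. A minimal sketch, assuming the `agents` workflow object defined earlier:

```python
# Abort before starting the workflow if the input file is missing or empty
try:
    validate_input_file("questions.csv")
except (FileNotFoundError, ValueError) as e:
    print(f"Aborting batch run: {e}")
else:
    results = agents.start()
```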
Memory Management: For large files, use streaming
```python
loop_task = Task(
    name="process_large_file",
    description="Process item",
    expected_output="Result",
    agent=processor,
    task_type="loop",
    input_file="large_data.csv",
    streaming=True,  # Process one item at a time
    chunk_size=100   # Read 100 rows at a time
)
```
Result Storage: Save results progressively
```python
loop_task = Task(
    name="process_and_save",
    description="Process and save",
    expected_output="Saved result",
    agent=processor,
    task_type="loop",
    input_file="data.csv",
    output_file="results.csv",  # Save results to file
    append_mode=True  # Append results as processed
)
```
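With append mode, partial results survive interruptions and can be inspected mid-run. A minimal sketch for reading them back (this assumes results.csv is written with a header row, which may vary):

```python
import csv

# Inspect progressively saved results (assumes a header row; format may vary)
with open("results.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row)
```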