Learn how to create AI agents that can efficiently handle repetitive tasks through automated loops.
A workflow optimization pattern where agents handle repetitive tasks through automated loops, processing multiple instances efficiently while maintaining consistency.
Set your OpenAI API key as an environment variable in your terminal:
```bash
export OPENAI_API_KEY=your_api_key_here
```
Create a file
Create a new file repetitive_agent.py with the basic setup:
```python
from praisonaiagents import Agent, Task, PraisonAIAgents

agent = Agent(
    instructions="You are a loop agent that creates a loop of tasks."
)

task = Task(
    description="Create the list of tasks to be looped through.",
    agent=agent,
    task_type="loop",
    input_file="tasks.csv"
)

agents = PraisonAIAgents(
    agents=[agent],
    tasks=[task],
    process="workflow",
    max_iter=30
)

agents.start()
```
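The task above reads its loop items from tasks.csv, which this guide does not show. A minimal example might look like the following; the single `task` column is illustrative, since each row simply becomes one loop iteration:

```csv
task
"Summarise yesterday's support tickets"
"Draft a status update for the team"
"List open action items from the meeting notes"
```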
Start Agents
Type this in your terminal to run your agents:
```bash
python repetitive_agent.py
```
Requirements
- Python 3.10 or higher
- OpenAI API key (generate one from the OpenAI dashboard); other model providers can be configured as well
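If the package is not installed yet, install it first; the distribution name below is assumed to match the `praisonaiagents` import used throughout this guide:

```bash
pip install praisonaiagents
```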
Process batches of tasks from CSV or other structured files:
```python
from praisonaiagents import Agent, Task, PraisonAIAgents

# Create agent for processing questions
qa_agent = Agent(
    name="QA Bot",
    role="Answer questions",
    goal="Provide accurate answers to user questions"
)

# Create loop task that processes questions from CSV
loop_task = Task(
    name="process_questions",
    description="Answer each question",
    expected_output="Answer for each question",
    agent=qa_agent,
    task_type="loop",
    input_file="questions.csv"  # Each row becomes a subtask
)

# Create workflow
agents = PraisonAIAgents(
    agents=[qa_agent],
    tasks=[loop_task],
    process="workflow"  # Use workflow for loop tasks
)

# Run the batch processing
result = agents.start()
```
The input CSV file should have headers that correspond to task parameters:
```csv
question,context,priority
"What is Python?","Programming language context","high"
"Explain machine learning","AI and ML context","medium"
"How does Docker work?","Container technology context","high"
```
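If you assemble the input file from application data, a small helper like this (a sketch using only Python's standard csv module, matching the file name and columns above) keeps the header row and quoting consistent:

```python
import csv

# Hypothetical helper: write loop inputs with a proper header row and quoting
def write_questions(rows, path="questions.csv"):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["question", "context", "priority"])
        writer.writeheader()
        writer.writerows(rows)

write_questions([
    {"question": "What is Python?", "context": "Programming language context", "priority": "high"},
    {"question": "Explain machine learning", "context": "AI and ML context", "priority": "medium"},
])
```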
To speed up large batches, you can configure parallel processing:

```python
# Configure parallel processing for better performance
agents = PraisonAIAgents(
    agents=[qa_agent],
    tasks=[loop_task],
    process="workflow",
    max_workers=5,  # Process 5 items in parallel
    batch_size=10   # Process in batches of 10
)
```
To monitor a long-running batch, track progress with a callback:

```python
from praisonaiagents.callbacks import Callback

class BatchProgressTracker(Callback):
    def __init__(self):
        self.processed = 0
        self.total = 0

    def on_task_start(self, task, **kwargs):
        if task.task_type == "loop" and self.total == 0:
            # Count data rows by counting lines (minus the header);
            # avoids parsing the CSV just to get a total
            try:
                with open(task.input_file, 'r', encoding='utf-8') as f:
                    self.total = sum(1 for _ in f) - 1
            except FileNotFoundError:
                print(f"Warning: Input file not found at {task.input_file}. Progress will not be shown.")
                self.total = 0

    def on_subtask_complete(self, subtask, result, **kwargs):
        self.processed += 1
        if self.total > 0:
            print(f"Progress: {self.processed}/{self.total} ({self.processed / self.total * 100:.1f}%)")
        else:
            print(f"Progress: {self.processed} items processed")

# Use progress tracker
agents = PraisonAIAgents(
    agents=[qa_agent],
    tasks=[loop_task],
    callbacks=[BatchProgressTracker()]
)
```
File Validation: Always validate input files before processing
```python
import os
import csv

def validate_input_file(filepath):
    if not os.path.exists(filepath):
        raise FileNotFoundError(f"Input file not found: {filepath}")
    with open(filepath, 'r') as f:
        reader = csv.reader(f)
        headers = next(reader, None)
        if not headers:
            raise ValueError("CSV file is empty or has no headers")
    return True
```
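With that helper in place, you can fail fast before the workflow starts; this sketch reuses the QA agent and questions file from the batch-processing example above:

```python
# Check the input up front so a missing or empty file
# fails immediately instead of partway through the run
validate_input_file("questions.csv")

loop_task = Task(
    name="process_questions",
    description="Answer each question",
    expected_output="Answer for each question",
    agent=qa_agent,
    task_type="loop",
    input_file="questions.csv"
)
```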
Memory Management: For large files, use streaming
```python
loop_task = Task(
    name="process_large_file",
    description="Process item",
    expected_output="Result",
    agent=processor,
    task_type="loop",
    input_file="large_data.csv",
    streaming=True,  # Process one item at a time
    chunk_size=100   # Read 100 rows at a time
)
```
Result Storage: Save results progressively
```python
loop_task = Task(
    name="process_and_save",
    description="Process and save",
    expected_output="Saved result",
    agent=processor,
    task_type="loop",
    input_file="data.csv",
    output_file="results.csv",  # Save results to file
    append_mode=True            # Append results as processed
)
```
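Because results are appended as each item completes, you can inspect partial output mid-run with the standard csv module; the exact columns in results.csv depend on what the task writes, so treat this as a sketch:

```python
import csv

# Peek at whatever has been appended to results.csv so far
with open("results.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(row)
```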