Use with or async with on an AgentTeam (PraisonAIAgents) to automatically close connections, memory stores, and per-agent resources when your workflow finishes.

Quick Start

1. Sync usage

from praisonaiagents import Agent, Task, PraisonAIAgents

researcher = Agent(name="Researcher", instructions="Research topics thoroughly")
task = Task(description="Research quantum computing", agent=researcher)

with PraisonAIAgents(agents=[researcher], tasks=[task]) as workflow:
    result = workflow.start()

print(result)
2. Async usage

import asyncio
from praisonaiagents import Agent, Task, PraisonAIAgents

async def main():
    researcher = Agent(name="Researcher", instructions="Research topics")
    task = Task(description="Research AI trends", agent=researcher)

    async with PraisonAIAgents(agents=[researcher], tasks=[task]) as workflow:
        result = await workflow.astart()
    print(result)

asyncio.run(main())

How It Works

| Entry Point | Method | Description |
| --- | --- | --- |
| `with team:` | `__enter__` / `__exit__` | Sync context manager; calls `close()` on exit |
| `async with team:` | `__aenter__` / `__aexit__` | Async context manager; prefers `aclose()` when available |
| Manual cleanup | `team.close()` | Explicit cleanup call |
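The table above maps Python's context-manager protocol onto cleanup. As an illustrative sketch (not the actual PraisonAIAgents source), this is how `__enter__` / `__exit__` wire `with team:` to `close()`:

```python
class TeamSketch:
    """Minimal stand-in for an AgentTeam, for illustration only."""

    def __init__(self):
        self.closed = False

    def close(self):
        # In the real library this releases connections, memory stores,
        # and per-agent resources.
        self.closed = True

    def __enter__(self):
        # `with team:` binds the return value to the `as` target.
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs on normal exit AND when an exception escapes the block.
        self.close()
        return False  # don't suppress exceptions


with TeamSketch() as team:
    pass
print(team.closed)  # → True
```

Because `__exit__` runs even when the block raises, cleanup is guaranteed on both the success and failure paths.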

Common Patterns

Using with shared memory

from praisonaiagents import Agent, Task, PraisonAIAgents
from praisonaiagents.memory import ChromaMemory

researcher = Agent(name="Researcher", instructions="Research and remember findings")
task = Task(description="Research renewable energy trends", agent=researcher)

with PraisonAIAgents(
    agents=[researcher], 
    tasks=[task],
    shared_memory=ChromaMemory()
) as workflow:
    result = workflow.start()
# ChromaDB connection automatically closed when the block exits

Using inside FastAPI endpoint

from fastapi import FastAPI
from praisonaiagents import Agent, Task, PraisonAIAgents

app = FastAPI()

@app.post("/research")
async def research_topic(topic: str):
    researcher = Agent(name="Researcher", instructions="Research topics")
    task = Task(description=f"Research {topic}", agent=researcher)
    
    async with PraisonAIAgents(agents=[researcher], tasks=[task]) as workflow:
        result = await workflow.astart()
    
    # All resources cleaned up even if an exception occurred
    return {"research": result}

Explicit cleanup in long-running worker

from praisonaiagents import Agent, Task, PraisonAIAgents

# Initialize once
researcher = Agent(name="Researcher", instructions="Research topics")
workflow = PraisonAIAgents(agents=[researcher])

try:
    # Use multiple times
    workflow.add_task(Task(description="Research AI", agent=researcher))
    result1 = workflow.start()
    
    workflow.add_task(Task(description="Research ML", agent=researcher))
    result2 = workflow.start()
finally:
    # Clean up when shutting down
    workflow.close()

User interaction flow

A user sends a research request to your FastAPI application. The endpoint creates an AgentTeam with a research agent inside an async with block. The agent uses ChromaDB for memory storage and external APIs for research. Whether the request completes successfully or fails with an exception, the async with block automatically runs cleanup (preferring aclose() when available, otherwise close()), ensuring ChromaDB connections, agent resources, and any open file handles are properly released without manual intervention.

Best Practices

Use with or async with instead of calling close() manually. Context managers guarantee cleanup even when exceptions occur.

# ✅ Good - automatic cleanup
with PraisonAIAgents(agents=[agent]) as workflow:
    result = workflow.start()

# ❌ Risky - cleanup might be skipped on exception
workflow = PraisonAIAgents(agents=[agent])
result = workflow.start()
workflow.close()  # Might not be called if exception occurs

Resource cleanup failures are logged as warnings but don't raise exceptions. This prevents cleanup failures from masking the original issue.

# Cleanup failures won't raise exceptions
async with PraisonAIAgents(agents=[agent]) as workflow:
    result = await workflow.astart()
# Any agent.close() or memory.close() failures are logged, not raised
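The best-effort behavior described above can be sketched as follows. This is an illustrative pattern, not the library's actual implementation; `FlakyResource` and `close_all` are hypothetical names:

```python
import logging

logger = logging.getLogger("cleanup")


class FlakyResource:
    """Hypothetical resource whose close() fails."""

    def close(self):
        raise RuntimeError("connection already dropped")


def close_all(resources):
    # Best-effort cleanup: each failure is logged as a warning, never
    # re-raised, so it cannot mask an exception from the workflow body.
    failures = 0
    for res in resources:
        try:
            res.close()
        except Exception as exc:
            failures += 1
            logger.warning("cleanup failed for %r: %s", res, exc)
    return failures


print(close_all([FlakyResource()]))  # → 1 (logged, not raised)
```

Swallowing cleanup errors is the conventional trade-off here: if the workflow body already raised, a second exception from `close()` would otherwise replace the one you actually need to debug.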

Once you exit a with block, consider the AgentTeam closed. Create a new one for additional work.

# ✅ Good - create new team for each batch
with PraisonAIAgents(agents=[agent1]) as workflow1:
    result1 = workflow1.start()

with PraisonAIAgents(agents=[agent2]) as workflow2:
    result2 = workflow2.start()

# ❌ Bad - reusing after cleanup
with PraisonAIAgents(agents=[agent]) as workflow:
    result1 = workflow.start()
# workflow is closed here
workflow.start()  # Undefined behavior

Instead of sharing an AgentTeam across requests, create one team per request inside the with block. This isolates resources and prevents connection leaks.

# ✅ Good - isolated per request
@app.post("/analyze")
async def analyze(data: str):
    agent = Agent(name="Analyzer", instructions="Analyze data")
    async with PraisonAIAgents(agents=[agent]) as workflow:
        return await workflow.astart()

# ❌ Risky - shared team across requests
global_workflow = PraisonAIAgents(agents=[agent])  # Shared state problems

Configuration Options

| Method | Async | Description |
| --- | --- | --- |
| `close()` | No | Closes all agents, shared memory, and context manager resources. Best-effort; logs warnings on failure. |
| `__enter__` / `__exit__` | No | Enables `with team: …`; calls `close()` on exit. |
| `__aenter__` / `__aexit__` | Yes | Enables `async with team: …`; prefers `aclose()` on agents/memory when available, falls back to sync `close()`. |
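The "prefers aclose(), falls back to close()" behavior in the last row can be sketched with a `getattr` check. This is an illustrative pattern under stated assumptions, not the library's actual code; `TeamSketch` and `ResourceSketch` are hypothetical:

```python
import asyncio


class ResourceSketch:
    """Hypothetical resource exposing both sync and async close."""

    def __init__(self):
        self.closed_via = None

    def close(self):
        self.closed_via = "close"

    async def aclose(self):
        self.closed_via = "aclose"


class TeamSketch:
    def __init__(self, resources):
        self.resources = resources

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        for res in self.resources:
            # Prefer the async cleanup hook when the resource has one,
            # otherwise fall back to the sync close().
            aclose = getattr(res, "aclose", None)
            if aclose is not None:
                await aclose()
            else:
                res.close()
        return False


async def main():
    res = ResourceSketch()
    async with TeamSketch([res]):
        pass
    return res.closed_via


print(asyncio.run(main()))  # → aclose
```

A resource that only defines `close()` would take the fallback branch instead, so mixed sync/async resources can live in the same team.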

Auto-generated SDK reference

Agent Management

Core agent concepts and configuration

Memory Systems

Shared memory stores that benefit from automatic cleanup