Agents are the primary execution unit in PraisonAI. They combine LLM providers, tools, and memory for autonomous task execution.

Quick Start

1. Create a Simple Agent

```rust
use praisonai::Agent;

let agent = Agent::new()
    .name("Assistant")
    .instructions("You are a helpful AI assistant")
    .build()?;

let response = agent.chat("Hello!").await?;
println!("{}", response);
```
2. One-Liner Agent

```rust
use praisonai::Agent;

// Minimal agent with just instructions
let agent = Agent::simple("You are a helpful assistant")?;
let response = agent.start("Help me understand Rust").await?;
```
3. Agent with Tools

```rust
use praisonai::{Agent, tool};

#[tool]
fn search(query: String) -> String {
    format!("Results for: {}", query)
}

let agent = Agent::new()
    .name("Researcher")
    .instructions("Search for information when asked")
    .tool(search)
    .build()?;

let response = agent.chat("Find info about Rust").await?;
```
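The `#[tool]` attribute turns a plain function into something the agent can invoke during its tool-call loop. A hypothetical sketch of the kind of trait such a macro could expand to (the trait name and method names here are assumptions, not the crate's actual API):

```rust
// Hypothetical Tool trait; the real macro expansion may differ.
trait Tool {
    fn name(&self) -> &str;
    fn call(&self, input: String) -> String;
}

// What `#[tool] fn search(...)` could plausibly generate:
struct Search;

impl Tool for Search {
    fn name(&self) -> &str {
        "search"
    }
    fn call(&self, query: String) -> String {
        format!("Results for: {}", query)
    }
}

fn main() {
    // Trait objects let the agent hold heterogeneous tools in one registry.
    let tools: Vec<Box<dyn Tool>> = vec![Box::new(Search)];
    for t in &tools {
        println!("{} -> {}", t.name(), t.call("Rust".to_string()));
    }
}
```

Erasing each tool behind a trait object is what allows `.tool(search)` to accept arbitrary functions with different bodies.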

User Interaction Flow

(Flow diagram omitted from this extract.)
Agent Struct

```rust
pub struct Agent {
    id: String,
    name: String,
    instructions: String,
    llm: Arc<dyn LlmProvider>,
    tools: Arc<RwLock<ToolRegistry>>,
    memory: Arc<RwLock<Memory>>,
    config: AgentConfig,
}
```
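The `Arc<RwLock<…>>` fields are what let one agent be shared across tasks while its tools and memory remain mutable: `Arc` provides shared ownership, `RwLock` allows many concurrent readers or one writer. A minimal, self-contained illustration of that pattern using `std::sync` (an async runtime would typically use an awaitable lock such as `tokio::sync::RwLock` instead):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

fn main() {
    // Shared, mutable registry: every clone of the Arc
    // points at the same lock-protected map.
    let registry: Arc<RwLock<HashMap<String, String>>> =
        Arc::new(RwLock::new(HashMap::new()));

    let handle = Arc::clone(&registry);
    handle
        .write()
        .unwrap()
        .insert("search".into(), "web search tool".into());

    // Reads through the original handle see the write made via the clone.
    let count = registry.read().unwrap().len();
    println!("registered tools: {}", count); // prints "registered tools: 1"
}
```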

Runtime Methods

| Method | Signature | Description |
| --- | --- | --- |
| `chat(prompt)` | `async fn chat(&self, &str) -> Result<String>` | Main interaction method |
| `start(prompt)` | `async fn start(&self, &str) -> Result<String>` | Alias for `chat` |
| `run(task)` | `async fn run(&self, &str) -> Result<String>` | Alias for `chat` |
| `add_tool(tool)` | `async fn add_tool(&self, impl Tool)` | Add a tool at runtime |
| `clear_memory()` | `async fn clear_memory(&self) -> Result<()>` | Clear conversation memory |
| `history()` | `async fn history(&self) -> Result<Vec<Message>>` | Get message history |

Accessor Methods

| Method | Signature | Description |
| --- | --- | --- |
| `id()` | `fn id(&self) -> &str` | Get agent ID |
| `name()` | `fn name(&self) -> &str` | Get agent name |
| `instructions()` | `fn instructions(&self) -> &str` | Get instructions |
| `model()` | `fn model(&self) -> &str` | Get model name |
| `tool_count()` | `async fn tool_count(&self) -> usize` | Number of tools |

AgentBuilder

Fluent API for constructing agents.

Builder Methods

| Method | Signature | Default | Description |
| --- | --- | --- | --- |
| `new()` | `fn new() -> AgentBuilder` | - | Create builder |
| `name(n)` | `fn name(impl Into<String>) -> Self` | `"agent"` | Set name |
| `instructions(i)` | `fn instructions(impl Into<String>) -> Self` | Generic assistant prompt | Set system prompt |
| `model(m)` | `fn model(impl Into<String>) -> Self` | `"gpt-4o-mini"` | Set LLM model |
| `llm(m)` | `fn llm(impl Into<String>) -> Self` | - | Alias for `model` |
| `api_key(k)` | `fn api_key(impl Into<String>) -> Self` | From env | Set API key |
| `base_url(u)` | `fn base_url(impl Into<String>) -> Self` | OpenAI default | Set API endpoint |
| `temperature(t)` | `fn temperature(f32) -> Self` | `0.7` | LLM temperature |
| `max_tokens(n)` | `fn max_tokens(u32) -> Self` | `None` | Max response tokens |
| `tool(t)` | `fn tool(impl Tool) -> Self` | - | Add a tool |
| `tools(ts)` | `fn tools(impl IntoIterator) -> Self` | - | Add multiple tools |
| `memory(b)` | `fn memory(bool) -> Self` | `true` | Enable/disable memory |
| `max_iterations(n)` | `fn max_iterations(usize) -> Self` | `10` | Max tool-call loops |
| `verbose(b)` | `fn verbose(bool) -> Self` | `false` | Enable logging |
| `stream(b)` | `fn stream(bool) -> Self` | `true` | Enable streaming |
| `build()` | `fn build(self) -> Result<Agent>` | - | Build agent |
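The fluent chaining above follows the standard consuming-builder pattern: each setter takes `self` by value and returns it, so calls compose in any order and end with `build()`. A self-contained sketch of that pattern, with defaults mirroring the table (the struct and field names here are illustrative, not the crate's internals):

```rust
// Illustrative settings struct; defaults mirror the builder table above.
#[derive(Debug, Clone)]
struct AgentSettings {
    name: String,
    model: String,
    temperature: f32,
}

struct AgentSettingsBuilder {
    settings: AgentSettings,
}

impl AgentSettingsBuilder {
    fn new() -> Self {
        Self {
            settings: AgentSettings {
                name: "agent".into(),
                model: "gpt-4o-mini".into(),
                temperature: 0.7,
            },
        }
    }

    // Each setter consumes and returns Self, enabling chaining.
    fn name(mut self, n: impl Into<String>) -> Self {
        self.settings.name = n.into();
        self
    }

    fn temperature(mut self, t: f32) -> Self {
        self.settings.temperature = t;
        self
    }

    fn build(self) -> AgentSettings {
        self.settings
    }
}

fn main() {
    let s = AgentSettingsBuilder::new()
        .name("Researcher")
        .temperature(0.3)
        .build();
    // Unset fields keep their defaults.
    println!("{} {} {}", s.name, s.model, s.temperature);
}
```

`impl Into<String>` in the setters is what lets callers pass either `&str` literals or owned `String`s, matching the signatures in the table.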

Configuration Options

AgentConfig

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `max_iterations` | `usize` | `10` | Max tool-calling iterations |
| `verbose` | `bool` | `false` | Enable verbose output |
| `stream` | `bool` | `true` | Enable response streaming |
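A config struct with these defaults would conventionally implement `Default`, which also enables struct-update syntax for overriding a single field. A sketch under that assumption (the field names follow the table; the `impl` is illustrative):

```rust
// Field names and defaults taken from the AgentConfig table.
#[derive(Debug, Clone)]
struct AgentConfig {
    max_iterations: usize,
    verbose: bool,
    stream: bool,
}

impl Default for AgentConfig {
    fn default() -> Self {
        Self {
            max_iterations: 10,
            verbose: false,
            stream: true,
        }
    }
}

fn main() {
    // Override one field; `..Default::default()` fills in the rest.
    let cfg = AgentConfig {
        verbose: true,
        ..AgentConfig::default()
    };
    println!("{} {} {}", cfg.max_iterations, cfg.verbose, cfg.stream);
}
```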

Common Patterns

Research Agent with Multiple Tools

```rust
use praisonai::{Agent, tool};

#[tool]
fn web_search(query: String) -> String {
    format!("Web results for: {}", query)
}

#[tool]
fn calculate(expression: String) -> String {
    format!("Calculated: {}", expression)
}

let researcher = Agent::new()
    .name("Researcher")
    .instructions("You are a research assistant. Use tools to find and analyze information.")
    .model("gpt-4o")
    .tool(web_search)
    .tool(calculate)
    .temperature(0.3)
    .max_iterations(20)
    .build()?;

let result = researcher.chat("Research the population of Tokyo and calculate growth rate").await?;
```

Conversational Agent with Memory

```rust
use praisonai::Agent;

let assistant = Agent::new()
    .name("Assistant")
    .instructions("You are a friendly assistant. Remember our conversation.")
    .memory(true)
    .build()?;

// Conversation persists across calls
assistant.chat("My name is Alice").await?;
assistant.chat("What's my name?").await?; // Remembers "Alice"
```
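Conversation memory of this kind usually amounts to an append-only message log that is replayed to the LLM on every call. A minimal sketch of that idea, assuming a hypothetical `Message` shape behind the `Arc<RwLock<Memory>>` field shown earlier (the crate's actual types may differ):

```rust
use std::sync::{Arc, RwLock};

// Hypothetical message type; the crate's Message may differ.
#[derive(Debug, Clone)]
struct Message {
    role: String,
    content: String,
}

#[derive(Default)]
struct Memory {
    messages: Vec<Message>,
}

impl Memory {
    fn push(&mut self, role: &str, content: &str) {
        self.messages.push(Message {
            role: role.into(),
            content: content.into(),
        });
    }
}

fn main() {
    let memory = Arc::new(RwLock::new(Memory::default()));
    memory.write().unwrap().push("user", "My name is Alice");
    memory.write().unwrap().push("assistant", "Nice to meet you, Alice!");

    // Each new turn reads the full history back, which is
    // how the agent "remembers" earlier messages.
    let turns = memory.read().unwrap().messages.len();
    println!("turns so far: {}", turns); // prints "turns so far: 2"
}
```

Disabling memory with `.memory(false)` would then simply skip this replay, making every call stateless.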

Best Practices

- Instructions: clear instructions lead to better outputs. Include the agent's role, capabilities, and constraints.
- Iterations: set `max_iterations` higher for complex tool chains, lower for simple tasks.
- Streaming: keep `stream(true)` for a better user experience with long outputs.
- Error handling: all async methods return `Result`; use the `?` operator or `match`.
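The error-handling point can be made concrete with plain `Result` handling; the error type below is a stand-in for whatever the crate actually returns:

```rust
use std::fmt;

// Stand-in error type; the crate's error will differ.
#[derive(Debug)]
struct AgentError(String);

impl fmt::Display for AgentError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "agent error: {}", self.0)
    }
}

// Stand-in for an agent call that can fail.
fn chat(prompt: &str) -> Result<String, AgentError> {
    if prompt.is_empty() {
        return Err(AgentError("empty prompt".into()));
    }
    Ok(format!("echo: {}", prompt))
}

fn main() {
    // Option 1: handle both cases explicitly with `match`.
    match chat("Hello!") {
        Ok(reply) => println!("{}", reply),
        Err(e) => eprintln!("{}", e),
    }

    // Option 2: propagate with `?` from a function that returns Result.
    fn run() -> Result<(), AgentError> {
        let reply = chat("Hi")?;
        println!("{}", reply);
        Ok(())
    }
    run().unwrap();
}
```

Use `match` at the application boundary where you want to report errors, and `?` inside functions that can simply pass the error upward.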