Models
Run models locally with Ollama. Popular options:

- Recommended: ollama/llama3.2 (latest Llama)
- Reasoning: ollama/deepseek-r1 (reasoning model)
- Small: ollama/qwen3 (efficient)
- Code: ollama/codellama (coding tasks)
Setup
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Start Ollama server
ollama serve
# Pull a model
ollama pull llama3.2
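Before wiring up an agent, it can help to confirm the local server is actually reachable. A minimal sketch, assuming Ollama's default address `http://localhost:11434` and its `/api/tags` endpoint (which lists locally pulled models):

```python
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434"  # Ollama's default server address (assumption)

def ollama_available(url: str = OLLAMA_URL, timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at the given URL."""
    try:
        # /api/tags lists locally pulled models; any response means the server is up
        with urllib.request.urlopen(f"{url}/api/tags", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

print(ollama_available())
```

If this prints False, start the server with `ollama serve` before running any of the agents below.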
Python
# No API key needed - runs locally
from praisonaiagents import Agent
agent = Agent(
    instructions="You are a helpful assistant",
    llm="ollama/llama3.2"
)

agent.start("Explain deep learning")
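The llm strings above follow a provider/model naming convention. As an illustrative sketch (not PraisonAI's actual parsing code), splitting such a string into its two parts looks like:

```python
def split_llm(llm: str) -> tuple[str, str]:
    """Split a 'provider/model' string into (provider, model)."""
    provider, _, model = llm.partition("/")
    return provider, model

print(split_llm("ollama/llama3.2"))  # ('ollama', 'llama3.2')
```

Swapping the part after the slash is all it takes to point the same agent at a different local model.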
With Tools
from praisonaiagents import Agent

def read_file(path: str) -> str:
    """Read a file's contents."""
    with open(path, 'r') as f:
        return f.read()

agent = Agent(
    instructions="You are a code assistant",
    llm="ollama/codellama",
    tools=[read_file]
)

agent.start("Read and explain main.py")
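Tools are plain Python functions; the signature and docstring tell the model what each one does. A hypothetical second tool in the same style (`list_files` is not part of PraisonAI, just an illustration of the pattern):

```python
import os

def list_files(directory: str = ".") -> str:
    """List the files in a directory, one name per line."""
    # Sorted so the output is stable across runs
    return "\n".join(sorted(os.listdir(directory)))
```

A tool like this could be passed alongside the first one, e.g. `tools=[read_file, list_files]`, letting the agent discover files before reading them.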
Multi-Agent
from praisonaiagents import Agent, Task, Agents
researcher = Agent(
    instructions="You research topics thoroughly",
    llm="ollama/llama3.2"
)

writer = Agent(
    instructions="You write clear summaries",
    llm="ollama/qwen3"
)

task1 = Task(description="Research Python best practices", agent=researcher)
task2 = Task(description="Write a guide", agent=writer)
agents = Agents(agents=[researcher, writer], tasks=[task1, task2])
agents.start()
DeepSeek Reasoning
from praisonaiagents import Agent
agent = Agent(
    instructions="You are a problem solver",
    llm="ollama/deepseek-r1"
)

agent.start("Solve this math problem: What is 15% of 240?")
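The sample prompt has a fixed answer, which is easy to verify directly when checking the model's reasoning:

```python
# 15% of 240
print(0.15 * 240)  # 36.0
```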
CLI
# Basic prompt
python -m praisonai "Explain AI" --ollama llama3.2
# With specific model
python -m praisonai "Write code" --llm ollama/codellama
# Run agents.yaml
python -m praisonai
YAML
framework: praisonai
topic: Local AI development
agents:
  coder:
    role: Software Developer
    goal: Write clean code
    instructions: You are an expert programmer
    llm:
      model: ollama/codellama
    tasks:
      code_task:
        description: Write a Python function to sort a list
        expected_output: Clean, documented Python code
  reviewer:
    role: Code Reviewer
    goal: Review and improve code
    instructions: You review code for best practices
    llm:
      model: ollama/llama3.2
    tasks:
      review_task:
        description: Review the code and suggest improvements
        expected_output: Code review with suggestions

