Learn how to use Deepseek models with PraisonAI Agents through Ollama integration for basic queries, RAG applications, and interactive UI implementations.

Prerequisites

1. Install Ollama

First, install Ollama on your system:

curl -fsSL https://ollama.com/install.sh | sh

2. Pull Deepseek Model

Pull the Deepseek model from Ollama:

ollama pull deepseek-r1

3. Install Package

Install PraisonAI Agents:

pip install praisonaiagents

4. Set Environment

Point the OpenAI-compatible base URL at your local Ollama server:

export OPENAI_BASE_URL=http://localhost:11434/v1
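
To confirm the endpoint is reachable before wiring it into PraisonAI, you can send a single test request through Ollama's OpenAI-compatible API. This is a minimal sketch using the openai Python client (assumed to be installed); Ollama ignores the API key, but the client requires a non-empty value:

from openai import OpenAI

# Point the client at the local Ollama server.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Send one short request to the pulled deepseek-r1 model.
response = client.chat.completions.create(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(response.choices[0].message.content)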

Basic Usage

The simplest way to use Deepseek with PraisonAI Agents:

from praisonaiagents import Agent

agent = Agent(instructions="You are a helpful assistant", llm="deepseek-r1")

agent.start("Why is the sky blue?")
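
agent.start() sends the prompt to the deepseek-r1 model running in Ollama and returns the model's response; the same pattern is reused to display answers in the Streamlit example below.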

RAG Implementation

Use Deepseek with RAG capabilities for knowledge-based interactions:

from praisonaiagents import Agent

config = {
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "praison",
            "path": ".praison"
        }
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "deepseek-r1:latest",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            "ollama_base_url": "http://localhost:11434",
            "embedding_dims": 1536
        },
    },
}

agent = Agent(
    name="Knowledge Agent",
    instructions="You answer questions based on the provided knowledge.",
    knowledge=["kag-research-paper.pdf"], # Indexing
    knowledge_config=config,
    user_id="user1",
    llm="deepseek-r1"
)

agent.start("What is KAG in one line?") # Retrieval
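
As the comments indicate, the first call indexes the PDF listed in knowledge: it is embedded with nomic-embed-text and stored in the Chroma collection under .praison. Subsequent calls to agent.start() retrieve the most relevant chunks from that collection before deepseek-r1 answers, so the stored index is reused across questions.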

Interactive UI with Streamlit

Create an interactive chat interface using Streamlit:

import streamlit as st
from praisonaiagents import Agent

def init_agent():
    config = {
        "vector_store": {
            "provider": "chroma",
            "config": {
                "collection_name": "praison",
                "path": ".praison"
            }
        },
        "llm": {
            "provider": "ollama",
            "config": {
                "model": "deepseek-r1:latest",
                "temperature": 0,
                "max_tokens": 8000,
                "ollama_base_url": "http://localhost:11434",
            },
        },
        "embedder": {
            "provider": "ollama",
            "config": {
                "model": "nomic-embed-text:latest",
                "ollama_base_url": "http://localhost:11434",
                "embedding_dims": 1536
            },
        },
    }
    
    return Agent(
        name="Knowledge Agent",
        instructions="You answer questions based on the provided knowledge.",
        knowledge=["kag-research-paper.pdf"],
        knowledge_config=config,
        user_id="user1",
        llm="deepseek-r1"
    )

st.title("Knowledge Agent Chat")

if "agent" not in st.session_state:
    st.session_state.agent = init_agent()
    st.session_state.messages = []

if "messages" in st.session_state:
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

prompt = st.chat_input("Ask a question...")

if prompt:
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    with st.chat_message("assistant"):
        response = st.session_state.agent.start(prompt)
        st.markdown(response)
        st.session_state.messages.append({"role": "assistant", "content": response}) 

Running the UI

1. Install Streamlit

Install Streamlit if you haven’t already:

pip install streamlit

2. Save and Run

Save the UI code in a file (e.g., app.py) and run:

streamlit run app.py

Features

Local Deployment

Run Deepseek models locally through Ollama.

RAG Capabilities

Integrate with vector databases for knowledge retrieval.

Interactive UI

Create chat interfaces with Streamlit integration.

Custom Configuration

Configure model parameters and embedding settings.

Troubleshooting

Ollama Issues

If Ollama isn’t working:

  • Check that the Ollama service is running (see the check below)
  • Verify the model is downloaded
  • Check port availability
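
The snippet below is a minimal sketch, using only the Python standard library, that queries Ollama's local API on the default port 11434 to confirm the service is up and that deepseek-r1 has been pulled:

import json
import urllib.request

# Ollama's /api/tags endpoint lists the models that have been pulled locally.
# A connection error here usually means the Ollama service is not running.
try:
    with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
        models = [m["name"] for m in json.load(resp)["models"]]
    print("Ollama is running. Downloaded models:", models)
    if not any(name.startswith("deepseek-r1") for name in models):
        print("deepseek-r1 not found; run: ollama pull deepseek-r1")
except OSError as err:
    print("Could not reach Ollama on port 11434:", err)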

Performance Issues

If responses are slow:

  • Check system resources
  • Adjust max_tokens (see the example below)
  • Monitor memory usage
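
For example, you can reuse the config dictionary from the RAG example and lower max_tokens so each response is shorter and faster to generate; the value below is only illustrative and should be tuned to your hardware:

# Reuse the config from the RAG example, with a smaller completion budget.
config["llm"]["config"]["max_tokens"] = 2000  # reduced from 8000 to cut generation time per response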

For optimal performance, ensure your system meets the minimum requirements for running Deepseek models locally through Ollama.
