Learn how to use Deepseek models with PraisonAI Agents for various applications.
Learn how to use Deepseek models with PraisonAI Agents through Ollama integration for basic queries, RAG applications, and interactive UI implementations.
Install Ollama
First, install Ollama on your system:
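On Linux, the official install script can be used; macOS and Windows installers are available from ollama.com:

```shell
# Download and run the official Ollama install script (Linux)
curl -fsSL https://ollama.com/install.sh | sh
```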
Pull Deepseek Model
Pull the Deepseek model from Ollama:
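For example, to pull the default `deepseek-r1` tag (smaller variants such as `deepseek-r1:7b` are also available):

```shell
# Download the Deepseek R1 model weights to the local Ollama store
ollama pull deepseek-r1
```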
Install Package
Install PraisonAI Agents:
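Install the package with the `knowledge` extra if you plan to build RAG agents; plain `praisonaiagents` is enough for basic agents:

```shell
# Core package plus the optional RAG (knowledge) dependencies
pip install "praisonaiagents[knowledge]"
```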
`praisonaiagents` covers Basic Agents; the `praisonaiagents[knowledge]` extra adds RAG support. Ollama is required for local RAG Agents, and Streamlit is only needed if you want the UI.
Set Environment
Set Ollama as your base URL:
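PraisonAI Agents talks to Ollama through its OpenAI-compatible endpoint, so point the standard OpenAI variables at the local server (the API key value is a placeholder; Ollama does not check it):

```shell
# Route OpenAI-compatible requests to the local Ollama server
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_API_KEY=fake-key
```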
The simplest way to use Deepseek with PraisonAI Agents:
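A minimal sketch, assuming the environment variables above point at a running Ollama server and `deepseek-r1` has been pulled:

```python
from praisonaiagents import Agent

# Create an agent backed by the local Deepseek model served via Ollama
agent = Agent(
    instructions="You are a helpful assistant",
    llm="deepseek-r1",
)

# Run a single query and print the model's response
response = agent.start("Why is the sky blue?")
print(response)
```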
Use Deepseek with RAG capabilities for knowledge-based interactions:
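A sketch of a RAG agent, assuming Chroma as the vector store, `nomic-embed-text` as the Ollama embedding model, and a hypothetical local file `your-document.pdf` as the knowledge source; the exact `knowledge_config` schema may differ between praisonaiagents versions:

```python
from praisonaiagents import Agent

# Knowledge configuration: local Chroma vector store, Ollama-hosted
# Deepseek for generation, and an Ollama embedding model for retrieval
config = {
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "praison",
            "path": ".praison",
        },
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "deepseek-r1:latest",
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            "ollama_base_url": "http://localhost:11434",
        },
    },
}

agent = Agent(
    name="Knowledge Agent",
    instructions="You answer questions based on the provided knowledge.",
    knowledge=["your-document.pdf"],  # hypothetical placeholder file
    knowledge_config=config,
    llm="deepseek-r1",
)

agent.start("What does the document say about X?")
```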
Create an interactive chat interface using Streamlit:
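A minimal chat UI sketch built on Streamlit's session state and chat widgets; the agent call is the same as in the basic example above:

```python
import streamlit as st
from praisonaiagents import Agent

st.title("Deepseek Chat")

# Keep the conversation history across Streamlit reruns
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the history so earlier turns stay visible
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Read a new user prompt from the chat input box
if prompt := st.chat_input("Ask anything..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Query the local Deepseek model via Ollama
    agent = Agent(instructions="You are a helpful assistant", llm="deepseek-r1")
    response = agent.start(prompt)

    with st.chat_message("assistant"):
        st.markdown(response)
    st.session_state.messages.append({"role": "assistant", "content": response})
```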
Install Streamlit
Install Streamlit if you haven’t already:
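```shell
# Streamlit is only needed for the chat UI example
pip install streamlit
```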
Save and Run
Save the UI code in a file (e.g., app.py) and run:
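```shell
# Launch the Streamlit app; it opens in your browser on port 8501 by default
streamlit run app.py
```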
Run Deepseek models locally through Ollama.
Integrate with vector databases for knowledge retrieval.
Create chat interfaces with Streamlit integration.
Configure model parameters and embedding settings.
If Ollama isn’t working: confirm the server is running (`ollama serve`), that the model has been pulled (`ollama list`), and that `OPENAI_BASE_URL` points to `http://localhost:11434/v1`.
If responses are slow: try a smaller Deepseek variant (e.g., `deepseek-r1:7b`) and check that your machine has enough free RAM or GPU memory for the model.
For optimal performance, ensure your system meets the minimum requirements for running Deepseek models locally through Ollama.