Prerequisites

1. Install Package

Install required packages:

pip install "praisonaiagents[llm]" streamlit

streamlit for the UI
praisonaiagents[llm] for Gemini model access (it uses LiteLLM under the hood)

2. Setup Environment

Configure environment:

export GOOGLE_API_KEY=your-api-key

Get your API key from Google AI Studio
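Before running the app, you can confirm the key is actually visible to Python. This is a minimal stdlib-only sketch (the `check_api_key` helper is illustrative, not part of praisonaiagents):

```python
import os

def check_api_key(var="GOOGLE_API_KEY"):
    """Return True if the given environment variable is set and non-empty."""
    value = os.environ.get(var, "")
    if not value:
        print(f"{var} is not set; export it before running the app.")
        return False
    return True
```

Running this before `streamlit run` fails fast with a clear message instead of an opaque authentication error from the model call.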

3. Create File

Create a new file called app.py and add the following code:

4. Run Application

Start the Streamlit application:

streamlit run app.py

Code

import streamlit as st
from praisonaiagents import Agent

st.title("Gemini 2.0 Thinking AI Agent")

# Initialize the agent
@st.cache_resource
def get_agent():
    llm_config = {
        "model": "gemini/gemini-2.0-flash-thinking-exp-01-21",
        "response_format": {"type": "text"}
    }
    
    return Agent(
        instructions="You are a helpful assistant",
        llm=llm_config
    )

agent = get_agent()

# Create text area input field
user_question = st.text_area("Ask your question:", height=150)

# Add ask button
if st.button("Ask"):
    if user_question:
        with st.spinner('Thinking...'):
            result = agent.start(user_question)
            st.write("### Answer")
            st.write(result)
    else:
        st.warning("Please enter a question") 

Features

Interactive Chat

Simple question-and-answer interface built with Streamlit.

Knowledge Base

RAG capabilities with ChromaDB integration.

Model Integration

Uses Google’s Gemini 2.0 Flash Thinking model (gemini-2.0-flash-thinking-exp-01-21).

Session Management

Maintains chat history in session state.
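The app.py above does not yet persist messages between reruns; in Streamlit that is typically done with st.session_state, which behaves like a dict. The helper below is a hypothetical, framework-free sketch of the message-history structure such state would hold:

```python
# Hypothetical sketch of the message-history structure that
# st.session_state["messages"] would hold in a Streamlit chat app.
def add_message(state, role, content):
    """Append a chat message to the history stored in `state`."""
    state.setdefault("messages", []).append({"role": role, "content": content})
    return state["messages"]

state = {}  # stands in for st.session_state
add_message(state, "user", "Hello")
add_message(state, "assistant", "Hi! How can I help?")
```

In the real app you would replace the plain dict with st.session_state and render each entry with st.chat_message on every rerun.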

Make sure you have a valid Google API key and sufficient quota for using Gemini models.
