A Streamlit application integrating LangChain with Ollama and DeepSeek-r1:8b to create a conversational assistant.
Before running the program, ensure you have installed the following Python modules:
pip install streamlit
pip install python-dotenv
pip install langchain-ollama
pip install langchain-community
Additionally, download and run the deepseek-r1:8b model locally using Ollama.
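For example, assuming Ollama is already installed, you can fetch the model and give it a quick test from the terminal (type /bye to exit the interactive session):
ollama pull deepseek-r1:8b
ollama run deepseek-r1:8b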
import streamlit as st
from dotenv import load_dotenv # For loading environment variables
from langchain_ollama import ChatOllama
from langchain_core.prompts import (
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    ChatPromptTemplate,
    MessagesPlaceholder
)
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_message_histories import SQLChatMessageHistory
from langchain_core.output_parsers import StrOutputParser
load_dotenv('./../.env')
This block imports all necessary modules: the environment loader, the ChatOllama wrapper, and LangChain's prompt, history, and output-parsing utilities. Environment variables are loaded from a .env file.
st.title("DeepSeek-r1 Chatboat")
st.write("Chat with me!")
We set up a title and description for the chat application.
base_url = "http://localhost:11434"
model = 'deepseek-r1:8b'  # Locally running model
user_id = st.text_input("Enter your user id", "AILeaderX")
def get_session_history(session_id):
    return SQLChatMessageHistory(session_id, "sqlite:///chat_history.db")
if "chat_history" not in st.session_state:
    st.session_state.chat_history = []
if st.button("Start New Conversation"):
    st.session_state.chat_history = []
    history = get_session_history(user_id)
    history.clear()
This section defines the local model endpoint and sets up persistent chat history backed by a SQLite database. The user ID is taken as input, and the Start New Conversation button clears both the on-screen history and the stored session.
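To verify that messages are actually being persisted, you can read back whatever is stored for a session id. This is a standalone sketch, not part of the app:
# Standalone check: print the messages persisted for a given session id.
# "AILeaderX" is just the default user id from the text input above.
stored = get_session_history("AILeaderX")
for msg in stored.messages:              # a list of BaseMessage objects
    print(f"{msg.type}: {msg.content}")  # e.g. "human: Hello"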
for message in st.session_state.chat_history:
    with st.chat_message(message['role']):
        st.markdown(message['content'])
The chat history is displayed using Streamlit’s chat message functionality.
llm = ChatOllama(base_url=base_url, model=model)
system = SystemMessagePromptTemplate.from_template("You are a helpful assistant.")
human = HumanMessagePromptTemplate.from_template("{input}")
messages = [system, MessagesPlaceholder(variable_name='history'), human]
prompt = ChatPromptTemplate(messages=messages)
chain = prompt | llm | StrOutputParser()
runnable_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key='input',
    history_messages_key='history'
)
This block instantiates the language model via ChatOllama, assembles the prompt from a system message, a history placeholder, and the human input, and wraps the resulting chain with SQL-backed message history.
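You can sanity-check the bare chain (without the history wrapper) by invoking it once with an explicitly empty history list; this throwaway test is not part of the app:
# Throwaway test: the MessagesPlaceholder expects a list of messages, so pass [].
reply = chain.invoke({'input': 'Say hello in one sentence.', 'history': []})
print(reply)  # a plain string, thanks to StrOutputParser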
def chat_with_llm(session_id, user_input):
    for output in runnable_with_history.stream({'input': user_input}, config={'configurable': {'session_id': session_id}}):
        yield output
user_input = st.chat_input("What is up?")
if user_input:
    st.session_state.chat_history.append({'role': 'user', 'content': user_input})
    with st.chat_message("user"):
        st.markdown(user_input)
    with st.chat_message("assistant"):
        response = st.write_stream(chat_with_llm(user_id, user_input))
    st.session_state.chat_history.append({'role': 'assistant', 'content': response})
This section handles the chat input and displays the streamed output from the language model, updating the chat history accordingly.
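The same history-aware runnable can also be exercised outside Streamlit. In this minimal sketch the session id 'test-user' is made up; the second call should recall the first thanks to the SQLite-backed history:
# Standalone sketch; 'test-user' is an arbitrary session id for testing.
config = {'configurable': {'session_id': 'test-user'}}
print(runnable_with_history.invoke({'input': 'My name is Sam.'}, config=config))
print(runnable_with_history.invoke({'input': 'What is my name?'}, config=config))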
import streamlit as st
from dotenv import load_dotenv
from langchain_ollama import ChatOllama
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_message_histories import SQLChatMessageHistory
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import (
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    ChatPromptTemplate,
    MessagesPlaceholder
)
load_dotenv('./../.env')
st.title("deepseek-r1 Chatboat")
st.write("Chat with me!")
base_url = "http://localhost:11434"
model = 'deepseek-r1:8b'  # Locally running model
user_id = st.text_input("Enter your user id", "AILeaderX")
def get_session_history(session_id):
    return SQLChatMessageHistory(session_id, "sqlite:///chat_history.db")
if "chat_history" not in st.session_state:
    st.session_state.chat_history = []
if st.button("Start New Conversation"):
    st.session_state.chat_history = []
    history = get_session_history(user_id)
    history.clear()
for message in st.session_state.chat_history:
    with st.chat_message(message['role']):
        st.markdown(message['content'])
### LLM Setup
llm = ChatOllama(base_url=base_url, model=model)
system = SystemMessagePromptTemplate.from_template("You are a helpful assistant.")
human = HumanMessagePromptTemplate.from_template("{input}")
messages = [system, MessagesPlaceholder(variable_name='history'), human]
prompt = ChatPromptTemplate(messages=messages)
chain = prompt | llm | StrOutputParser()
runnable_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key='input',
    history_messages_key='history'
)
def chat_with_llm(session_id, user_input):
    for output in runnable_with_history.stream({'input': user_input}, config={'configurable': {'session_id': session_id}}):
        yield output
user_input = st.chat_input("What is up?")
if user_input:
    st.session_state.chat_history.append({'role': 'user', 'content': user_input})
    with st.chat_message("user"):
        st.markdown(user_input)
    with st.chat_message("assistant"):
        response = st.write_stream(chat_with_llm(user_id, user_input))
    st.session_state.chat_history.append({'role': 'assistant', 'content': response})
This is the complete code for the Streamlit chat application using DeepSeek-r1:8b via Ollama and LangChain.
streamlit run ai_chatbot.py
Open your terminal, navigate to the project directory, and run the command above.
Once the app starts, open your browser:
Local URL: http://localhost:8501
Network URL: http://192.168.1.11:8501
This application demonstrates the integration of advanced conversational AI using LangChain, Ollama, and the DeepSeek-r1:8b model. By leveraging Streamlit, we provide a user-friendly interface that enables real-time chat interactions and message history management. This approach highlights how combining multiple AI components can result in a powerful, interactive chat assistant.