Crafting No-Code Local RAG Chatbots with LangFlow and Ollama
Do you remember when developing an intelligent chatbot meant investing months into coding?
While frameworks like LangChain have significantly simplified the process, the need to write hundreds of lines of code can still be a major barrier for non-programmers.
But is there an easier way?
That’s when I stumbled upon “LangFlow,” an innovative open-source tool that extends the capabilities of LangChain’s Python version. LangFlow allows you to build AI applications without writing a single line of code. It provides an intuitive canvas where you can effortlessly drag and connect components to create your chatbot.
In this guide, we’ll explore how to use LangFlow to quickly develop a smart AI chatbot prototype. For the backend, we’ll leverage Ollama to handle embeddings and Large Language Models, ensuring that your application runs locally without incurring any costs. Finally, we’ll demonstrate how to convert this flow into a fully functional Streamlit application with minimal coding effort.
Overview of the Retrieval-Augmented Generation Pipeline: LangChain, LangFlow, and Ollama
In this project, we’ll be creating an AI chatbot named “Dinnerly — Your Healthy Dish Planner,” designed to suggest nutritious dish recipes sourced from a recipe PDF file using the power of Retrieval-Augmented Generation (RAG).
Before we delve into the step-by-step process of bringing Dinnerly to life, let’s first take a closer look at the key components that will make this project possible.
Understanding Retrieval-Augmented Generation (RAG)
Retrieval-Augmented Generation (RAG) is a technique that enhances Large Language Models (LLMs) by providing them with relevant information from external sources. This additional context enables LLMs to generate more accurate, context-aware, and up-to-date responses.
The RAG pipeline typically involves the following key steps (a short code sketch of the retrieval step follows the list):
- Load Document: Start by loading the document or data source that will be used for retrieval.
- Split into Chunks: Divide the document into smaller, manageable parts to ensure more precise processing.
- Create Embeddings: Convert these chunks into vector representations using embedding techniques.
- Store in Vector Database: Save these vectors in a vector database for efficient and quick retrieval.
- User Interaction: Receive queries or input from the user, which are then converted into embeddings.
- Semantic Search in VectorDB: Perform a semantic search in the vector database using the user’s query to find the most relevant information.
- Retrieve and Process Responses: Retrieve the most pertinent chunks, pass them through an LLM, and generate a comprehensive response.
- Deliver Answer to User: Finally, present the generated response to the user, ensuring that the output is both accurate and contextually relevant.
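To make the retrieval steps concrete, here is a minimal, framework-free sketch of steps 5 and 6. It assumes an embedding function and pre-embedded chunks already exist; later sections use LangChain components for this instead:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Semantic closeness between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, chunk_vecs: list, chunks: list, k: int = 3) -> list:
    # Rank every stored chunk by similarity to the query embedding and keep the top k
    scores = [cosine_similarity(query_vec, v) for v in chunk_vecs]
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]
```

The retrieved chunks are then inserted into the LLM's prompt as context, which is what makes the final answer grounded in your documents rather than in the model's training data alone.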
This structured approach allows RAG to bridge the gap between static data sources and dynamic, real-time query handling, making it an indispensable tool for creating intelligent chatbots and other AI-driven applications.
LangChain: The Foundation for Advanced AI Applications
LangChain is an open-source framework specifically designed to harness the power of Large Language Models (LLMs), making it easier to develop a wide range of Generative AI (GenAI) applications, from chatbots to summarization tools and beyond.
At the heart of LangChain is the concept of “chaining” together different components. This modular approach allows developers to simplify complex AI tasks by linking various functionalities, enabling the creation of more sophisticated and tailored AI solutions. Whether you’re building a chatbot, automating content creation, or developing data-driven applications, LangChain provides the flexibility and tools needed to bring your ideas to life.
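To give a flavor of what "chaining" looks like in code, here is a minimal sketch that pipes a prompt template into a chat model and an output parser. It assumes the local llama2 model set up later in this guide and `pip install langchain-community`:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.chat_models import ChatOllama

# Chain a prompt template, a chat model, and an output parser into one pipeline
prompt = ChatPromptTemplate.from_template("Suggest a healthy recipe that uses {ingredient}.")
chain = prompt | ChatOllama(model="llama2") | StrOutputParser()

print(chain.invoke({"ingredient": "lentils"}))
```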
LangFlow: A Visual Interface for Building LangChain Applications
LangFlow is a specialized web tool designed to complement LangChain by offering an intuitive, no-code environment. With LangFlow, users can easily drag and drop components to construct and test LangChain-based applications, eliminating the need for manual coding.
However, to effectively use LangFlow, it’s important to have a foundational understanding of LangChain’s workings and its various components. This knowledge will empower you to design and optimize your AI application flow within LangFlow, ensuring that your projects are both efficient and powerful.
Ollama: Your Gateway to Open-Source LLMs
For those looking to quickly and easily start working with open-source Large Language Models (LLMs), Ollama is one of the easiest options available. It provides seamless access to powerful models, including Llama 2 and Mistral, and is approachable for beginners and experts alike.
Ollama supports a wide range of models, all of which can be explored on their library page at ollama.ai/library. Whether you’re developing an AI application or experimenting with different LLMs, Ollama simplifies the process, allowing you to focus on innovation rather than setup.
Setting Up Ollama: Installation and Model Deployment
Installing Ollama
To get started with Ollama, visit the Ollama download page, select the version compatible with your operating system, and proceed with the installation.
Once Ollama is installed, open your command terminal to download and run the necessary models locally on your machine. This setup allows you to run the entire application without relying on cloud services.
For this project, we’ll be utilizing Llama2 as our Large Language Model (LLM) and “nomic-embed-text” as our embedding model. The “nomic-embed-text” model is a robust, open-source embedding solution with a large context window, perfect for processing complex inputs.
To set up the models, execute the following commands in your terminal:
ollama serve
ollama pull llama2
ollama pull nomic-embed-text
ollama run llama2
These commands start the Ollama server, download the Llama2 LLM and the “nomic-embed-text” embedding model, and then run Llama2, allowing you to deploy your AI application fully locally. Note that ollama serve occupies its terminal, so run the subsequent commands in a separate terminal window (on macOS and Windows, the desktop app starts the server for you automatically).
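To confirm that both models are reachable before building the flow, you can hit Ollama's local REST API (11434 is Ollama's default port; the snippet assumes `pip install requests`):

```python
import requests

BASE = "http://localhost:11434"

# Sanity-check the LLM: a one-off generation (stream=False returns a single JSON object)
gen = requests.post(
    f"{BASE}/api/generate",
    json={"model": "llama2", "prompt": "Name one healthy snack.", "stream": False},
)
print(gen.json()["response"])

# Sanity-check the embedding model: returns a vector under the "embedding" key
emb = requests.post(
    f"{BASE}/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "grilled salmon salad"},
)
print(len(emb.json()["embedding"]))
```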
Setting Up LangFlow: Installation and Configuration
Prerequisites
Before diving into LangFlow, ensure that Python is installed on your computer. The version must be between 3.9 and 3.11; versions above 3.11 are not yet supported.
Installing LangFlow
Now, let’s proceed with installing LangFlow. It’s recommended to do this within a virtual environment to keep your dependencies organized and isolated. You can use Conda for this setup (the commands below are the same on macOS, Linux, and Windows). Follow these steps to create a virtual environment named “langflow” with Python 3.11:
conda create -n langflow python=3.11
conda activate langflow
pip install langflow
If you don’t have Conda installed, you can create a virtual environment directly with Python by running the following commands:
python -m venv langflow
source langflow/bin/activate
pip install langflow
Launching LangFlow
Once the installation is complete, starting LangFlow is straightforward. Simply enter the following command in your terminal:
langflow run
This will launch LangFlow, and the terminal will provide you with a URL (typically something like http://127.0.0.1:7860). Copy this URL, paste it into your web browser, and you’ll be greeted by the LangFlow interface, displaying all your projects.
From here, you can begin building and managing your LangChain-based applications with ease, using the drag-and-drop interface that LangFlow provides.
Designing Your Chatbot’s Flow: Step-by-Step Guide
Now that you’re ready to design your first flow, let’s dive into the process!
Getting Started with a New Project
- Create a New Project: Begin by clicking “New project.” This will open a blank canvas where you can start building your chatbot.
- Explore Components: On the left-hand side of the interface, you’ll find a variety of components that you can drag and drop into your workspace.
Building the Chatbot Flow
For our chatbot, which will answer questions from a PDF file using the RAG pipeline, you’ll need to incorporate specific components:
- PDF Loader: Use the “PyPDFLoader” component to load your PDF document. Make sure to input the file path correctly.
- Text Splitter: Select the “RecursiveCharacterTextSplitter” to break down the PDF into manageable chunks. The default settings should suffice.
- Text Embedding Model: Choose “OllamaEmbeddings” to leverage the free, open-source embedding model.
- Vector Database: Opt for “FAISS” to store the embeddings and perform efficient vector searches.
- LLM for Generating Responses: Use “ChatOllama” and specify “llama2” as the model for generating the chatbot’s responses.
- Conversation Memory: Implement “ConversationBufferMemory” to allow the chatbot to retain conversation history, which is useful for handling follow-up questions.
- Conversation Retrieval Chain: This critical component connects the LLM, memory, and retrieved texts to generate coherent responses. Use “ConversationalRetrievalChain” for this purpose (a code sketch of the full pipeline follows this list).
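If you are curious what this assembled flow corresponds to in plain LangChain, here is a minimal, hedged sketch. The file path and chunk settings are illustrative; it assumes the Ollama models pulled earlier plus `pip install langchain langchain-community pypdf faiss-cpu`:

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.chat_models import ChatOllama
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

# Load the recipe PDF and split it into chunks (path and sizes are illustrative)
docs = PyPDFLoader("recipes.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)

# Embed the chunks with nomic-embed-text and index them in FAISS
vectordb = FAISS.from_documents(chunks, OllamaEmbeddings(model="nomic-embed-text"))

# Tie the LLM, the retriever, and the conversation memory together
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOllama(model="llama2"),
    retriever=vectordb.as_retriever(),
    memory=memory,
)

print(chain({"question": "Suggest a healthy dinner dish."})["answer"])
```

Each line here maps onto one node on the LangFlow canvas, which is a useful mental model when debugging a flow that won't compile.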
Assembling the Flow
- Drag and Drop: Drag each of these components onto the canvas and configure them. Input necessary details like the PDF file path and the LLM model name, while leaving other settings at their default values.
- Connect Components: Link the components together to form a cohesive flow. Ensure all parts are correctly connected, reflecting the RAG pipeline.
Compiling and Testing
- Compile the Flow: Once all components are in place, click the “lightning” button in the bottom-right corner to compile the flow. A green button indicates that the compilation was successful.
- Test Your Chatbot: After a successful compile, click the “chatbot” icon to interact with your newly created chatbot and see it in action.
Additional Tips
- Saving Your Flow: You can save your completed flow as a JSON file or find it under “My Collection” for future edits and access.
- Explore Pre-Built Examples: For inspiration and a head start, explore pre-built examples:
- LangFlow Store: Access examples in the LangFlow Store, though you’ll need an API key.
- LangFlow GitHub: Download examples from the LangFlow GitHub page and upload them into your LangFlow UI using the “upload” button.
- Alternative Setup with OpenAI: If local setup isn’t for you, consider using OpenAI to build your RAG pipeline. Just make sure you have your OpenAI API key ready for integration; a sketch of the swap follows this list.
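As a rough sketch of that swap (the model name is illustrative, and it assumes `pip install langchain-openai` with an `OPENAI_API_KEY` environment variable set), the Ollama components map directly onto OpenAI equivalents:

```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Drop-in replacements for the local components used above
llm = ChatOpenAI(model="gpt-3.5-turbo")  # instead of ChatOllama(model="llama2")
embeddings = OpenAIEmbeddings()          # instead of OllamaEmbeddings(model="nomic-embed-text")
```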
This guide should help you design a functional chatbot flow in LangFlow, leveraging the power of RAG and other AI components.
Integrating Your LangFlow Chatbot into a Streamlit Application
Once you’ve perfected your flow in LangFlow, the next step is to integrate it into a Streamlit application. This will allow you to create a user-friendly interface for interacting with your chatbot.
Setting Up Dependencies
First, install the necessary dependencies for your Streamlit application. Run the following commands in your terminal:
pip install streamlit
pip install langflow
pip install langchain-community
Fetching and Integrating the LangFlow Code Snippet
- Create a Python File: Create a new file named `app.py`.
- Get the LangFlow Code Snippet: In the LangFlow UI, click the “Code” button, navigate to the “Python API” tab, and copy the provided code snippet. Paste this code into your `app.py` file:
import requests
from typing import Optional

BASE_API_URL = "http://127.0.0.1:7860/api/v1/process"
FLOW_ID = "d9392262-a912-42b4-8582-cc9e48894a00"

# The component IDs below are generated per flow; copy the TWEAKS block from
# your own "Python API" tab rather than reusing these.
TWEAKS = {
    "VectorStoreAgent-brRPx": {},
    "VectorStoreInfo-BS24v": {},
    "OpenAIEmbeddings-lnfRZ": {},
    "RecursiveCharacterTextSplitter-bErPe": {},
    "WebBaseLoader-HLOqm": {},
    "ChatOpenAI-aQOv0": {},
    "FAISS-o0WIf": {}
}

def run_flow(inputs: dict, flow_id: str, tweaks: Optional[dict] = None) -> dict:
    # Post the inputs (and any component tweaks) to the LangFlow API
    # and return the JSON response
    api_url = f"{BASE_API_URL}/{flow_id}"
    payload = {"inputs": inputs}
    if tweaks:
        payload["tweaks"] = tweaks
    response = requests.post(api_url, json=payload)
    return response.json()
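Before wiring up the UI, you can smoke-test the endpoint by temporarily adding a direct call to `run_flow` (this assumes LangFlow is still running at the URL above):

```python
# Quick smoke test: send a question straight to the flow and print the raw JSON
if __name__ == "__main__":
    result = run_flow({"input": "Suggest a healthy dinner idea."}, flow_id=FLOW_ID, tweaks=TWEAKS)
    print(result)
```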
Building the Chat Function
In the same `app.py` file, add the following function to handle user interactions and responses:
import streamlit as st

def chat(prompt: str):
    with current_chat_message:
        # Disable the chat input while a response is being generated
        st.session_state.disabled = True
        st.session_state.messages.append(("human", prompt))
        with st.chat_message("human"):
            st.markdown(prompt)
        with st.chat_message("ai"):
            # Flatten the conversation history into a single query for the flow
            history = "\n".join([f"{role}: {msg}" for role, msg in st.session_state.messages])
            query = f"{history}\nAI:"
            inputs = {"input": query}
            output = run_flow(inputs, flow_id=FLOW_ID, tweaks=TWEAKS)
            try:
                output = output["result"]["output"]
            except Exception:
                output = f"Application error: {output}"
            placeholder = st.empty()
            response = ""
            # Reveal the answer character by character to mimic streaming
            for tokens in output:
                response += tokens
                with placeholder:
                    st.markdown(response + "▌")
            with placeholder:
                st.markdown(response)
        st.session_state.messages.append(("ai", response))
    # Re-enable the input and rerun the script so the widget state updates
    st.session_state.disabled = False
    st.rerun()
Crafting the User Interface
Now, build a simple Streamlit user interface to interact with your chatbot:
import streamlit as st

st.set_page_config(page_title="Dinnerly")
st.title("Welcome to Dinnerly: Your Healthy Dish Planner")

system_prompt = "You're a helpful assistant that suggests healthy dish recipes to users."
if "messages" not in st.session_state:
    st.session_state.messages = [("system", system_prompt)]
if "disabled" not in st.session_state:
    st.session_state.disabled = False

# Greeting message, shown on every rerun
with st.chat_message("ai"):
    st.markdown("Hi! I'm your healthy dish planner. Happy to help you prepare healthy and yummy dishes!")

# Replay the conversation so far, skipping the hidden system prompt
for role, message in st.session_state.messages:
    if role == "system":
        continue
    with st.chat_message(role):
        st.markdown(message)

# Placeholder container that chat() writes into
current_chat_message = st.container()
prompt = st.chat_input("Ask your question here...", disabled=st.session_state.disabled)
if prompt:
    chat(prompt)
Running Your Streamlit App
To see your chatbot in action, run the Streamlit app with the following command:
streamlit run app.py
This will start the Streamlit server and open your browser to the application. You can now interact with your chatbot, which will help users with healthy dish recommendations based on the PDF content.
Tips
- Testing Different Flows: You can reuse the same code for different flows by simply changing the `FLOW_ID` constant.
- Exploring Examples: Use pre-built examples from LangFlow or the LangFlow GitHub page for inspiration and guidance.
With these steps, you’ll have a functional chatbot integrated into a Streamlit application, ready to assist users with their recipe inquiries!
Final Reflections
In this post, we’ve successfully built a smart chatbot using the Retrieval-Augmented Generation (RAG) approach. By leveraging LangFlow, we crafted the RAG pipeline with no-code convenience, utilized open-source models for embeddings and LLM processing, and integrated it all into a Streamlit application.
LangFlow’s no-code approach has been particularly impressive, simplifying the process of building and prototyping AI applications. It offers a streamlined way to create functional prototypes without delving into complex coding, making it a promising tool for rapid development.
However, it’s important to acknowledge that some components are still evolving. This can lead to occasional issues with functionality, and the lack of detailed troubleshooting guidance can be a challenge. Additionally, providing underlying Python code for further customization could enhance the flexibility of LangFlow.
Despite these areas for improvement, LangFlow remains a valuable asset for quickly bringing AI prototypes to life, and its no-code interface could significantly impact how we develop and test AI applications in the future.