RAG Chatbot Application 🤖
Introduction
This project implements a context-aware Retrieval-Augmented Generation (RAG) chatbot using Streamlit. The chatbot is powered by the Mistral-7B-Instruct-v0.3 language model, with ChromaDB as the vector database.
Table of Contents
- Installation
- Usage
- Features
- Dependencies
- Configuration
- Contributor
Installation
To install and set up the project, follow these steps:
- Clone the repository.
```bash
git clone https://github.com/todap/RAG.git
```
- Navigate to the project directory.
- Install the required dependencies.
```bash
pip install -r requirements.txt
```
- Set your Hugging Face token in the `app.py` file:
```python
HF_TOKEN = st.secrets["HF_TOKEN"]
```
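In a Streamlit app, secrets read via `st.secrets` usually live in a `.streamlit/secrets.toml` file. A minimal sketch of that file is shown below; the placeholder token value is illustrative only.

```toml
# .streamlit/secrets.toml (illustrative placeholder; use your own Hugging Face token)
HF_TOKEN = "hf_xxxxxxxxxxxxxxxxxxxx"
```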
Usage
- Run the main application locally (a typical Streamlit invocation, assuming `app.py` is the entrypoint, is shown below):
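```bash
# Assumes app.py (the file referenced in Installation and Configuration) is the Streamlit entrypoint.
streamlit run app.py
```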
### OR
- Use the deployed version on Streamlit: https://team-qubits.streamlit.app/
- Interact with the chatbot via the web interface.
- Upload documents using the Document Management section and process them for use within the chatbot.
Features
- Contextual Responses: The chatbot retrieves relevant documents from a knowledge base and uses them to provide contextual responses to user queries (see the retrieval sketch after this list).
- Conversational History: The chatbot maintains a conversation history, allowing it to reference and build upon previous interactions.
- Document Management: The application provides a document management interface, allowing users to upload and store new documents in the knowledge base.
- Feedback Mechanism: Users can provide feedback on the chatbot’s responses, which is used to improve the quality of future responses.
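As a rough illustration of how contextual responses work, the sketch below indexes a couple of passages in ChromaDB and retrieves the most relevant one for a query. The collection name, example documents, and prompt template are illustrative assumptions, not the app's actual code.

```python
# Minimal retrieval sketch (illustrative only).
import chromadb

client = chromadb.Client()                      # in-memory ChromaDB client
collection = client.create_collection("docs")   # hypothetical collection name

# Index example passages; the real app indexes documents uploaded by the user.
collection.add(
    documents=[
        "The chatbot is powered by the Mistral-7B-Instruct-v0.3 language model.",
        "ChromaDB stores document embeddings for similarity search.",
    ],
    ids=["doc1", "doc2"],
)

# Retrieve the passage most relevant to the user's question.
question = "Which language model powers the chatbot?"
results = collection.query(query_texts=[question], n_results=1)
context = results["documents"][0][0]

# The retrieved context is placed into the prompt sent to the language model.
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)
```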
Dependencies
The project relies on the following major dependencies:
- streamlit
- huggingface_hub
- langchain
- chromadb
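For a sense of how `huggingface_hub` can be used to query the hosted model, a hedged sketch is shown below; the exact client, parameters, and prompt handling in `app.py` may differ.

```python
# Illustrative call to the hosted model via huggingface_hub's InferenceClient.
# The model ID matches the one named in the introduction; the token is read
# from Streamlit secrets as configured above. Parameters are placeholders.
import streamlit as st
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="mistralai/Mistral-7B-Instruct-v0.3",
    token=st.secrets["HF_TOKEN"],
)

response = client.chat_completion(
    messages=[{"role": "user", "content": "Summarize what a RAG chatbot does."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```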
Configuration
- Store your Hugging Face token in the Streamlit secrets file as `HF_TOKEN`.
- Additional configuration options may be found within the `app.py` file.
Contributor