Welcome to the Babbler Bot QA Model repository. This sub-repo is responsible for handling the Question Answering (QA) functionality.
The process flow involves:
- Embedding Model: the embedding model first converts the books into vector embeddings and stores them in the vector database.
- User Query Processing: when a user asks a question, the question is embedded with the same model. The system then searches the vector database for similar embeddings and retrieves the top 3 most relevant documents.
- Language Model (LLM): the retrieved documents, together with the user's question, are passed to the language model, which generates the answer.
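The flow above can be sketched end to end. This is a toy, self-contained illustration: a character-frequency "embedding" and an in-memory list stand in for Instructor Large and ChromaDB, and the final prompt string stands in for the call to Llama-2. Every function name here is illustrative, not part of the repo's API.

```python
import math

def embed(text: str) -> list[float]:
    """Stand-in embedding: character-frequency vector over a-z (NOT Instructor Large)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity, the usual metric for comparing embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_top_k(question: str, corpus: list[str], k: int = 3) -> list[str]:
    """Embed the question and return the k most similar documents (the vector-DB lookup)."""
    q = embed(question)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Assemble the context-plus-question prompt that would be sent to the LLM."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In the actual pipeline, `embed` corresponds to Instructor Large, `retrieve_top_k` to a ChromaDB similarity search with `k=3`, and the prompt from `build_prompt` is what Llama-2 consumes.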
Tech stack:

- Embedding Model: Instructor Large
- Vector Database: ChromaDB
- Language Model: Llama-2
- Backend Framework: FastAPI
- LLM Orchestration: LangChain