Cohere's Command R offers high accuracy on retrieval-augmented generation (RAG) and tool-use tasks. It provides low latency and high throughput, with a long 128k-token context window. It also demonstrates strong multilingual capabilities across 10 key languages.
In this Studio, we are building a fully self-hosted "Chat with your documents" RAG application, using:
- Cohere's Command R, served locally with Ollama
- Qdrant vector database (self-hosted)
- FastEmbed for generating embeddings
Below is a quick demo of what we are building:
https://youtu.be/aLLw3iCPhtM
Run main notebook
You can start by running the main.ipynb notebook, which contains the essential code to set up a query engine for interacting with the documents you provide.
Getting started in a notebook
Chat with your documents app
You can also interact with your docs using a nice UI we've created with Streamlit, served directly from the Studio.
Follow the steps below to launch the app:
1. Click on the Streamlit plugin:
Launching the streamlit plugin
2. Then create a new app by clicking on the "New App" button on the top right (or clicking on "select a Studio file"):
Launching a new app
3. Now select the app.py file, which contains the Streamlit application, and click on "Run":
Selecting the streamlit app code file
4. And there you go, you're now all set to chat with your documents!
Chat with your documents Streamlit app
Key architecture components
Building a robust RAG application involves many moving parts. The architecture diagram below illustrates some of the key components and how they interact with each other, followed by detailed descriptions of each component. We've used:
- LlamaIndex for orchestration
- Streamlit for creating the chat UI
- Cohere's Command R as the LLM
- Qdrant vector database (self-hosted)
- FastEmbed for generating embeddings
A chat with your docs RAG application
1. Custom knowledge base
Custom Knowledge Base: A collection of relevant and up-to-date information that serves as a foundation for RAG. It can be a database, a set of documents, or a combination of both. In this case it's a PDF provided by you that will be used as a source of truth to provide answers to user queries.
2. Chunking
Chunking is the process of breaking down a large input text into smaller pieces. This ensures that the text fits the input size of the embedding model and improves retrieval efficiency.
The following code loads PDF documents from a directory specified by the user, using LlamaIndex's SimpleDirectoryReader:
```python
from llama_index.core import SimpleDirectoryReader

# load data
loader = SimpleDirectoryReader(
    input_dir=input_dir_path,
    required_exts=[".pdf"],
    recursive=True,
)
docs = loader.load_data()
```
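By default, LlamaIndex chunks these documents when the index is built. If you want explicit control over chunk size and overlap, you can split the documents into nodes yourself. The sketch below is illustrative only; the chunk_size and chunk_overlap values are assumptions, not the settings used in this Studio:

```python
from llama_index.core.node_parser import SentenceSplitter

# Illustrative chunking step: split the loaded documents into smaller nodes.
# chunk_size / chunk_overlap are example values, tune them for your documents.
splitter = SentenceSplitter(chunk_size=512, chunk_overlap=50)
nodes = splitter.get_nodes_from_documents(docs)
print(f"Split {len(docs)} documents into {len(nodes)} chunks")
```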
3. Embeddings model
A technique for representing text data as numerical vectors, which can be input into machine learning models. The embedding model is responsible for converting text into these vectors. We will use BAAI/bge-large-en-v1.5 as the embedding model, served via FastEmbed.
FastEmbed is a lightweight library with minimal dependencies, ideal for serverless runtimes. It prioritizes speed using the faster ONNX Runtime and offers accuracy surpassing OpenAI Ada-002, with support for various models, including multilingual ones.
FastEmbed integrates seamlessly with the Qdrant vector database, which we are going to use here.
```python
from llama_index.embeddings.fastembed import FastEmbedEmbedding

embed_model = FastEmbedEmbedding(model_name="BAAI/bge-large-en-v1.5")
```
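As a quick sanity check (not part of the Studio code), you can embed a sample sentence and confirm the vector dimensionality; bge-large-en-v1.5 produces 1024-dimensional vectors:

```python
# Embed a sample sentence and inspect the vector size (illustrative check only).
sample_vector = embed_model.get_text_embedding(
    "Retrieval Augmented Generation grounds LLMs in your own data."
)
print(len(sample_vector))  # expected: 1024 for bge-large-en-v1.5
```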
4. Vector databases
A collection of pre-computed vector representations of text data for fast retrieval and similarity search, with capabilities like CRUD operations, metadata filtering, and horizontal scaling.
In this Studio we are using the Qdrant vector database, self-hosted on the Studio, so your data stays completely on premise.
```python
import uuid

import qdrant_client
from llama_index.core import Settings, StorageContext, VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Creating an index over the loaded data
Settings.embed_model = embed_model

client = qdrant_client.QdrantClient(host="localhost", port=6333)

unique_collection_name = f"document_chat_{uuid.uuid4()}"
vector_store = QdrantVectorStore(client=client, collection_name=unique_collection_name)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

index = VectorStoreIndex.from_documents(
    docs,
    storage_context=storage_context,
)
```
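If you want to inspect the similarity search directly, you can query the index with a plain retriever before wiring up the full query engine. This is an illustrative snippet, and the query string is just an example; the query engine in section 6 wraps this retrieval step for you:

```python
# Retrieve the top-k most similar chunks for a query directly from the index (illustrative).
retriever = index.as_retriever(similarity_top_k=3)
retrieved_nodes = retriever.retrieve("What is this document about?")
for node_with_score in retrieved_nodes:
    # Print the similarity score and the first 100 characters of each retrieved chunk
    print(node_with_score.score, node_with_score.node.get_content()[:100])
```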
5. User chat interface
A user-friendly interface that allows users to interact with the RAG system, providing an input query and receiving the output. We have built a Streamlit app to do this. The code for it can be found in app.py
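For reference, here is a minimal sketch of how such a chat UI can be wired together with Streamlit. It is simplified compared to the actual app.py and assumes the `query_engine` built in the sections below is already available:

```python
import streamlit as st

# Minimal chat UI sketch (illustrative; the real app.py in this Studio is more complete).
# Assumes `query_engine` has been constructed as shown in the query engine section.
st.title("Chat with your documents")

# Keep the conversation history across reruns
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay previous messages
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Handle a new user question
if prompt := st.chat_input("Ask a question about your documents"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.chat_message("assistant"):
        answer = str(query_engine.query(prompt))  # RAG query against the index
        st.markdown(answer)
    st.session_state.messages.append({"role": "assistant", "content": answer})
```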
6. Query engine
The query engine takes a query string, uses it to fetch relevant context, and then sends both together as a prompt to the LLM to generate the final natural language response. The LLM used here is Cohere's Command R. The final response is displayed in the user interface.
```python
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

# setup the llm
llm = Ollama(model="command-r", request_timeout=60.0)

# Create the query engine
Settings.llm = llm
query_engine = index.as_query_engine()
```
7. Prompt template
A custom prompt template is used to refine the response from the LLM and to include the retrieved context:
```python
from llama_index.core import PromptTemplate

qa_prompt_tmpl_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information above I want you to think step by step to answer the query "
    "in a crisp manner, in case you don't know the answer say 'I don't know!'.\n"
    "Query: {query_str}\n"
    "Answer: "
)

qa_prompt_tmpl = PromptTemplate(qa_prompt_tmpl_str)
query_engine.update_prompts({"response_synthesizer:text_qa_template": qa_prompt_tmpl})

response = query_engine.query("What is RAFT algorithm?")
print(response)
```
Conclusion
In this Studio, we developed a completely self-hosted Retrieval Augmented Generation (RAG) application that allows you to "Chat with your documents" without compromising your data privacy. Along the way, we learned about LlamaIndex, the go-to library for building RAG applications, and Cohere's Command R model, designed for RAG and tool use, served locally using Ollama.
We also learned how to self-host a vector database, using Qdrant VectorDB, and used FastEmbed, a fast, lightweight library, for embedding generation.
We also explored the concept of prompt engineering to refine and steer the responses of our LLM. These techniques can similarly be applied to anchor your LLM to various knowledge bases, such as documents, PDFs, videos, and more.
LlamaIndex has a variety of data loaders; you can learn more about them here.
Next Steps
As we continue to enhance the system, there are several promising directions to explore:
- To further improve the accuracy of the retrieved context, we can use a dedicated reranker (see the sketch after this list).
- Optimizing the ingestion process is another critical area, for example by utilizing parallel ingestion with LlamaIndex.
- We can also experiment with different chunking strategies by trying different chunk sizes and overlaps between them.
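As an example of the first point, LlamaIndex ships a SentenceTransformerRerank postprocessor that re-scores the retrieved chunks with a cross-encoder before they reach the LLM. The sketch below is only an illustration; the model name and top_n value are assumptions, and it requires the sentence-transformers package to be installed:

```python
from llama_index.core.postprocessor import SentenceTransformerRerank

# Illustrative reranking setup: retrieve a larger candidate set, then keep only the
# chunks the cross-encoder scores highest. Model name and top_n are example choices.
reranker = SentenceTransformerRerank(
    model="cross-encoder/ms-marco-MiniLM-L-2-v2",
    top_n=3,
)

query_engine = index.as_query_engine(
    similarity_top_k=10,             # fetch a broader candidate set from Qdrant
    node_postprocessors=[reranker],  # rerank and pass only the best 3 chunks to the LLM
)
```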
By pursuing these avenues, we aim to continuously improve the RAG system we have built!