Introduction
When it comes to equipping Large Language Models (LLMs) with long-term memory, the prevalent approach is a Retrieval Augmented Generation (RAG) solution, with a vector database acting as the storage mechanism for the long-term memory. This raises the question: Can we achieve the same results without vector databases?
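To make the baseline concrete, here is a minimal sketch of the retrieval step in a vector-database RAG setup. The passages and their 3-dimensional embeddings are toy values invented for illustration; a real system would use a learned embedding model and a proper vector database.

```python
import math

# Toy in-memory "vector store": passage text -> hand-made embedding.
store = {
    "Paris is the capital of France.": [0.9, 0.1, 0.0],
    "The mitochondria is the powerhouse of the cell.": [0.0, 0.9, 0.2],
    "Python is a programming language.": [0.1, 0.0, 0.95],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=1):
    # Rank stored passages by similarity to the query embedding and
    # return the top-k; these would then be stuffed into the LLM prompt.
    ranked = sorted(store, key=lambda t: cosine(store[t], query_vec), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # → ['Paris is the capital of France.']
```

The limitation RecallM targets is visible even here: similarity search retrieves whatever is geometrically close, with no notion of how facts relate to, or supersede, one another over time.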
Enter RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models by Brandon Kynoch, Hugo Latapie, and Dwane van der Sluis. This paper proposes the use of an automatically constructed knowledge graph as the backbone of long-term memory for LLMs.
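To give a rough intuition for the graph-based alternative before the deep dive, here is a toy sketch of a concept graph that links concepts co-occurring in the same statement. The capitalized-word heuristic for concept extraction is a stand-in of our own, not the paper's actual pipeline, which the following sections walk through in detail.

```python
import re
from collections import defaultdict

# Toy knowledge graph: each concept maps to the concepts it is linked
# with, plus the raw statements that mention it.
graph = defaultdict(set)       # concept -> related concepts
contexts = defaultdict(list)   # concept -> source statements

def update(statement):
    # Naive concept extraction: treat capitalized words as concepts.
    concepts = re.findall(r"\b[A-Z][a-z]+\b", statement)
    for c in concepts:
        contexts[c].append(statement)
        for other in concepts:
            if other != c:
                graph[c].add(other)  # link co-occurring concepts

update("Brandon works at Cisco.")
update("Cisco is headquartered in San Jose.")

# Answering a question about "Cisco" would pull its neighbors and the
# statements attached to them, rather than nearest-neighbor vectors.
print(sorted(graph["Cisco"]))  # → ['Brandon', 'Jose', 'San']
```

Unlike a vector store, the graph accumulates explicit relations between concepts as new statements arrive, which is what lets later updates revise or extend earlier knowledge.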
This blog post is a deep dive into the mechanics of RecallM, focusing on how it updates its knowledge graph and performs inference, underpinned by a series of illustrative examples.
We’ll start by exploring how the knowledge graph updates work, walking through two specific examples to clarify the process. Following that, we’ll examine the inference mechanism of RecallM with another example, showcasing how it draws on the knowledge graph to generate responses. Our discussion will also cover examples of temporal reasoning, demonstrating RecallM’s proficiency in understanding and applying time-based knowledge.