DataStax RAGStack Knowledge Store
Hybrid Knowledge Store combining vector similarity and edges between chunks.
Usage

- Pre-process your documents to populate `metadata` information.
- Create a Hybrid `KnowledgeStore` and add your LangChain `Document`s.
- Retrieve documents from the `KnowledgeStore`.
Populate Metadata
The Knowledge Store makes use of the following metadata fields on each `Document`:

- `content_id`: If assigned, this specifies the unique ID of the `Document`. If not assigned, one will be generated. This should be set if you may re-ingest the same document, so that it is overwritten rather than duplicated.
- `parent_content_id`: If this `Document` is a chunk of a larger document, you may reference the parent content here.
- `keywords`: A list of strings representing keywords present in this `Document`.
- `hrefs`: A list of strings containing the URLs which this `Document` links to.
- `urls`: A list of strings containing the URLs associated with this `Document`. If one webpage is divided into multiple chunks, each chunk's `Document` would have the same URL. One webpage may have multiple URLs if it is available in multiple ways.
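As a quick illustration of the metadata shape described above, here is a minimal sketch using a plain dict in place of a LangChain `Document`; all IDs, URLs, and content are invented examples:

```python
# Sketch of the metadata fields described above, on a plain dict standing in
# for a LangChain Document. Every value here is a made-up illustration.
chunk = {
    "page_content": "Astra DB supports vector search ...",
    "metadata": {
        "content_id": "docs/astra-vector#part-2",   # unique, stable across re-ingest
        "parent_content_id": "docs/astra-vector",   # the full page this chunk came from
        "keywords": ["astra", "vector search"],
        "hrefs": ["https://docs.datastax.com/"],    # links this chunk points to
        "urls": ["https://docs.datastax.com/astra-vector"],  # where this chunk lives
    },
}
print(sorted(chunk["metadata"]))
```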
Keywords
To link documents with common keywords, assign the `keywords` metadata of each `Document`.

There are various ways to assign keywords to each `Document`, such as TF-IDF across the documents. One easy option is to use KeyBERT. Once installed with `pip install keybert`, you can add keywords to a list `documents` as follows:
from keybert import KeyBERT

kw_model = KeyBERT()

# Extract keywords from each document's content.
keywords = kw_model.extract_keywords(
    [doc.page_content for doc in documents],
    stop_words="english",
)
for doc, kws in zip(documents, keywords):
    doc.metadata["keywords"] = [kw for kw, _distance in kws]
Rather than taking all of the top keywords, you could also limit them to those with less than a certain `_distance` to the document.
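For instance, a minimal sketch of such a filter, following the `(keyword, _distance)` pair shape returned above; the 0.4 cutoff is an arbitrary illustration, not a recommended value:

```python
# Keep only keywords whose _distance to the document is below a threshold.
# The pairs follow the (keyword, _distance) shape used above; the 0.4
# default is an arbitrary example value.
def filter_keywords(pairs, max_distance=0.4):
    return [kw for kw, distance in pairs if distance < max_distance]

print(filter_keywords([("cassandra", 0.12), ("vector", 0.35), ("misc", 0.80)]))
# → ['cassandra', 'vector']
```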
Hyperlinks
To capture hyperlinks, populate the `hrefs` and `urls` metadata fields of each `Document`:
import re

link_re = re.compile(r'href="([^"]+)')

for doc in documents:
    doc.metadata["content_id"] = doc.metadata["source"]
    doc.metadata["hrefs"] = list(link_re.findall(doc.page_content))
    doc.metadata["urls"] = [doc.metadata["source"]]
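To see what the regex above extracts, here is a self-contained sketch run against an invented HTML snippet (the URLs are examples, not real document sources):

```python
import re

# Same pattern as above: capture everything after href=" up to the closing quote.
link_re = re.compile(r'href="([^"]+)')

# Hypothetical chunk content containing two links.
sample = (
    '<a href="https://docs.datastax.com/">Docs</a> and '
    '<a href="https://python.langchain.com/">LangChain</a>'
)
print(link_re.findall(sample))
# → ['https://docs.datastax.com/', 'https://python.langchain.com/']
```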
Store
import cassio
from langchain_openai import OpenAIEmbeddings

from ragstack_knowledge_store import KnowledgeStore

# Initialize the Cassandra/Astra DB connection from the environment.
cassio.init(auto=True)

knowledge_store = KnowledgeStore(embeddings=OpenAIEmbeddings())

# Store the documents
knowledge_store.add_documents(documents)
Retrieve
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

# Retrieve and generate using the relevant snippets of the indexed documents.
# Depth 0: don't traverse edges; equivalent to vector-only retrieval.
# Depth 1: vector search plus one level of edges.
retriever = knowledge_store.as_retriever(k=4, depth=1)
template = """You are a helpful technical support bot. You should provide complete answers explaining the options the user has available to address their problem. Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
def format_docs(docs):
    return "\n\n".join(
        f"From {doc.metadata['content_id']}: {doc.page_content}" for doc in docs
    )
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
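To make the `depth` parameter concrete, here is a toy sketch of depth-limited edge traversal: start from the vector-search hits, then follow edges between chunks up to `depth` levels. This illustrates the idea only and is not the library's actual implementation:

```python
# Toy sketch: depth-limited traversal over chunk-to-chunk edges.
# `hits` are the vector-search results; `edges` maps a chunk ID to the
# IDs it links to. Not the library's implementation.
def traverse(hits, edges, depth):
    found = set(hits)
    frontier = set(hits)
    for _ in range(depth):
        frontier = {n for node in frontier for n in edges.get(node, [])} - found
        found |= frontier
    return found

edges = {"a": ["b"], "b": ["c"]}
print(sorted(traverse(["a"], edges, depth=0)))  # ['a']       (vector-only)
print(sorted(traverse(["a"], edges, depth=1)))  # ['a', 'b']  (one level of edges)
```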
Development
poetry install --with=dev
# Run Tests
poetry run pytest
Hashes for ragstack_ai_knowledge_store-0.0.2.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 181ab61728bbbc52e625312f81434c36ea8d567966ae6316cb4bf943826395d5
MD5 | 071c6021a28f4f60711bc6c71901372b
BLAKE2b-256 | 7db5e1d141c0790094e714a5d6a246c370d9242a840800d8483cda70d534b625

Hashes for ragstack_ai_knowledge_store-0.0.2-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 9b7bcde99440d003d27ce4fbcd4c73ea53fb40ef859cab1bcddd5b4c01e04fe2
MD5 | a787381683206b3473d3f3a6982a3379
BLAKE2b-256 | e75f0bc6c74b7b4fbc1ca45941ec281eb48890f536348e7c544785187455207d