
DataStax RAGStack Knowledge Store

A hybrid knowledge store that combines vector similarity search with explicit edges between chunks.

Usage

  1. Pre-process your documents to populate metadata information.
  2. Create a Hybrid KnowledgeStore and add your LangChain Documents.
  3. Retrieve documents from the KnowledgeStore.

Populate Metadata

The Knowledge Store makes use of the following metadata fields on each Document:

  • content_id: If assigned, this specifies the unique ID of the Document. If not assigned, one is generated. Set this if you may re-ingest the same document, so the existing entry is overwritten rather than duplicated.
  • parent_content_id: If this Document is a chunk of a larger document, you may reference the parent content here.
  • keywords: A list of strings representing keywords present in this Document.
  • hrefs: A list of strings containing the URLs which this Document links to.
  • urls: A list of strings containing the URLs associated with this Document. If one webpage is divided into multiple chunks, each chunk's Document would have the same URL. One webpage may have multiple URLs if it is available in multiple ways.
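Putting those fields together, a single chunk's metadata might look like the following sketch (a plain dict is shown in place of a LangChain Document's .metadata, and every value is hypothetical):

```python
# Hypothetical metadata for one chunk split out of a larger web page.
chunk_metadata = {
    "content_id": "https://example.com/docs/intro#chunk-0",  # unique ID for this chunk
    "parent_content_id": "https://example.com/docs/intro",   # the page it was split from
    "keywords": ["knowledge store", "vector search"],        # keywords found in the chunk
    "hrefs": ["https://example.com/docs/setup"],             # pages this chunk links to
    "urls": ["https://example.com/docs/intro"],              # where this chunk is published
}
```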

Keywords

To link documents with common keywords, assign the keywords metadata of each Document.

There are various ways to assign keywords to each Document, such as computing TF-IDF across the documents. One easy option is to use KeyBERT.

Once installed with pip install keybert, you can add keywords to a list of documents as follows:

from keybert import KeyBERT

kw_model = KeyBERT()

# Extract keywords for every document in one batch call.
keywords = kw_model.extract_keywords(
    [doc.page_content for doc in documents],
    stop_words="english",
)

# Attach the keywords (dropping the scores) to each document's metadata.
for doc, kws in zip(documents, keywords):
    doc.metadata["keywords"] = [kw for kw, _score in kws]

Rather than keeping all of the returned keywords, you could also discard those whose score falls below a threshold (KeyBERT returns a relevance score alongside each keyword).
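As a sketch of that filtering step (pure Python; the sample pairs and the 0.4 threshold are made up for illustration, not real KeyBERT output):

```python
def filter_keywords(pairs, min_score=0.4):
    """Keep only (keyword, score) pairs whose score clears the threshold."""
    return [kw for kw, score in pairs if score >= min_score]

# Hypothetical (keyword, score) pairs in the shape KeyBERT returns.
sample = [("vector search", 0.62), ("graph traversal", 0.51), ("page", 0.08)]
print(filter_keywords(sample))  # → ['vector search', 'graph traversal']
```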

Hyperlinks

To capture hyperlinks, populate the hrefs and urls metadata fields of each Document.

import re

# Capture the target of every double-quoted href attribute in the raw content.
link_re = re.compile('href="([^"]+)')

for doc in documents:
    doc.metadata["content_id"] = doc.metadata["source"]
    doc.metadata["hrefs"] = list(link_re.findall(doc.page_content))
    doc.metadata["urls"] = [doc.metadata["source"]]
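To sanity-check that pattern, here is what it extracts from a small hypothetical HTML snippet:

```python
import re

link_re = re.compile('href="([^"]+)')

# Made-up snippet with two links.
html = '<a href="https://example.com/a">A</a> and <a href="https://example.com/b">B</a>'
print(link_re.findall(html))  # → ['https://example.com/a', 'https://example.com/b']
```

Note that the regex only matches double-quoted href attributes; for messier markup, an HTML parser would be more robust.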

Store

import cassio
from langchain_openai import OpenAIEmbeddings
from ragstack_knowledge_store import KnowledgeStore

# Initialize the Cassandra / Astra DB connection from environment variables.
cassio.init(auto=True)

knowledge_store = KnowledgeStore(embeddings=OpenAIEmbeddings())

# Store the documents
knowledge_store.add_documents(documents)

Retrieve

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

# Retrieve and generate using the relevant snippets.
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Depth 0: don't traverse edges; equivalent to vector-only retrieval.
# Depth 1: vector search plus one level of edges.
retriever = knowledge_store.as_retriever(k=4, depth=1)

template = """You are a helpful technical support bot. You should provide complete answers explaining the options the user has available to address their problem. Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

def format_docs(docs):
    formatted = "\n\n".join(f"From {doc.metadata['content_id']}: {doc.page_content}" for doc in docs)
    return formatted


rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
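The prompt-formatting step can be exercised without any model calls; here is a sketch using stand-in documents (FakeDoc is a hypothetical minimal class for illustration, not part of the library):

```python
class FakeDoc:
    """Minimal stand-in for a LangChain Document: just metadata and page_content."""
    def __init__(self, content_id, text):
        self.metadata = {"content_id": content_id}
        self.page_content = text

def format_docs(docs):
    # Same formatting as above: prefix each chunk with its content_id.
    return "\n\n".join(
        f"From {doc.metadata['content_id']}: {doc.page_content}" for doc in docs
    )

docs = [FakeDoc("intro.html", "First chunk."), FakeDoc("setup.html", "Second chunk.")]
print(format_docs(docs))
# From intro.html: First chunk.
#
# From setup.html: Second chunk.
```

With real retrieval in place, the assembled chain is run with rag_chain.invoke("your question").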

Development

poetry install --with=dev

# Run Tests
poetry run pytest

Download files

Source distribution: ragstack_ai_knowledge_store-0.0.3.tar.gz (15.7 kB)

Built distribution: ragstack_ai_knowledge_store-0.0.3-py3-none-any.whl (17.2 kB)
