

Project description

LlamaIndex Node_Parser Integration: TopicNodeParser

Implements the topic node parser described in the MedGraphRAG paper, which aims to improve the capabilities of LLMs in the medical domain by generating evidence-based results through a novel graph-based Retrieval-Augmented Generation framework, improving safety and reliability when handling private medical data.

TopicNodeParser implements an approximate version of the chunking technique described in the paper.

Here is the technique as outlined in the paper:

Large medical documents often contain multiple themes or diverse content. To process these effectively, we first segment the document into data chunks that conform to the context limitations of Large Language Models (LLMs). Traditional methods such as chunking based on token size or fixed characters typically fail to detect subtle shifts in topics accurately. Consequently, these chunks may not fully capture the intended context, leading to a loss in the richness of meaning.

To enhance accuracy, we adopt a mixed method of character separation coupled with topic-based segmentation. Specifically, we utilize static characters (line break symbols) to isolate individual paragraphs within the document. Following this, we apply a derived form of the text for semantic chunking. Our approach includes the use of proposition transfer, which extracts standalone statements from a raw text Chen et al. (2023). Through proposition transfer, each paragraph is transformed into self-sustaining statements. We then conduct a sequential analysis of the document to assess each proposition, deciding whether it should merge with an existing chunk or initiate a new one. This decision is made via a zero-shot approach by an LLM. To reduce noise generated by sequential processing, we implement a sliding window technique, managing five paragraphs at a time. We continuously adjust the window by removing the first paragraph and adding the next, maintaining focus on topic consistency. We set a hard threshold that the longest chunk cannot exceed the context length limitation of the LLM. After chunking the document, we construct a graph on each individual data chunk.
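The sequential merge step described above can be sketched in plain Python. This is an illustrative sketch, not the package's actual implementation: the zero-shot LLM decision is stubbed out as a simple keyword-overlap check, and the names `same_topic`, `topic_chunk`, and the sample propositions are hypothetical.

```python
def same_topic(chunk: list[str], proposition: str) -> bool:
    """Stub for the zero-shot LLM decision. Here, a proposition that
    shares any word with the current chunk counts as the same topic;
    the real parser asks an LLM (or compares embeddings) instead."""
    words = {w.lower() for p in chunk for w in p.split()}
    return bool(words & {w.lower() for w in proposition.split()})

def topic_chunk(propositions: list[str], max_chunk_size: int = 1000) -> list[list[str]]:
    """Sequentially merge propositions into chunks, starting a new chunk
    when the topic shifts or the size cap would be exceeded."""
    chunks: list[list[str]] = []
    current: list[str] = []
    for prop in propositions:
        size = sum(len(p) for p in current) + len(prop)
        if current and same_topic(current, prop) and size <= max_chunk_size:
            current.append(prop)  # same topic: merge into the open chunk
        else:
            if current:
                chunks.append(current)  # topic shift: close the chunk
            current = [prop]
    if current:
        chunks.append(current)
    return chunks

props = [
    "Aspirin reduces fever.",
    "Aspirin also thins the blood.",
    "MRI scanners use strong magnetic fields.",
]
chunks = topic_chunk(props)
# the two aspirin propositions merge; the MRI one starts a new chunk
```

The hard threshold from the paper corresponds to `max_chunk_size` here; the sliding-window noise reduction (five paragraphs at a time) is omitted for brevity.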

Installation

pip install llama-index-node-parser-topic

Usage

from llama_index.core import Document
from llama_index.node_parser.topic import TopicNodeParser

# llm must be a LlamaIndex LLM instance defined beforehand (e.g. OpenAI)
node_parser = TopicNodeParser.from_defaults(
    llm=llm,
    max_chunk_size=1000,
    similarity_method="llm",  # can be "llm" or "embedding"
    # embed_model=embed_model,  # used for "embedding" similarity_method
    # similarity_threshold=0.8,  # used for "embedding" similarity_method
    window_size=2,  # paper suggests window_size=5
)

nodes = node_parser(
    [
        Document(text="document text 1"),
        Document(text="document text 2"),
    ],
)



Download files


Source Distribution

llama_index_node_parser_topic-0.3.1.tar.gz (7.6 kB)

Built Distribution

llama_index_node_parser_topic-0.3.1-py3-none-any.whl

File details

Details for the file llama_index_node_parser_topic-0.3.1.tar.gz.

File metadata

File hashes

Hashes for llama_index_node_parser_topic-0.3.1.tar.gz
Algorithm Hash digest
SHA256 273266fcda9368bd230a53d91d6f19ab8c77e6f5b266e481c552e7bd086e9945
MD5 015c2f2587d9047e50eb8638a29d30fa
BLAKE2b-256 c047b4909471c89cd041e519ca60d213472bb9db4166b08ee33843649a6ab2dd


File details

Details for the file llama_index_node_parser_topic-0.3.1-py3-none-any.whl.

File metadata

File hashes

Hashes for llama_index_node_parser_topic-0.3.1-py3-none-any.whl
Algorithm Hash digest
SHA256 03bf5b32f0c5b390bf6779f00ca38875b6a83d834f4414f790d83b11bf727c5f
MD5 9c807d0fc6446780bec45e9f301d06e7
BLAKE2b-256 c1a1e048ccfb686d649344942e6eb9326d5c0f1d4d061e62a19d225574cdc42c

