LlamaIndex Node_Parser Integration: TopicNodeParser
Implements the topic node parser described in the MedGraphRAG paper, which aims to improve the capabilities of LLMs in the medical domain by generating evidence-based results through a novel graph-based Retrieval-Augmented Generation framework, improving safety and reliability when handling private medical data.
TopicNodeParser
It implements an approximate version of the chunking technique described in the paper.
Here is the technique as outlined in the paper:
Large medical documents often contain multiple themes or diverse content. To process these effectively, we first segment the document into data chunks that conform to the context limitations of Large Language Models (LLMs). Traditional methods such as chunking based on token size or fixed characters typically fail to detect subtle shifts in topics accurately. Consequently, these chunks may not fully capture the intended context, leading to a loss in the richness of meaning.
To enhance accuracy, we adopt a mixed method of character separation coupled with topic-based segmentation. Specifically, we utilize static characters (line break symbols) to isolate individual paragraphs within the document. Following this, we apply a derived form of the text for semantic chunking. Our approach includes the use of proposition transfer, which extracts standalone statements from a raw text Chen et al. (2023). Through proposition transfer, each paragraph is transformed into self-sustaining statements. We then conduct a sequential analysis of the document to assess each proposition, deciding whether it should merge with an existing chunk or initiate a new one. This decision is made via a zero-shot approach by an LLM. To reduce noise generated by sequential processing, we implement a sliding window technique, managing five paragraphs at a time. We continuously adjust the window by removing the first paragraph and adding the next, maintaining focus on topic consistency. We set a hard threshold that the longest chunk cannot exceed the context length limitation of the LLM. After chunking the document, we construct a graph on each individual data chunk.
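The procedure above can be sketched in plain Python. This is an illustration, not the package's implementation: `same_topic` stands in for the paper's zero-shot LLM judgment (here a toy keyword-overlap test), and all names are hypothetical.

```python
def chunk_propositions(propositions, same_topic, window_size=5, max_chunk_len=1000):
    """Greedy sequential chunking: each proposition either joins the current
    chunk or starts a new one, judged against a sliding window of the most
    recent propositions (the paper delegates this decision to an LLM)."""
    chunks = []
    current = []  # propositions in the chunk being built
    window = []   # last `window_size` propositions seen
    for prop in propositions:
        too_long = current and len(" ".join(current)) + len(prop) > max_chunk_len
        if current and (too_long or not same_topic(window, prop)):
            # Hard length cap or topic shift: close the chunk, start a new one.
            chunks.append(" ".join(current))
            current = []
        current.append(prop)
        window.append(prop)
        if len(window) > window_size:
            window.pop(0)  # slide the window: drop the oldest proposition
    if current:
        chunks.append(" ".join(current))
    return chunks


def keyword_overlap(window, prop):
    # Toy stand-in for the zero-shot LLM call: same topic if the proposition
    # shares at least one word with the window.
    if not window:
        return True
    seen = set(" ".join(window).lower().split())
    return bool(seen & set(prop.lower().split()))
```

With `keyword_overlap`, propositions about the same subject merge into one chunk and an unrelated proposition opens a new one; swapping in an LLM call for `same_topic` recovers the paper's scheme.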
Installation
pip install llama-index-node-parser-topic
Usage
from llama_index.core import Document
from llama_index.node_parser.topic import TopicNodeParser
node_parser = TopicNodeParser.from_defaults(
    llm=llm,
    max_chunk_size=1000,
    similarity_method="llm",  # can be "llm" or "embedding"
    # embed_model=embed_model,  # used for the "embedding" similarity_method
    # similarity_threshold=0.8,  # used for the "embedding" similarity_method
    window_size=2,  # the paper suggests window_size=5
)
nodes = node_parser(
    [
        Document(text="document text 1"),
        Document(text="document text 2"),
    ],
)
File details
Details for the file llama_index_node_parser_topic-0.1.0.tar.gz.
File metadata
- Download URL: llama_index_node_parser_topic-0.1.0.tar.gz
- Upload date:
- Size: 6.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.10.12 Linux/6.8.0-1014-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | 14b65d06b952e927ae5f4c5c5927d4dab0d826117a5ee05943bf1b060b38aab3
MD5 | f767d6d3d1715d8e98c285a2ac6ce2c9
BLAKE2b-256 | 561aa90f1627cb03f3628140cfa43f00e57292e0d102631fe9cd98a098356e03
File details
Details for the file llama_index_node_parser_topic-0.1.0-py3-none-any.whl.
File metadata
- Download URL: llama_index_node_parser_topic-0.1.0-py3-none-any.whl
- Upload date:
- Size: 6.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.10.12 Linux/6.8.0-1014-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | 9261d4e7ab034ba236605d36a51451025205ed993393147f2f88459a107a5254
MD5 | 7530b45f0d397439b7d182bbc5a003cd
BLAKE2b-256 | 6276572735def7e7ec84a8336c340037bc461182fca29f209e07e8f553e5ac33