semantic-text-splitter

Split text into semantic chunks, up to a desired chunk size. Supports calculating length by characters and by tokens (when used with large language models).
Large language models (LLMs) can be used for many tasks, but often have a limited context size that can be smaller than documents you might want to use. To use documents of larger length, you often have to split your text into chunks to fit within this context size.
This crate provides methods for splitting longer pieces of text into smaller chunks, aiming to maximize a desired chunk size, but still splitting at semantically sensible boundaries whenever possible.
Get Started
By Number of Characters
```python
from semantic_text_splitter import CharacterTextSplitter

# Maximum number of characters in a chunk
max_characters = 1000

# Optionally can also have the splitter not trim whitespace for you
splitter = CharacterTextSplitter(trim_chunks=False)

chunks = splitter.chunks("your document text", max_characters)
```
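To make the effect of `max_characters` and trimming concrete, here is a naive pure-Python sketch (illustrative only; it ignores the semantic boundaries the real splitter respects):

```python
def chunk_by_characters(text, max_characters, trim=True):
    """Naive sketch: greedily take up to max_characters at a time.
    With trim=True, surrounding whitespace is stripped from each chunk,
    mirroring the splitter's default trimming behavior."""
    chunks = []
    for i in range(0, len(text), max_characters):
        piece = text[i:i + max_characters]
        chunks.append(piece.strip() if trim else piece)
    # Trimming can leave empty chunks; drop them.
    return [c for c in chunks if c]
```

For example, `chunk_by_characters("  hi  ", 10)` returns `["hi"]`, while `chunk_by_characters("  hi  ", 10, trim=False)` returns `["  hi  "]`.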
Using a Range for Chunk Capacity
You also have the option of specifying your chunk capacity as a range. Once a chunk has reached a length that falls within the range, it will be returned. It is still possible for a chunk to be returned that is shorter than the start value, since adding the next piece of text would have pushed it past the end capacity.
```python
from semantic_text_splitter import CharacterTextSplitter

# Optionally can also have the splitter trim whitespace for you
splitter = CharacterTextSplitter()

# Maximum number of characters in a chunk. Will fill up the
# chunk until it is somewhere in this range.
chunks = splitter.chunks("your document text", chunk_capacity=(200, 1000))
```
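The range behavior can be sketched in pure Python (a simplified illustration over whitespace-separated words, not the crate's actual algorithm):

```python
def chunk_with_capacity(text, capacity):
    """Greedy sketch: merge whitespace-separated pieces until a chunk's
    length falls inside the (start, end) capacity range."""
    start, end = capacity
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) > end and current:
            # Adding this word would overshoot the end capacity, so the
            # current chunk is emitted even if it is shorter than `start`.
            chunks.append(current)
            current = word
        else:
            current = candidate
            if len(current) >= start:
                # Chunk length has reached the desired range; emit it.
                chunks.append(current)
                current = ""
    if current:
        chunks.append(current)
    return chunks

chunk_with_capacity("one two three four five six seven", (8, 15))
# → ["one two three", "four five", "six seven"]
```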
Method
To preserve as much semantic meaning within a chunk as possible, a recursive approach is used, starting at larger semantic units and, if that is too large, breaking it up into the next largest unit. Here is an example of the steps used:
- Split the text by a given level
- For each section, does it fit within the chunk size?
- Yes. Merge as many of these neighboring sections into a chunk as possible to maximize chunk length.
- No. Split by the next level and repeat.
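The steps above can be sketched as a small recursive function. This is only an illustration of the approach, assuming a simplified set of levels (paragraphs, lines, words), not the crate's actual implementation:

```python
# Separators for each semantic level, from largest to smallest unit.
LEVELS = ["\n\n", "\n", " "]

def semantic_chunks(text, max_size, level=0):
    """Split at the current level, recurse into oversized sections,
    then greedily merge neighboring sections up to max_size."""
    if len(text) <= max_size:
        return [text]
    if level >= len(LEVELS):
        # Lowest level: fall back to fixed-size slices.
        return [text[i:i + max_size] for i in range(0, len(text), max_size)]
    sections = []
    for part in text.split(LEVELS[level]):
        if len(part) > max_size:
            # Section is too large: split by the next (smaller) level.
            sections.extend(semantic_chunks(part, max_size, level + 1))
        else:
            sections.append(part)
    # Merge as many neighboring sections as possible into each chunk.
    merged, current = [], ""
    sep = LEVELS[level]
    for s in sections:
        candidate = current + sep + s if current else s
        if len(candidate) <= max_size:
            current = candidate
        else:
            if current:
                merged.append(current)
            current = s
    if current:
        merged.append(current)
    return merged
```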
The boundaries used to split the text when using the top-level chunks method, in descending order:
- Descending sequence length of newlines. (Newline is \r\n, \n, or \r.) Each unique length of consecutive newline sequences is treated as its own semantic level.
- Unicode Sentence Boundaries
- Unicode Word Boundaries
- Unicode Grapheme Cluster Boundaries
- Characters
Splitting doesn't occur below the character level; otherwise you could get partial bytes of a character, which may not form a valid Unicode string.
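The newline-level rule can be illustrated with a short Python sketch (illustrative only; the crate itself is written in Rust). It finds each distinct length of consecutive-newline runs, where longer runs correspond to coarser semantic levels:

```python
import re

def newline_levels(text):
    """Return the distinct counts of consecutive newlines in the text,
    longest first. Each count is its own semantic level."""
    runs = re.findall(r"(?:\r\n|\r|\n)+", text)
    # Count newlines per run; \r\n counts as a single newline.
    lengths = {len(re.findall(r"\r\n|\r|\n", run)) for run in runs}
    return sorted(lengths, reverse=True)

newline_levels("a\n\n\nb\n\nc\nd")  # → [3, 2, 1]
```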
Note on sentences: there are many methods of determining sentence breaks, with varying degrees of accuracy, and many require ML models. Rather than trying to find the perfect sentence breaks, we rely on Unicode sentence boundaries, which in most cases are good enough for finding a decent semantic breaking point if a paragraph is too large, and which avoid the performance penalties of many other approaches.
Inspiration
This crate was inspired by LangChain's TextSplitter. But, looking into the implementation, there was potential for better performance as well as better semantic chunking.
A big thank you to the unicode-rs team for their unicode-segmentation crate that manages a lot of the complexity of matching the Unicode rules for words and sentences.