
A simple, efficient text chunking library for RAG applications



🦛 Chonkie

So I found myself making another RAG bot (for the 2342148th time), and while explaining to my juniors why we should use chunking in our RAG bots, I realised I'd have to write chunking all over again unless I used the bloated software library X or the extremely feature-less library Y. WHY CAN I NOT HAVE GOOD THINGS IN LIFE, UGH?

Can't I just install, import, and run chunking without having to worry about dependencies, bloat, speed, or other factors?

Well, with chonkie you can! (chonkie boi is a gud boi)

✅ All the CHONKs you'd ever need
✅ Easy to use: Install, Import, CHONK
✅ No bloat, just CHONK
✅ Cute CHONK mascoot
✅ Moto Moto's favorite python library

What're you waiting for, just CHONK it!


Why do we need Chunking?

Here are some arguments for why you'd want to chunk your texts in a RAG scenario:

  • Most RAG pipelines today are bottlenecked by context length. Even as future LLMs exceed 1M-token context lengths, the LLM is not the only model in the pipeline: bi-encoder retrievers, cross-encoder rerankers, and task-specific models (e.g., answer-relevancy and answer-attribution models) can each impose their own context-length bottleneck.
  • Even with infinite context, there's no free lunch on the context side: simply reading a string of length n costs at least O(n), so models can never be made more than linearly efficient in context size. With smaller contexts, the search and generation pipeline is therefore more efficient (lower response latency).
  • Research suggests that a lot of random, noisy context can actually increase hallucination in model responses. If we ensure that each chunk passed to the model is relevant, the model ends up producing better responses.

Approaches to doing chunking

  1. Token Chunking (a.k.a. Fixed-Size Chunking or Sliding-Window Chunking)
  2. Word Chunking
  3. Sentence Chunking
  4. Semantic Chunking
  5. Semantic Double-Pass Merge (SDPM) Chunking
  6. Context-aware Chunking
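To make the first approach concrete, here's a minimal, dependency-free sketch of token chunking with a sliding window. Whitespace tokens stand in for real tokenizer tokens, and the function name and parameters are illustrative, not Chonkie's API:

```python
def token_chunk(text, chunk_size=5, overlap=2):
    """Split text into fixed-size token windows that overlap by `overlap` tokens.

    Whitespace splitting stands in for a real tokenizer (e.g., a BPE tokenizer).
    """
    tokens = text.split()
    step = chunk_size - overlap  # how far the window slides each iteration
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + chunk_size]
        chunks.append(" ".join(window))
        if start + chunk_size >= len(tokens):
            break  # last window already reached the end of the text
    return chunks

text = "one two three four five six seven eight nine ten"
for chunk in token_chunk(text, chunk_size=4, overlap=1):
    print(chunk)
```

The overlap means each window repeats the tail of the previous one, so a sentence cut at a window boundary still appears intact in at least one chunk.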

Installation

To install chonkie, simply run:

pip install chonkie

Usage

Here's a basic example to get you started:

from chonkie import TokenChunker

# Initialize the chunker
chunker = TokenChunker()

# Chunk some text
chunks = chunker.chunk("Your text here")
print(chunks)
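The other strategies follow the same install-import-CHONK pattern. For intuition on how sentence chunking differs from the token chunker above, here's a hedged, dependency-free sketch (a naive regex sentence splitter plus greedy packing under a word budget; the names are illustrative, not Chonkie's API):

```python
import re

def sentence_chunk(text, max_words=20):
    """Greedily pack whole sentences into chunks of at most max_words words."""
    # Naive splitter: break after ., !, or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for sent in sentences:
        words = len(sent.split())
        if current and count + words > max_words:
            # Adding this sentence would overflow the budget: flush the chunk.
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(sent)
        count += words
    if current:
        chunks.append(" ".join(current))
    return chunks

doc = ("Chunking helps retrieval. Smaller chunks are cheaper to embed. "
       "Very long documents need it most.")
for c in sentence_chunk(doc, max_words=10):
    print(c)
```

Unlike the fixed-size window, this never splits mid-sentence, which tends to give the retriever cleaner, self-contained chunks.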

Citation

If you use Chonkie in your research, please cite it as follows:

@misc{chonkie2024,
  author = {Minhas, Bhavnick},
  title = {Chonkie: A Lightweight Chunking Library for RAG Bots},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/bhavnick/chonkie}},
}
