
Chunking components for the Sayou Data Platform


sayou-chunking


sayou-chunking is a context-aware text splitting library for Python. It transforms raw text documents into Knowledge Graph-ready nodes, focusing on preserving semantic structure, hierarchy, and context.

This library is the "Assembler Preparation" component of the Sayou Data Platform. It sits between data cleaning (sayou-refinery) and knowledge graph construction (sayou-assembler), ensuring that RAG pipelines operate on logical, structured units of information rather than fragmented text.

Philosophy

sayou-chunking is built on one principle: "How you split determines how you retrieve." Naive chunking destroys context (e.g., splitting a table in half), so we prioritize Structure-First Splitting:

  1. Atomic Protection: Never split atomic blocks like Tables or Code Snippets.
  2. Hierarchical Binding: Headers are parents; contents are children. We maintain parent_id linkages for KG construction.
  3. Composite Strategies: Combine multiple splitting strategies (e.g., Structure for context, Recursive for retrieval units).
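
The "Atomic Protection" rule above can be sketched as a pre-pass that extracts indivisible blocks before any length-based splitting touches the prose. This is an illustrative sketch, not the library's internal `TextSegmenter`; the regex and function name are hypothetical.

```python
import re

# Hypothetical sketch of atomic-block protection: fenced code blocks and
# consecutive Markdown table rows are extracted as indivisible segments,
# so only the prose between them is eligible for further splitting.
ATOMIC_PATTERN = re.compile(
    r"(```.*?```"              # fenced code block (kept whole)
    r"|(?:^\|.*\|\s*$\n?)+)",  # run of consecutive Markdown table rows
    re.DOTALL | re.MULTILINE,
)

def split_protecting_atomics(text: str) -> list[str]:
    """Return segments in reading order; atomic blocks are never split."""
    segments, last = [], 0
    for match in ATOMIC_PATTERN.finditer(text):
        prose = text[last:match.start()].strip()
        if prose:
            segments.append(prose)                   # splittable prose
        segments.append(match.group(0).strip())      # atomic: keep whole
        last = match.end()
    tail = text[last:].strip()
    if tail:
        segments.append(tail)
    return segments
```

A length-based splitter would then run only over the prose segments, leaving the atomic ones untouched.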

🚀 Key Features

  • 4-Tier Architecture: Highly extensible design (Engine -> Interface -> Template -> Plugin -> Composite).
  • Atomic Protection: Built-in TextSegmenter engine prevents breaking Markdown tables and code blocks.
  • KG-Ready Output: Automatically generates parent_id, doc_level, and semantic_type metadata.
  • Smart Plugins:
    • MarkdownPlugin: Anchors chunks to Headers (#) and classifies content (Table, List, H1...).
    • ParentDocument: Implements "Small-to-Big" retrieval strategy using composite splitters.
  • Semantic Awareness: Detects topic shifts to create logically grouped chunks (Tier 2 Template).
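
The "Small-to-Big" strategy behind `ParentDocumentSplitter` works by searching over small child chunks for precision, then returning the larger parent chunk for context. A minimal sketch of the idea, using the `parent_id` metadata described above (the function and the toy term-overlap scorer are illustrative, not the library's API):

```python
# Illustrative "Small-to-Big" retrieval: match the query against small child
# chunks, but hand back the larger parent chunk for generation.
# Chunk dicts mirror the KG-ready metadata (chunk_id, parent_id) shown below.
def small_to_big(query: str, chunks: list[dict]) -> str:
    parents = {c["metadata"]["chunk_id"]: c for c in chunks}
    children = [c for c in chunks if "parent_id" in c["metadata"]]

    # Toy relevance score: term overlap. A real pipeline would use embeddings.
    def score(chunk: dict) -> int:
        query_terms = set(query.lower().split())
        chunk_terms = set(chunk["chunk_content"].lower().split())
        return len(query_terms & chunk_terms)

    best_child = max(children, key=score)
    parent = parents.get(best_child["metadata"]["parent_id"])
    # Return the parent's content when it exists, else fall back to the child.
    return parent["chunk_content"] if parent else best_child["chunk_content"]
```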

📦 Installation

pip install sayou-chunking

⚡ Quickstart

The ChunkingPipeline orchestrates the splitting process. You can register any combination of Tier 2 Templates or Tier 3 Plugins.

Here is a complete example demonstrating how to process a Markdown file generated by sayou-refinery.

import os
import json
from typing import List

# 1. Import Core Pipeline & Interface
from sayou.chunking.pipeline import ChunkingPipeline
from sayou.chunking.interfaces.base_splitter import BaseSplitter

# 2. Import Splitters (Templates & Plugins)
from sayou.chunking.splitter.fixed_length import FixedLengthSplitter
from sayou.chunking.splitter.recursive import RecursiveSplitter
from sayou.chunking.splitter.structure import StructureSplitter
from sayou.chunking.splitter.semantic import SemanticSplitter
from sayou.chunking.splitter.parent_document import ParentDocumentSplitter
from sayou.chunking.plugins.markdown_plugin import MarkdownPlugin

def run_chunking_demo():
    # Setup File Paths
    refinery_output_md = os.path.join(".", "test.md")

    with open(refinery_output_md, "r", encoding="utf-8") as f:
        markdown_content = f.read()

    source_metadata = {
        "source_file": refinery_output_md, 
        "id": "doc_refinery_output"
    }

    # 3. Register Splitters
    default_splitters: List[BaseSplitter] = [
        FixedLengthSplitter(),
        ParentDocumentSplitter(),
        RecursiveSplitter(),
        SemanticSplitter(),
        StructureSplitter(),
        MarkdownPlugin(),
    ]

    # 4. Initialize Pipeline
    pipeline = ChunkingPipeline(splitters=default_splitters)
    pipeline.initialize()

    # 5. Create Split Request
    # We use 'markdown' type to leverage structure-aware splitting
    split_request = {
        "type": "markdown",
        "content": markdown_content,
        "metadata": source_metadata,
        "chunk_size": 1000,
        "chunk_overlap": 50
    }

    print("--- [Sayou Chunking Demo] ---")
    print(f"Splitting using '{split_request['type']}'...")

    try:
        # 6. Run Splitting
        chunks = pipeline.split(split_request)
        print(f"✅ Successfully split content into {len(chunks)} chunks.\n")

        # 7. Save Output
        output_dir = os.path.join(os.path.dirname(__file__), "output")
        os.makedirs(output_dir, exist_ok=True)
        
        output_path = os.path.join(output_dir, "chunks_output.json")
        with open(output_path, "w", encoding="utf-8") as f:
            json.dump(chunks, f, indent=2, ensure_ascii=False)
        
        print(f"Full output saved to {output_path}")

    except Exception as e:
        print(f"❌ Chunking failed: {e}")

if __name__ == "__main__":
    run_chunking_demo()

Example JSON Output (KG-Ready)

Notice how semantic_type is identified and parent_id links the content to its header.

[
    {
        "chunk_content": "# 1. Introduction",
        "metadata": {
            "chunk_id": "doc_123_h_0",
            "semantic_type": "h1",
            "is_header": true,
            "part_index": 0
        }
    },
    {
        "chunk_content": "| Feature | Status |\n|---|---|\n| Protect | Done |...",
        "metadata": {
            "chunk_id": "doc_123_part_1",
            "semantic_type": "table",
            "parent_id": "doc_123_h_0",
            "section_title": "1. Introduction",
            "part_index": 1
        }
    }
]
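
Because every non-header chunk carries a `parent_id`, downstream KG construction can rebuild the section tree in a single pass. A minimal sketch using the field names from the output above (the function itself is illustrative, not part of sayou-assembler):

```python
from collections import defaultdict

# Group child chunks under their header via the parent_id linkage,
# reconstructing the hierarchy encoded in the KG-ready metadata.
def build_sections(chunks: list[dict]) -> dict[str, list[dict]]:
    sections: dict[str, list[dict]] = defaultdict(list)
    for chunk in chunks:
        parent = chunk["metadata"].get("parent_id")
        if parent is not None:
            sections[parent].append(chunk)
    # Preserve reading order within each section via part_index.
    for children in sections.values():
        children.sort(key=lambda c: c["metadata"].get("part_index", 0))
    return dict(sections)
```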

🗺️ Roadmap (v0.1.0+)

sayou-chunking v0.1.0 establishes the structural foundation.

  • HTML Plugin: Applying the "Parent-Child" strategy to HTML DOM trees.
  • Real Semantic Engine: Integrating OpenAI/HuggingFace embeddings into SemanticSplitter.
  • Tokenizer Support: Switching chunk_size calculation from characters to tokens (e.g., tiktoken).

🤝 Contributing

We welcome contributions, whether it's a new Tier 3 Plugin for a specific format or an optimization of the Tier 1 Engine. Please check our contributing guidelines.

📜 License

Apache 2.0 License © 2025 Sayouzone

