Meitei Senter

A lightweight sentence boundary detector for Meitei Mayek (Manipuri) text.


Features

  • 🚀 Lightweight - ~1 MB model, minimal dependencies
  • 🎯 Accurate - 94.71% F-Score on Meitei Mayek text
  • 🔧 Easy to use - Simple Python API and CLI
  • ⚡ Fast - Optimized for quick inference

Installation

pip install meitei-senter

Optional: spaCy Backend (for higher accuracy)

pip install meitei-senter[spacy]
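
To verify the installation, import the package and check the CLI version:

# Verify the installation
python -c "from meitei_senter import MeiteiSentenceSplitter"
meitei-senter --version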

Quick Start

Python API

from meitei_senter import MeiteiSentenceSplitter

# Initialize the splitter
splitter = MeiteiSentenceSplitter()

# Split text into sentences
text = "ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ ꯫ ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫"
sentences = splitter.split_sentences(text)

for i, sent in enumerate(sentences, 1):
    print(f"{i}. {sent}")

Output:

1. ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ ꯫
2. ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫

Command Line

# Interactive mode
meitei-senter --interactive

# Direct text input
meitei-senter --text "ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ ꯫ ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫"

# Show version
meitei-senter --version

Advanced Usage

Using the Convenience Loader

from meitei_senter import load_splitter

# Load with default (delimiter-based) backend
splitter = load_splitter()

# Or with spaCy backend (requires spacy extra)
splitter = load_splitter(use_spacy=True)

sentences = splitter.split_sentences("Your Meitei text here ꯫")
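
For reference, the default delimiter-based backend can be approximated by splitting on the Meitei Mayek sentence-ending mark ꯫ (cheikhei, U+ABEB). The sketch below illustrates the idea only; it is not the package's actual implementation:

import re

def naive_split(text: str) -> list[str]:
    # Split on whitespace that follows a cheikhei (꯫, U+ABEB),
    # keeping the mark attached to its sentence.
    parts = re.split(r"(?<=\uABEB)\s+", text.strip())
    return [p for p in parts if p]

print(naive_split("ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ ꯫ ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫"))
# ['ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ ꯫', 'ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫']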

Using Neural Network Mode

from meitei_senter import MeiteiSentenceSplitter

# Enable neural mode for context-aware splitting
splitter = MeiteiSentenceSplitter(use_neural=True)
sentences = splitter.split_sentences(text)

Direct Callable Interface

from meitei_senter import MeiteiSentenceSplitter

splitter = MeiteiSentenceSplitter()

# Call splitter directly
sentences = splitter("ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ... ꯫ ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫")

With spaCy (Custom Tokenizer)

import spacy
from meitei_senter import MeiteiTokenizer

# Create blank spaCy model with custom tokenizer
nlp = spacy.blank("xx")
nlp.tokenizer = MeiteiTokenizer("path/to/meitei_tokenizer.model", nlp.vocab)

doc = nlp("ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ ꯫")
print([token.text for token in doc])
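
Sentence boundaries can then be added with spaCy's built-in rule-based sentencizer, configured to treat ꯫ as sentence-final punctuation. This uses a standard spaCy component (not part of this package) and assumes the tokenizer emits ꯫ as its own token:

# Rule-based sentence boundaries on the cheikhei mark
nlp.add_pipe("sentencizer", config={"punct_chars": ["꯫"]})

doc = nlp("ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ ꯫ ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫")
print([sent.text for sent in doc.sents])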

📊 Model Details

Feature        Specification
Model Size     ~1 MB
Tokenizer      SentencePiece (Unigram, 8K vocab)
Architecture   CNN (HashEmbedCNN)
F-Score        94.71%
Precision      93.94%
Recall         95.49%
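
As a quick consistency check, the F-Score above is the harmonic mean of the reported precision and recall:

# F = 2PR / (P + R)
precision, recall = 0.9394, 0.9549
f_score = 2 * precision * recall / (precision + recall)
print(f"{f_score:.4f}")  # 0.9471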

📂 Repository Structure

mni_tokenizer/
├── meitei_senter/              # Main package
│   ├── __init__.py             # Package exports
│   ├── cli.py                  # Command-line interface
│   ├── model.py                # PyTorch model & splitter
│   ├── tokenizer.py            # spaCy tokenizer
│   ├── meitei_tokenizer.model  # SentencePiece model
│   ├── meitei_senter.pth       # PyTorch weights
│   └── meitei_senter.json      # Model config
├── pyproject.toml              # Build configuration
└── README.md                   # This file

API Reference

MeiteiSentenceSplitter

Main class for sentence splitting.

MeiteiSentenceSplitter(
    pth_path: str = None,      # Path to PyTorch model
    spm_path: str = None,      # Path to SentencePiece model
    config_path: str = None,   # Path to config JSON
    use_neural: bool = False   # Enable neural network mode
)
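
All three paths are optional; when omitted, the splitter uses the models bundled with the package (see the repository structure above). To point it at custom files instead (the paths below are placeholders, not files shipped with the package):

from meitei_senter import MeiteiSentenceSplitter

# Placeholder paths - substitute your own trained artifacts
splitter = MeiteiSentenceSplitter(
    pth_path="models/meitei_senter.pth",
    spm_path="models/meitei_tokenizer.model",
    config_path="models/meitei_senter.json",
)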

Methods:

Method                  Description
split_sentences(text)   Split text into a list of sentences
tokenize(text)          Tokenize text into SentencePiece pieces and IDs
__call__(text)          Direct callable interface
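
The tokenize method exposes the underlying SentencePiece segmentation. A minimal sketch, assuming it returns the pieces and their IDs as described above (check the docstring for the exact return shape):

from meitei_senter import MeiteiSentenceSplitter

splitter = MeiteiSentenceSplitter()

# Assumed (pieces, ids) return shape, per "pieces and IDs" above
pieces, ids = splitter.tokenize("ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫")
print(pieces)
print(ids)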

MeiteiTokenizer

spaCy-compatible tokenizer using SentencePiece.

MeiteiTokenizer(model_path: str, vocab: spacy.Vocab)

load_splitter

Convenience function to load a pre-configured splitter.

load_splitter(use_spacy: bool = False)

🔧 Development

# Clone repository
git clone https://github.com/Okramjimmy/mni_tokenizer.git
cd mni_tokenizer

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Build package
python -m build

# Upload to PyPI
twine upload dist/*

📜 License

MIT License - see LICENSE for details.


📚 Citation

If you use Meitei Senter in your research, please cite:

@software{meitei_senter,
  author = {Okram Jimmy},
  title = {Meitei Senter: Sentence Boundary Detection for Meitei Mayek},
  year = {2024},
  url = {https://github.com/Okramjimmy/mni_tokenizer}
}

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📧 Contact
