
Meitei Senter

A lightweight sentence boundary detector for Meitei Mayek (Manipuri) text.

PyPI version · License: MIT · Python 3.9+ · Model size ~1 MB · F-Score 94.71%

Features

  • 🚀 Lightweight - Only ~1MB model, minimal dependencies
  • 🎯 Accurate - 94.7% F-Score on Meitei text
  • 🔧 Easy to use - Simple Python API and CLI
  • ⚡ Fast - Optimized for quick inference

Installation

pip install meitei-senter

Optional: spaCy Backend (for higher accuracy)

pip install meitei-senter[spacy]

Quick Start

Python API

from meitei_senter import MeiteiSentenceSplitter

# Initialize the splitter
splitter = MeiteiSentenceSplitter()

# Split text into sentences
text = "ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ ꯫ ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫"
sentences = splitter.split_sentences(text)

for i, sent in enumerate(sentences, 1):
    print(f"{i}. {sent}")

Output:

1. ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ ꯫
2. ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫
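
The default backend is delimiter-based: it splits on the Meitei Mayek full stop "꯫" (cheikhei, U+ABEB). As a rough illustration of the idea, here is a toy sketch; `naive_split` is a hypothetical helper, not part of the package, and the real splitter handles cases this version does not:

```python
def naive_split(text: str) -> list[str]:
    """Toy delimiter-based splitter on the Meitei Mayek full stop (U+ABEB)."""
    sentences, current = [], []
    for ch in text:
        current.append(ch)
        if ch == "\uABEB":  # Meitei Mayek cheikhei (full stop)
            sentences.append("".join(current).strip())
            current = []
    tail = "".join(current).strip()
    if tail:  # keep trailing text that has no final delimiter
        sentences.append(tail)
    return sentences
```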

Command Line

# Interactive mode
meitei-senter --interactive

# Direct text input
meitei-senter --text "ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ ꯫ ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫"

# Show version
meitei-senter --version

Advanced Usage

Using the Convenience Loader

from meitei_senter import load_splitter

# Load with default (delimiter-based) backend
splitter = load_splitter()

# Or with spaCy backend (requires spacy extra)
splitter = load_splitter(use_spacy=True)

sentences = splitter.split_sentences("Your Meitei text here ꯫")

Using Neural Network Mode

from meitei_senter import MeiteiSentenceSplitter

# Enable neural mode for context-aware splitting
splitter = MeiteiSentenceSplitter(use_neural=True)
sentences = splitter.split_sentences(text)

Direct Callable Interface

from meitei_senter import MeiteiSentenceSplitter

splitter = MeiteiSentenceSplitter()

# Call splitter directly
sentences = splitter("ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ... ꯫ ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫")

With spaCy (Custom Tokenizer)

import spacy
import os
from meitei_senter import MeiteiTokenizer, get_model_path

# Get path to bundled model
model_path = os.path.join(get_model_path(), 'meitei_tokenizer.model')

# Create blank spaCy model with custom tokenizer
nlp = spacy.blank("xx")
nlp.tokenizer = MeiteiTokenizer(model_path, nlp.vocab)

doc = nlp("ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ ꯫")
print([token.text for token in doc])
# Output: ['ꯆꯦ', 'ꯔꯣ', 'ꯀꯤ', 'ꯑꯁꯤ', 'ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥ', 'ꯒꯤ', 'ꯁꯍꯔ', 'ꯅꯤ', '꯫']
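
The tokenizer above uses SentencePiece's Unigram model, which chooses a segmentation probabilistically from a learned subword vocabulary. As a rough intuition for subword segmentation in general, here is a greedy longest-match sketch over a toy vocabulary; this is not the Unigram algorithm, and `greedy_subword` is only an illustration:

```python
def greedy_subword(text: str, vocab: set[str]) -> list[str]:
    """Segment text into the longest vocabulary pieces, left to right."""
    pieces, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest match first
            if text[i:j] in vocab:
                pieces.append(text[i:j])
                i = j
                break
        else:
            pieces.append(text[i])  # unknown character: emit it as-is
            i += 1
    return pieces
```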

🌐 REST API Server

Start the FastAPI server for HTTP-based sentence splitting:

Start Server

# Using CLI
meitei-senter-server --port 8000

# Or with uvicorn directly
uvicorn meitei_senter.server:app --host 0.0.0.0 --port 8000

# With auto-reload for development
meitei-senter-server --port 8000 --reload

API Endpoints

Endpoint    Method  Description
/           POST    API info
/health     POST    Health check
/split      POST    Split text into sentences
/tokenize   POST    Tokenize text
/docs       GET     Swagger UI

All functional endpoints accept POST; only the Swagger UI at /docs is served over GET.

Example Requests

POST /split

curl -X POST "http://localhost:8000/split" \
     -H "Content-Type: application/json" \
     -d '{"text": "ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ ꯫ ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ ꯫"}'

Response:

{
  "sentences": ["ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ ꯑꯣꯀ꯭ꯂꯥꯍꯣꯃꯥꯒꯤ ꯁꯍꯔꯅꯤ꯫", "ꯃꯁꯤ ꯌꯥꯝꯅ ꯆꯥꯎꯏ꯫"],
  "count": 2
}
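
The same request can be made from Python with only the standard library. `build_split_request` and `split_remote` are hypothetical helper names, and `split_remote` assumes a server is already running on localhost:8000:

```python
import json
from urllib import request

def build_split_request(text: str,
                        url: str = "http://localhost:8000/split") -> request.Request:
    """Build a POST request with the same JSON body as the curl example."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return request.Request(url, data=payload,
                           headers={"Content-Type": "application/json"},
                           method="POST")

def split_remote(text: str) -> list[str]:
    """Send text to a running meitei-senter server and return its sentences."""
    with request.urlopen(build_split_request(text)) as resp:
        return json.loads(resp.read())["sentences"]
```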

POST /tokenize

curl -X POST "http://localhost:8000/tokenize" \
     -H "Content-Type: application/json" \
     -d '{"text": "ꯆꯦꯔꯣꯀꯤ ꯑꯁꯤ"}'

Response:

{
  "tokens": ["▁ꯆꯦ", "ꯔꯣ", "ꯀꯤ", "▁ꯑꯁꯤ"],
  "token_ids": [460, 390, 42, 3],
  "count": 4
}

POST /health

curl -X POST "http://localhost:8000/health"

Response:

{"status": "ok", "version": "1.1.0", "model_loaded": true}

📊 Model Details

Feature Specification
Model Size ~1 MB
Tokenizer SentencePiece (Unigram, 8K vocab)
Architecture CNN (HashEmbedCNN)
F-Score 94.71%
Precision 93.94%
Recall 95.49%
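
The F-Score is the harmonic mean of the precision and recall above; a quick check confirms the table's numbers are internally consistent:

```python
# Harmonic mean of the reported precision and recall
precision, recall = 0.9394, 0.9549
f1 = 2 * precision * recall / (precision + recall)
print(f"{f1 * 100:.2f}%")  # 94.71%
```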

📂 Repository Structure

mni_tokenizer/
├── meitei_senter/              # Main package
│   ├── __init__.py             # Package exports
│   ├── cli.py                  # Command-line interface
│   ├── model.py                # PyTorch model & splitter
│   ├── tokenizer.py            # spaCy tokenizer
│   ├── meitei_tokenizer.model  # SentencePiece model
│   ├── meitei_senter.pth       # PyTorch weights
│   └── meitei_senter.json      # Model config
├── pyproject.toml              # Build configuration
└── README.md                   # This file

API Reference

MeiteiSentenceSplitter

Main class for sentence splitting.

MeiteiSentenceSplitter(
    pth_path: str = None,      # Path to PyTorch model
    spm_path: str = None,      # Path to SentencePiece model
    config_path: str = None,   # Path to config JSON
    use_neural: bool = False   # Enable neural network mode
)

Methods:

Method Description
split_sentences(text) Split text into list of sentences
tokenize(text) Tokenize text into pieces and IDs
__call__(text) Direct callable interface

MeiteiTokenizer

spaCy-compatible tokenizer using SentencePiece.

MeiteiTokenizer(model_path: str, vocab: spacy.Vocab)

load_splitter

Convenience function to load a pre-configured splitter.

load_splitter(use_spacy: bool = False)

🔧 Development

# Clone repository
git clone https://github.com/Okramjimmy/mni_tokenizer.git
cd mni_tokenizer

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Build package
python -m build

# Upload to PyPI
twine upload dist/*

📜 License

MIT License - see LICENSE for details.


📚 Citation

If you use this in your research, please cite:

@software{meitei_senter,
  author = {Okram Jimmy},
  title = {Meitei Senter: Sentence Boundary Detection for Meitei Mayek},
  year = {2024},
  url = {https://github.com/Okramjimmy/mni_tokenizer}
}

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📧 Contact

Download files


Source Distribution

meitei_senter-1.1.1.tar.gz (1.2 MB)

Uploaded Source

Built Distribution


meitei_senter-1.1.1-py3-none-any.whl (1.2 MB)

Uploaded Python 3

File details

Details for the file meitei_senter-1.1.1.tar.gz.

File metadata

  • Download URL: meitei_senter-1.1.1.tar.gz
  • Upload date:
  • Size: 1.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.9

File hashes

Hashes for meitei_senter-1.1.1.tar.gz
Algorithm Hash digest
SHA256 ec6d020fb6cf3c92d34cdd407a72f1d2a336e37d1609e39bcdb7b7dc100fff6b
MD5 80891486f9341ad920a71044ba8c34a5
BLAKE2b-256 c075f886563027bfcd2af6ef00c6700321f072130006239488e691f2726436eb


File details

Details for the file meitei_senter-1.1.1-py3-none-any.whl.

File metadata

  • Download URL: meitei_senter-1.1.1-py3-none-any.whl
  • Upload date:
  • Size: 1.2 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.9

File hashes

Hashes for meitei_senter-1.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 8dd3f4d12cbd087deaea898217f2669ffbd446900e9b01ba5bde0657c25e249b
MD5 b35e928d14e6435c18cd8db19521214f
BLAKE2b-256 ed5fe84a7377535b56d9d8a2c82aef24e39e716362ad9b61b08a0ef698e0441f

