
Thai Tokenizer

Fast and accurate Thai tokenization library based on supervised BPE, designed for full-text search applications.

Installation

pip3 install thai_tokenizer

Usage

The default set of pairs is optimized for short Thai-English product descriptions.

from thai_tokenizer import Tokenizer
tokenizer = Tokenizer()
tokenizer('iPad Mini 256GB เครื่องไทย') #> 'iPad Mini 256GB เครื่อง ไทย'
tokenizer.split('เครื่องไทย') #> ['เครื่อง', 'ไทย']
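A tokenizer like this is typically the first stage of a full-text search pipeline: Thai text has no spaces between words, so documents must be split into tokens before they can be indexed. The sketch below builds a small inverted index from token lists like those returned by tokenizer.split() above; the library itself is not imported, and the second document and the token data are made up for illustration.

```python
from collections import defaultdict

# Token lists as tokenizer.split() would produce them
# (doc 1 matches the example above; doc 2 is invented for illustration).
docs = {
    1: ['iPad', 'Mini', '256GB', 'เครื่อง', 'ไทย'],
    2: ['iPhone', '15', 'เครื่อง', 'นอก'],
}

# Inverted index: token -> set of document ids containing it.
index = defaultdict(set)
for doc_id, tokens in docs.items():
    for token in tokens:
        index[token].add(doc_id)

def search(query_tokens):
    """Return ids of documents containing all of the query tokens."""
    results = None
    for token in query_tokens:
        hits = index.get(token, set())
        results = hits if results is None else results & hits
    return results or set()

search(['เครื่อง'])          # -> {1, 2}
search(['เครื่อง', 'ไทย'])   # -> {1}
```

Because the tokenizer splits 'เครื่องไทย' into word-level tokens, a query for just 'เครื่อง' can match it, which is the point of word segmentation for search.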

Training

See Training for guidelines on training your own set of pairs.
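The "pairs" behind a BPE-style tokenizer are merge rules: starting from individual characters, adjacent pairs found in the trained pair list are repeatedly merged into longer tokens. The toy sketch below illustrates that mechanism only; it is not the library's actual implementation, and the pair list is made up.

```python
def merge_pairs(chars, pairs):
    """Greedily merge adjacent token pairs found in `pairs` until none apply."""
    tokens = list(chars)
    merged = True
    while merged:
        merged = False
        for i in range(len(tokens) - 1):
            if (tokens[i], tokens[i + 1]) in pairs:
                # Replace the pair with its concatenation and rescan.
                tokens[i:i + 2] = [tokens[i] + tokens[i + 1]]
                merged = True
                break
    return tokens

# Illustrative pair list (not the library's trained data).
pairs = {('a', 'b'), ('ab', 'c')}
merge_pairs('abcd', pairs)  # -> ['abc', 'd']
```

Training, in this framing, is the process of choosing which pairs go into that list so that merges stop at word boundaries appropriate for your domain.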

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

License

MIT

