sphinx-tsegsearch: a Sphinx extension that splits search words with TinySegmenter
A Sphinx extension that tokenizes Japanese query words with TinySegmenter.js.
This extension tweaks searchtools.js in Sphinx-generated HTML documentation so that Japanese compound words in search queries are tokenized before searching.
Because Japanese is an agglutinative language, a search query is often a compound form such as ‘システム標準’ (system standard). Sphinx’s default Japanese index tokenizer (TinySegmenter.py) indexes the segments ‘システム’ and ‘標準’ separately, so the unsegmented compound query fails to match pages containing phrases such as ‘システムの標準’ or ‘標準システム’.
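The effect of segmenting the query can be illustrated with a small sketch. The helper names and the hard-coded segmentation table below are hypothetical, not the extension's actual code; real segmentation is performed by TinySegmenter. The point is that once both the index and the query are split into segments, a compound query matches any page containing all of its segments.

```python
# Hypothetical sketch: why segmenting the query helps matching.
# Real segmentation is done by TinySegmenter; here we fake it with a
# hand-written table covering the example compound.
SEGMENTS = {"システム標準": ["システム", "標準"]}

def segment(text: str) -> list[str]:
    # Stand-in for TinySegmenter: look up known compounds, else keep as-is.
    return SEGMENTS.get(text, [text])

def matches(query: str, page_tokens: set[str]) -> bool:
    # A page matches when every segment of the query is in its index.
    return all(tok in page_tokens for tok in segment(query))

# Index tokens for a page containing the phrase 'システムの標準'.
page = {"システム", "の", "標準"}
print(matches("システム標準", page))  # True: segmented query matches
print("システム標準" in page)          # False: unsegmented query misses
```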
- Add ‘sphinx_tsegsearch’ to the extensions list in conf.py.
- Rebuild the document.
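The conf.py change from the first step might look like this (a minimal sketch; your project will likely list other extensions as well):

```python
# conf.py -- enable the extension alongside any others you already use.
extensions = [
    "sphinx_tsegsearch",  # split Japanese search queries with TinySegmenter
]
```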
| Filename, size | File type | Python version |
| --- | --- | --- |
| sphinx-tsegsearch-1.0.tar.gz (10.8 kB) | Source | None |