
Sphinx extension to split search words with TinySegmenter

Project description

A Sphinx extension that tokenizes Japanese query words with TinySegmenter.js.

This extension tweaks the searchtools.js of Sphinx-generated HTML documents so that Japanese compound words in search queries are tokenized.

Since Japanese is an agglutinative language, a query word for document search often takes a compound form such as ‘システム標準’ (system standard). This makes it hard to find pages containing phrases such as ‘システムの標準’ or ‘標準システム’, because TinySegmenter.py (Sphinx’s default Japanese tokenizer for indexing) indexes ‘システム’ and ‘標準’ as separate tokens.
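
To see the mismatch concretely, here is a minimal sketch using the third-party tinysegmenter package (a Python port of TinySegmenter; it is not bundled with Sphinx, so this is an illustrative assumption):

    # Requires the third-party 'tinysegmenter' package
    # (pip install tinysegmenter); not part of Sphinx itself.
    import tinysegmenter

    segmenter = tinysegmenter.TinySegmenter()

    # Page text is indexed in small units:
    print(segmenter.tokenize('システムの標準'))
    # expected roughly: ['システム', 'の', '標準']

    # An unsplit compound query such as 'システム標準' stays a single
    # token, so it matches none of those index entries.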

sphinx-tsegsearch patches searchtools.js to override the query-tokenization step so that query input is re-tokenized by TinySegmenter.js (the original JavaScript implementation of TinySegmenter). As a result, roughly speaking, this tiny hack improves the recall of Japanese document search at the expense of precision.
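
The mechanics are roughly as follows. This is a hypothetical sketch of how a Sphinx extension can inject the two scripts; the asset file names are illustrative and the actual sphinx_tsegsearch internals may differ:

    # Hypothetical wiring sketch; names are illustrative, not the
    # actual sphinx_tsegsearch source.
    def setup(app):
        # Ship TinySegmenter.js plus a small patch script that overrides
        # the query-splitting step used by searchtools.js.
        # (Copying the assets into the build's _static dir is omitted.)
        app.add_js_file('tinysegmenter.js')    # Sphinx >= 1.8; older versions use add_javascript
        app.add_js_file('tsegsearch_patch.js')
        return {'version': '1.0', 'parallel_read_safe': True}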

Usage:

  1. Add ‘sphinx_tsegsearch’ to the extensions list in conf.py (see the sketch after this list).
  2. Rebuild the documentation.
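
A minimal conf.py, for example:

    # conf.py
    extensions = [
        # ...any extensions you already use...
        'sphinx_tsegsearch',
    ]

Then rebuild the HTML output, for example with make html.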

Release history

This version: 1.0

Download files

Files for sphinx-tsegsearch, version 1.0:

  sphinx-tsegsearch-1.0.tar.gz (10.8 kB, source distribution)
