
Interface package for Japanese tokenization

Project description

What’s this?

This is a simple wrapper for Japanese tokenizers (a.k.a. morphological analyzers).

This project aims to make calling a tokenizer and splitting a sentence into tokens as easy as possible.

The project also supports several tokenization tools, so you can compare their results.

If you find any bugs, please report them to the GitHub issues. Pull requests are also welcome!


Requirements

  • Python 2.7
  • Python 3.5


Features

  • You can get a set of tokens from an input sentence
  • You can filter tokens by Part-of-Speech conditions or stopwords
  • You can add an extension dictionary such as the mecab-neologd dictionary
  • You can define your own dictionary, and this dictionary forces Mecab to treat an entry as one token
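The user-dictionary feature above maps onto Mecab's CSV dictionary format. As a sketch, a single entry looks like the line below (the entry itself is a made-up example; the column layout follows MeCab's documentation: surface form, left/right context IDs, cost, then the ipadic feature columns):

```csv
テヘラン州,,,10,名詞,固有名詞,地域,一般,*,*,テヘラン州,テヘランシュウ,テヘランシュウ
```

Leaving the context-ID columns empty lets MeCab estimate them automatically when the CSV is compiled with the mecab-dict-index tool; a lower cost makes the entry more likely to be chosen as one token.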

Supported Tokenization tools


Mecab

Mecab is an open-source tokenization system for various languages (if you have a dictionary for them).

See the English documentation for details.


Juman

Juman is a tokenization tool developed by the Kurohashi laboratory at Kyoto University, Japan.

Juman is robust against ambiguous writing styles in Japanese, and handles newly coined words well thanks to its huge Web-based dictionary.

In addition, Juman tells you the semantic class of words.


Kytea

Kytea is a tokenization tool developed by Graham Neubig.

Kytea uses a different algorithm from those of Mecab and Juman.

Setting up


Mecab

See here to install the MeCab system.

Mecab Neologd dictionary

The Mecab-neologd dictionary is a dictionary extension based on the ipadic dictionary, which is the basic dictionary of Mecab.

With the Mecab-neologd dictionary, you can make new-coming words into single tokens.

Here, new-coming words are things such as movie actor names or company names.

See here to install the mecab-neologd dictionary.


Juman

wget -O juman7.0.1.tar.bz2 ""
bzip2 -dc juman7.0.1.tar.bz2 | tar xvf -
cd juman-7.01
./configure
make
[sudo] make install


Kytea

Install the Kytea system:

tar -xvf kytea-0.4.7.tar
cd kytea-0.4.7
./configure
make
make install

Kytea has a Python wrapper thanks to Michiaki Ariga. Install the Kytea Python wrapper:

pip install kytea

Or install it from source:

git clone
[sudo] make install


Install this package

[sudo] python setup.py install


Tokenization example (for Python 2.x; to see example code for Python 3.x, please see here)

# the wrapper class is imported from the package (top-level import assumed)
from JapaneseTokenizer import MecabWrapper

# input is `unicode` type (in python2x)
sentence = u'テヘラン(ペルシア語: تهران  ; Tehrān Tehran.ogg 発音[ヘルプ/ファイル]/teɦˈrɔːn/、英語:Tehran)は、西アジア、イランの首都でありかつテヘラン州の州都。人口12,223,598人。都市圏人口は13,413,348人に達する。'

# make MecabWrapper object
# path_mecab_config is the path where the `mecab-config` command exists.
# You can check it with `which mecab-config`; the default value is '/usr/local/bin'
path_mecab_config = '/usr/local/bin'

# you can choose from "neologd", "all", "ipadic", "user", ""
# "ipadic" and "" are equivalent
dictType = ""

mecab_wrapper = MecabWrapper(dictType=dictType, path_mecab_config=path_mecab_config)

# tokenize the sentence. The returned object is a list of tuples
tokenized_obj = mecab_wrapper.tokenize(sentence=sentence)
assert isinstance(tokenized_obj, list)

# the returned object is the "TokenizedSenetence" class (class name as spelled
# in the package) if you pass return_list=False
tokenized_obj = mecab_wrapper.tokenize(sentence=sentence, return_list=False)

Filtering example

stopwords = [u'テヘラン']
assert isinstance(tokenized_obj, TokenizedSenetence)
# the returned object is the "FilteredObject" class
# (the keyword-argument names below are assumed from context)
filtered_obj = mecab_wrapper.filter(
    parsed_sentence=tokenized_obj,
    stopwords=stopwords
)
assert isinstance(filtered_obj, FilteredObject)

# a pos condition is a list of tuples
# You can set POS conditions from the "ChaSen 品詞体系 (IPA品詞体系)" table of this page
pos_condition = [(u'名詞', u'固有名詞'), (u'動詞', u'自立')]
filtered_obj = mecab_wrapper.filter(
    parsed_sentence=tokenized_obj,
    pos_condition=pos_condition
)
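To make the semantics concrete, here is a minimal pure-Python sketch of what stopword and POS-condition filtering do. This is only an illustration, not the package's actual implementation; the token shape of (surface, POS tuple) pairs is an assumption for this sketch.

```python
# Conceptual sketch (NOT the package's implementation): filtering
# (surface, POS-tuple) tokens by stopwords and by POS conditions.
tokens = [
    (u'テヘラン', (u'名詞', u'固有名詞')),
    (u'は', (u'助詞', u'係助詞')),
    (u'イラン', (u'名詞', u'固有名詞')),
    (u'の', (u'助詞', u'連体化')),
    (u'首都', (u'名詞', u'一般')),
]

stopwords = [u'テヘラン']
pos_condition = [(u'名詞', u'固有名詞')]

# drop tokens whose surface form is a stopword
without_stopwords = [(s, p) for s, p in tokens if s not in stopwords]

# keep only tokens whose POS matches one of the conditions
only_proper_nouns = [(s, p) for s, p in tokens if p in pos_condition]

print([s for s, _ in without_stopwords])   # → [は, イラン, の, 首都]
print([s for s, _ in only_proper_nouns])   # → [テヘラン, イラン]
```

Stopword filtering removes tokens by their surface form, while a POS condition keeps tokens by their part-of-speech hierarchy, which is why the two are offered as separate filter parameters.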

Similar Package


natto-py

natto-py is a sophisticated package for tokenization. It supports the following features:

  • an easy interface for tokenization
  • importing an additional dictionary
  • a partial parsing mode



  • first release to PyPI


  • Juman support (only for Python 2.x)
  • Kytea support (only for Python 2.x)

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Filename & size File type Python version
JapaneseTokenizer-0.7.tar.gz (16.0 kB) Source None
