A Japanese tokenizer based on recurrent neural networks
Project description
Nagisa is a Python module for Japanese word segmentation/POS-tagging.
It is designed to be a simple and easy-to-use tool.
This tool has the following features:
- Based on recurrent neural networks.
- The word segmentation model uses character- and word-level features [池田+].
- The POS-tagging model uses tag dictionary information [Inoue+].
For more details refer to the following links.
Installation
Python 2.7.x or 3.5+ is required.
This tool uses DyNet (the Dynamic Neural Network Toolkit) to compute its neural networks.
You can install nagisa by using the following command.
pip install nagisa
For Windows users, please run it with Python 3.6+ (64-bit).
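A quick way to confirm the installation succeeded is to import the module and tag a short string. This is a minimal sketch; the sample sentence is arbitrary.
# Installation check: tag a short sentence with the bundled model.
import nagisa

words = nagisa.tagging('こんにちは世界')
print(words.words)    # surface forms
print(words.postags)  # POS tags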
Basic usage
A sample of word segmentation and POS-tagging for Japanese.
import nagisa
text = 'Pythonで簡単に使えるツールです'
words = nagisa.tagging(text)
print(words)
#=> Python/名詞 で/助詞 簡単/形状詞 に/助動詞 使える/動詞 ツール/名詞 です/助動詞
# Get a list of words
print(words.words)
#=> ['Python', 'で', '簡単', 'に', '使える', 'ツール', 'です']
# Get a list of POS-tags
print(words.postags)
#=> ['名詞', '助詞', '形状詞', '助動詞', '動詞', '名詞', '助動詞']
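Because words.words and words.postags are parallel lists, you can zip them to iterate over (word, tag) pairs. A minimal sketch using only the attributes shown above:
# Iterate over (word, POS-tag) pairs.
for word, postag in zip(words.words, words.postags):
    print(word, postag)
#=> Python 名詞
#=> で 助詞
#=> ...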
Post-processing functions
Filter and extract words by specific POS tags.
# Filter out words with the specified POS tags.
words = nagisa.filter(text, filter_postags=['助詞', '助動詞'])
print(words)
#=> Python/名詞 簡単/形状詞 使える/動詞 ツール/名詞
# Extract only nouns.
words = nagisa.extract(text, extract_postags=['名詞'])
print(words)
#=> Python/名詞 ツール/名詞
# This is a list of available POS-tags in nagisa.
print(nagisa.tagger.postags)
#=> ['補助記号', '名詞', ... , 'URL']
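As a small example of combining these helpers with the standard library, you can count noun frequencies with collections.Counter. This is a sketch; it assumes the object returned by nagisa.extract exposes the same .words attribute as the result of nagisa.tagging.
# Count noun frequencies in a text (assumes .words is available on the result).
from collections import Counter
import nagisa

text = 'Pythonで簡単に使えるツールです'
nouns = nagisa.extract(text, extract_postags=['名詞']).words
print(Counter(nouns))
#=> Counter({'Python': 1, 'ツール': 1})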
Add a user dictionary in an easy way.
# default
text = "3月に見た「3月のライオン」"
print(nagisa.tagging(text))
#=> 3/名詞 月/名詞 に/助詞 見/動詞 た/助動詞 「/補助記号 3/名詞 月/名詞 の/助詞 ライオン/名詞 」/補助記号
# If a word ("3月のライオン") is included in the single_word_list, it is recognized as a single word.
new_tagger = nagisa.Tagger(single_word_list=['3月のライオン'])
print(new_tagger.tagging(text))
#=> 3/名詞 月/名詞 に/助詞 見/動詞 た/助動詞 「/補助記号 3月のライオン/名詞 」/補助記号
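If you maintain many entries, one convenient pattern is to keep them in a plain text file (one word per line) and load it when building the Tagger. This is a minimal sketch; user_dict.txt is a hypothetical file name.
# Build a tagger from a hypothetical user dictionary file (one word per line).
with open('user_dict.txt', encoding='utf-8') as f:
    user_words = [line.strip() for line in f if line.strip()]

custom_tagger = nagisa.Tagger(single_word_list=user_words)
print(custom_tagger.tagging(text))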
Train a model
Nagisa (v0.2.0+) provides a simple train method for a joint word segmentation and sequence labeling (e.g., POS-tagging, NER) model.
The train/dev/test files are in TSV format.
Each line contains a word and its tag, separated by a tab (word \t tag).
Note that an EOS line is placed between sentences.
$ cat sample.train
唯一	NOUN
の	ADP
趣味	NOUN
は	ADP
料理	NOUN
EOS
とても	ADV
おいしかっ	ADJ
た	AUX
です	AUX
。	PUNCT
EOS
ドル	NOUN
は	ADP
主要	ADJ
通貨	NOUN
EOS
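If your corpus is already tokenized in memory, writing it out in this format takes only a few lines of Python. This is a sketch; sentences is a hypothetical list of (word, tag) sequences.
# Write sentences of (word, tag) pairs in the word \t tag / EOS format.
sentences = [
    [('唯一', 'NOUN'), ('の', 'ADP'), ('趣味', 'NOUN'), ('は', 'ADP'), ('料理', 'NOUN')],
]
with open('sample.train', 'w', encoding='utf-8') as f:
    for sentence in sentences:
        for word, tag in sentence:
            f.write(word + '\t' + tag + '\n')
        f.write('EOS\n')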
# After training finishes, the three model files (*.vocabs, *.params, *.hp) are saved.
nagisa.fit(train_file="sample.train", dev_file="sample.dev", test_file="sample.test", model_name="sample")
# Build the tagger by loading the trained model files.
sample_tagger = nagisa.Tagger(vocabs='sample.vocabs', params='sample.params', hp='sample.hp')
text = "福岡・博多の観光情報"
words = sample_tagger.tagging(text)
print(words)
#=> 福岡/PROPN ・/SYM 博多/PROPN の/ADP 観光/NOUN 情報/NOUN
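To sanity-check the trained model outside of nagisa.fit, you can re-tag each test sentence and compare the predicted POS tags against the gold ones. This is a rough sketch, assuming sample.test uses the same word \t tag / EOS format and that sentences with segmentation mismatches are simply skipped.
# Rough accuracy check: re-tag test sentences and compare POS tags.
def read_sentences(path):
    sentences, current = [], []
    with open(path, encoding='utf-8') as f:
        for line in f:
            line = line.rstrip('\n')
            if line == 'EOS':
                if current:
                    sentences.append(current)
                current = []
            elif line:
                word, tag = line.split('\t')
                current.append((word, tag))
    return sentences

correct = total = 0
for sentence in read_sentences('sample.test'):
    gold_words = [word for word, _ in sentence]
    predicted = sample_tagger.tagging(''.join(gold_words))
    if predicted.words == gold_words:  # only score exact segmentation matches
        for (_, gold_tag), pred_tag in zip(sentence, predicted.postags):
            total += 1
            correct += int(gold_tag == pred_tag)
print(correct, '/', total, 'tags correct')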