
UITNLP: A Python NLP Library for Vietnamese

Project description

Installation

You can install this package from PyPI using pip:

$ pip install uit-tokenizer

Example

#!/usr/bin/python
# -*- coding: utf-8 -*-
from uit_tokenizer import load_word_segmenter

# Load the word segmenter with the feature set published by the package.
word_segmenter = load_word_segmenter(feature_name='base_sep_sfx')

# Segment a batch of raw sentences (pre_tokenized=False means plain text input).
print(word_segmenter.segment(
    texts=['Chào mừng bạn đến với Trường Đại học Công nghệ Thông tin, ĐHQG-HCM.'],
    pre_tokenized=False,
    batch_size=4,
))

Note

Currently, this package wraps only the Vietnamese word segmentation method published in the following paper of ours:

@InProceedings{10.1007/978-981-15-6168-9_33,
  author    = "Nguyen, Duc-Vu and Van Thin, Dang and Van Nguyen, Kiet and Nguyen, Ngan Luu-Thuy",
  editor    = "Nguyen, Le-Minh and Phan, Xuan-Hieu and Hasida, K{\^o}iti and Tojo, Satoshi",
  title     = "Vietnamese Word Segmentation with SVM: Ambiguity Reduction and Suffix Capture",
  booktitle = "Computational Linguistics",
  year      = "2020",
  publisher = "Springer Singapore",
  address   = "Singapore",
  pages     = "400--413",
  abstract  = "In this paper, we approach Vietnamese word segmentation as a binary classification by using the Support Vector Machine classifier. We inherit features from prior works such as n-gram of syllables, n-gram of syllable types, and checking conjunction of adjacent syllables in the dictionary. We propose two novel ways to feature extraction, one to reduce the overlap ambiguity and the other to increase the ability to predict unknown words containing suffixes. Different from UETsegmenter and RDRsegmenter, two state-of-the-art Vietnamese word segmentation methods, we do not employ the longest matching algorithm as an initial processing step or any post-processing technique. According to experimental results on benchmark Vietnamese datasets, our proposed method obtained a better F1-score than the prior state-of-the-art methods UETsegmenter and RDRsegmenter.",
  isbn      = "978-981-15-6168-9"
}
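
The abstract above frames word segmentation as a binary classification over each gap between adjacent syllables: join (word-internal) or split (word boundary). As a rough, minimal sketch of that framing only — the tiny dictionary and the `join_decision` function below are invented stand-ins for the paper's SVM classifier and are not part of uit_tokenizer:

```python
# Toy illustration: Vietnamese word segmentation as per-gap binary decisions.
# The real method trains an SVM over n-gram features; here a tiny invented
# dictionary of two-syllable words stands in for the classifier.

TOY_DICT = {('sinh', 'viên'), ('đại', 'học')}  # hypothetical lexicon

def join_decision(left: str, right: str) -> bool:
    """Stand-in classifier: join two syllables if the pair is in the lexicon."""
    return (left.lower(), right.lower()) in TOY_DICT

def toy_segment(syllables):
    """Apply the per-gap decisions left to right, joining syllables with '_'
    (the usual output convention for Vietnamese word segmenters)."""
    words = [syllables[0]]
    for syl in syllables[1:]:
        last_syllable = words[-1].split('_')[-1]
        if join_decision(last_syllable, syl):
            words[-1] += '_' + syl
        else:
            words.append(syl)
    return words

print(toy_segment(['sinh', 'viên', 'đại', 'học', 'giỏi']))
# → ['sinh_viên', 'đại_học', 'giỏi']
```

The paper's contribution is precisely in how `join_decision` is learned (SVM with ambiguity-reducing and suffix-capturing features) rather than looked up in a dictionary.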

Release history

This version: 1.0

Download files

Download the file for your platform.

Source Distribution

uit_tokenizer-1.0.tar.gz (16.2 kB)

Built Distribution

uit_tokenizer-1.0-py3-none-any.whl (15.1 kB)

File details

Details for the file uit_tokenizer-1.0.tar.gz.

File metadata

  • Download URL: uit_tokenizer-1.0.tar.gz
  • Size: 16.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for uit_tokenizer-1.0.tar.gz
  • SHA256: c44e75defac36e9fbf96a911410c32df04435a81833fad98423db8db222f84c4
  • MD5: 0dba6ec3281bac9fb7c8e0b76c7087ef
  • BLAKE2b-256: 318c7dcb20d7631d1fd75f952dcfd6b13d9bbe538c5e94d59a6d05f072ecba53

File details

Details for the file uit_tokenizer-1.0-py3-none-any.whl.

File metadata

  • Download URL: uit_tokenizer-1.0-py3-none-any.whl
  • Size: 15.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for uit_tokenizer-1.0-py3-none-any.whl
  • SHA256: 9bc90700241f19678fadb48da291516309f70c641ac66cc7a8e882149e26b4b7
  • MD5: 63b882c86d2f2909c3b9c6ec3ba09ef0
  • BLAKE2b-256: d33363accff6418ff520c247204ff654883a337b2aef459f53914c5abe2566b7
