
Soe Vinorm - Vietnamese Text Normalization Toolkit

Soe Vinorm is an effective and extensible toolkit for Vietnamese text normalization, designed for use in Text-to-Speech (TTS) and NLP pipelines. It detects and expands non-standard words (NSWs) such as numbers, dates, abbreviations, and more, converting them into their spoken forms. This project is based on the paper Non-Standard Vietnamese Word Detection and Normalization for Text-to-Speech.

Installation

Option 1: Clone the repository (for development)

# Clone the repository
git clone https://github.com/vinhdq842/soe-vinorm.git
cd soe-vinorm

# Install dependencies including development dependencies (using uv)
uv sync --dev

Option 2: Install from PyPI

# Install using uv
uv add soe-vinorm

# Or using pip
pip install soe-vinorm

Option 3: Install from source

# Install directly from GitHub
uv pip install git+https://github.com/vinhdq842/soe-vinorm.git

Usage

from soe_vinorm import SoeNormalizer

normalizer = SoeNormalizer()
text = 'Từ năm 2021 đến nay, đây là lần thứ 3 Bộ Công an xây dựng thông tư để quy định liên quan đến mẫu hộ chiếu, giấy thông hành.'

result = normalizer.normalize(text)
print(result)
# Output: Từ năm hai nghìn không trăm hai mươi mốt đến nay , đây là lần thứ ba Bộ Công an xây dựng thông tư để quy định liên quan đến mẫu hộ chiếu , giấy thông hành .

Quick function usage

from soe_vinorm import normalize_text

text = "1kg dâu 25 quả, giá 700.000 - Trung bình 30.000đ/quả"
result = normalize_text(text)
print(result)
# Output: một ki lô gam dâu hai mươi lăm quả , giá bảy trăm nghìn - Trung bình ba mươi nghìn đồng trên quả

Batch processing

from soe_vinorm import batch_normalize_texts

texts = [
    "Tôi có 123.456 đồng trong tài khoản",
    "ĐT Việt Nam giành HCV tại SEA Games 32",
    "Nhiệt độ hôm nay là 25°C, ngày 25/04/2014",
    "Tốc độ xe đạt 60km/h trên quãng đường 150km"
]

# Process multiple texts in parallel (4 worker processes)
results = batch_normalize_texts(texts, n_jobs=4)

for original, normalized in zip(texts, results):
    print(f"Original: {original}")
    print(f"Normalized: {normalized}")
    print("-" * 50)

Output:

Original: Tôi có 123.456 đồng trong tài khoản
Normalized: Tôi có một trăm hai mươi ba nghìn bốn trăm năm mươi sáu đồng trong tài khoản
--------------------------------------------------
Original: ĐT Việt Nam giành HCV tại SEA Games 32
Normalized: đội tuyển Việt Nam giành Huy chương vàng tại SEA Games ba mươi hai
--------------------------------------------------
Original: Nhiệt độ hôm nay là 25°C, ngày 25/04/2014
Normalized: Nhiệt độ hôm nay là hai mươi lăm độ xê , ngày hai mươi lăm tháng bốn năm hai nghìn không trăm mười bốn
--------------------------------------------------
Original: Tốc độ xe đạt 60km/h trên quãng đường 150km
Normalized: Tốc độ xe đạt sáu mươi ki lô mét trên giờ trên quãng đường một trăm năm mươi ki lô mét
--------------------------------------------------

Approach: Two-stage normalization

Preprocessing & tokenizing

  • Extra spaces, ASCII art, emojis, HTML entities, unspoken words, etc. are removed.
  • A regex-based tokenizer then splits the sentence into tokens.
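The tokenizing step can be sketched roughly as follows. This is an illustrative regex, not the toolkit's actual pattern; the key idea is that NSW candidates such as dates and numbers with Vietnamese thousand separators are kept whole as single tokens so the tagger can label them later.

```python
import re

# Illustrative token pattern: earlier alternatives win, so dates and
# numbers are matched before plain words and punctuation.
TOKEN_PATTERN = re.compile(
    r"\d{1,2}/\d{1,2}/\d{4}"      # dates like 25/04/2014
    r"|\d+(?:\.\d{3})*(?:,\d+)?"  # numbers like 123.456 or 700.000
    r"|\w+"                       # words (includes Vietnamese letters)
    r"|[^\w\s]"                   # punctuation, one symbol per token
)

def tokenize(sentence: str) -> list[str]:
    """Split a sentence into tokens, keeping NSW candidates intact."""
    return TOKEN_PATTERN.findall(sentence)
```

With this pattern, `tokenize("ngày 25/04/2014")` yields the date as one token rather than five, which is what Stage 1 needs to tag it as a single NSW.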

Stage 1: Non-standard word detection

  • A sequence tagger extracts non-standard words (NSWs) and categorizes them into one of 18 types.
  • These NSWs can then be verbalized properly according to their types.
  • The sequence tagger can be any sequence labeling model; this implementation uses a Conditional Random Field (CRF) due to the scarcity of training data.
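A CRF tagger operates on per-token feature dictionaries. The sketch below shows the kind of features such a tagger might consume; the feature names are illustrative, not the toolkit's actual configuration.

```python
# Illustrative per-token features for a CRF-style NSW tagger.
# Surface cues (digits, casing, slashes) plus a small context window
# are typically enough to separate dates, numbers, and abbreviations.
def token_features(tokens: list[str], i: int) -> dict:
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "has_digit": any(c.isdigit() for c in tok),
        "is_upper": tok.isupper(),  # abbreviations like HCV, ĐT
        "has_slash": "/" in tok,    # dates, rates like 60km/h
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

tokens = ["ngày", "25/04/2014", ",", "giá", "700.000", "đồng"]
features = [token_features(tokens, i) for i in range(len(tokens))]
```

A CRF trained on sequences of such feature dictionaries (e.g. with python-crfsuite) would emit one NSW-type label per token, which Stage 2 then consumes.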

Stage 2: Non-standard word normalization

  • With the NSWs detected in Stage 1 and their respective types, regex-based expanders are applied to produce the normalized results.
  • Each NSW type has its own dedicated expander.
  • The normalized results are then inserted into the original sentence, resulting in the desired normalized sentence.
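As a minimal sketch of one such dedicated expander, the function below reads a two-digit number aloud in Vietnamese, including the "mốt"/"lăm" irregularities visible in the example outputs above. It is an illustration of the expander idea, not the toolkit's actual implementation.

```python
# Vietnamese digit names; index matches the digit value.
DIGITS = ["không", "một", "hai", "ba", "bốn", "năm", "sáu", "bảy", "tám", "chín"]

def expand_two_digit(n: int) -> str:
    """Verbalize 0-99 in Vietnamese, handling irregular unit digits."""
    if n < 10:
        return DIGITS[n]
    tens, units = divmod(n, 10)
    head = "mười" if tens == 1 else f"{DIGITS[tens]} mươi"
    if units == 0:
        return head
    if units == 1 and tens > 1:
        return f"{head} mốt"  # 21 -> "hai mươi mốt", not "hai mươi một"
    if units == 5:
        return f"{head} lăm"  # 25 -> "hai mươi lăm", not "hai mươi năm"
    return f"{head} {DIGITS[units]}"
```

A full NUMBER expander generalizes this to thousands and millions; other NSW types (dates, units, phone numbers) each get their own analogous routine.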

Minor details

  • Foreign NSWs are currently kept as-is.
  • Abbreviation NSWs are expanded using a language model (e.g. BERT) combined with a Vietnamese abbreviation dictionary.
  • ...
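The dictionary-plus-language-model idea can be sketched as follows: look up the abbreviation's candidate expansions, substitute each into the sentence, and keep the candidate the language model scores as most fluent. The dictionary entries and the scorer below are illustrative stand-ins; the toolkit uses a BERT model for the scoring step.

```python
# Toy abbreviation dictionary: each entry maps to candidate expansions.
ABBREV_DICT = {
    "ĐT": ["đội tuyển", "điện thoại"],
    "HCV": ["huy chương vàng"],
}

def fluency_score(sentence: str) -> float:
    # Stand-in for a BERT-based fluency score; this toy heuristic just
    # rewards a sports-context collocation ("giành" + "đội tuyển").
    return float("giành" in sentence and "đội tuyển" in sentence)

def expand_abbrev(tokens: list[str], i: int) -> str:
    """Pick the candidate expansion scoring best in sentence context."""
    candidates = ABBREV_DICT.get(tokens[i], [tokens[i]])
    return max(
        candidates,
        key=lambda c: fluency_score(" ".join(tokens[:i] + [c] + tokens[i + 1:])),
    )

tokens = ["ĐT", "Việt", "Nam", "giành", "HCV"]
```

Here "ĐT" is ambiguous between "đội tuyển" (national team) and "điện thoại" (telephone); scoring the full sentence disambiguates it in favor of the sports reading.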

Testing

Run all tests with:

pytest tests

License

MIT License
