
An effective text normalization tool for Vietnamese


Soe Vinorm - Vietnamese Text Normalization Toolkit

Soe Vinorm is an effective and extensible toolkit for Vietnamese text normalization, designed for use in Text-to-Speech (TTS) and NLP pipelines. It detects non-standard words (NSWs) such as numbers, dates, and abbreviations, and expands them into their spoken forms. The project is based on the paper "Non-Standard Vietnamese Word Detection and Normalization for Text-to-Speech".

Installation

Option 1: Clone the repository (for development)

# Clone the repository
git clone https://github.com/vinhdq842/soe-vinorm.git
cd soe-vinorm

# Install dependencies including development dependencies (using uv)
uv sync --dev

Option 2: Install from PyPI

# Install using uv
uv pip install soe-vinorm

# Or using pip
pip install soe-vinorm

Option 3: Install from source

# Install directly from GitHub
uv pip install git+https://github.com/vinhdq842/soe-vinorm.git

Usage

from soe_vinorm import SoeNormalizer

normalizer = SoeNormalizer()
text = 'Từ năm 2021 đến nay, đây là lần thứ 3 Bộ Công an xây dựng thông tư để quy định liên quan đến mẫu hộ chiếu, giấy thông hành.'

result = normalizer.normalize(text)
print(result)
# Output: Từ năm hai nghìn không trăm hai mươi mốt đến nay , đây là lần thứ ba Bộ Công an xây dựng thông tư để quy định liên quan đến mẫu hộ chiếu , giấy thông hành .

Quick function usage

from soe_vinorm import normalize_text

text = "1kg dâu 25 quả, giá 700.000 - Trung bình 30.000đ/quả"
result = normalize_text(text)
print(result)
# Output: một ki lô gam dâu hai mươi lăm quả , giá bảy trăm nghìn - Trung bình ba mươi nghìn đồng trên quả

Batch processing

from soe_vinorm import batch_normalize_texts

texts = [
    "Tôi có 123.456 đồng trong tài khoản",
    "ĐT Việt Nam giành HCV tại SEA Games 32",
    "Nhiệt độ hôm nay là 25°C, ngày 25/04/2014",
    "Tốc độ xe đạt 60km/h trên quãng đường 150km"
]

# Process multiple texts in parallel (4 worker processes)
results = batch_normalize_texts(texts, n_jobs=4)

for original, normalized in zip(texts, results):
    print(f"Original: {original}")
    print(f"Normalized: {normalized}")
    print("-" * 50)

Output:

Original: Tôi có 123.456 đồng trong tài khoản
Normalized: Tôi có một trăm hai mươi ba nghìn bốn trăm năm mươi sáu đồng trong tài khoản
--------------------------------------------------
Original: ĐT Việt Nam giành HCV tại SEA Games 32
Normalized: đội tuyển Việt Nam giành Huy chương vàng tại SEA Games ba mươi hai
--------------------------------------------------
Original: Nhiệt độ hôm nay là 25°C, ngày 25/04/2014
Normalized: Nhiệt độ hôm nay là hai mươi lăm độ xê , ngày hai mươi lăm tháng bốn năm hai nghìn không trăm mười bốn
--------------------------------------------------
Original: Tốc độ xe đạt 60km/h trên quãng đường 150km
Normalized: Tốc độ xe đạt sáu mươi ki lô mét trên giờ trên quãng đường một trăm năm mươi ki lô mét
--------------------------------------------------

Approach: Two-stage normalization

Preprocessing & tokenizing

  • Extra spaces, ASCII art, emojis, HTML entities, unspoken words, etc. are removed.
  • A regex-based tokenizer then splits the sentence into tokens.
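
The tokenizing step above can be sketched roughly as follows. This is an illustrative stand-in, not the toolkit's actual tokenizer; the pattern and its groups are assumptions:

```python
import re

# Illustrative regex-based tokenizer: collapse extra whitespace, then pull
# out numbers (with date/thousand separators), Unicode words, and single
# punctuation marks. Not the toolkit's real implementation.
TOKEN_PATTERN = re.compile(
    r"\d+(?:[./,:]\d+)*"  # numbers, dates, times (e.g. 25/04/2014, 700.000)
    r"|\w+"               # words; \w is Unicode-aware, so Vietnamese letters match
    r"|[^\w\s]"           # any single punctuation or symbol
)

def tokenize(sentence: str) -> list[str]:
    sentence = re.sub(r"\s+", " ", sentence).strip()  # remove extra spaces
    return TOKEN_PATTERN.findall(sentence)

print(tokenize("Nhiệt độ hôm nay là 25°C, ngày 25/04/2014"))
# ['Nhiệt', 'độ', 'hôm', 'nay', 'là', '25', '°', 'C', ',', 'ngày', '25/04/2014']
```

Keeping `25/04/2014` as a single token matters: the Stage 1 tagger can then label it as one date NSW instead of five fragments.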

Stage 1: Non-standard word detection

  • A sequence tagger extracts non-standard words (NSWs) and categorizes them into different types (18 in total).
  • These NSWs can then be verbalized properly according to their types.
  • The sequence tagger can be any sequence labeling model; this implementation uses a Conditional Random Field (CRF) due to the limited training data.
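
To make the tagging step concrete, here is a hypothetical sketch: each token is turned into a feature dictionary of the kind a CRF consumes, and a toy rule-based tagger stands in for the trained model. The label names (NUMBER, DATE, ABBR, O) are illustrative, not the toolkit's actual 18-type tag set:

```python
import re

def token_features(tokens: list[str], i: int) -> dict:
    """Features for token i that a CRF tagger could consume (illustrative)."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "has_digit": any(c.isdigit() for c in tok),
        "is_upper": tok.isupper(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

def toy_tag(tokens: list[str]) -> list[str]:
    """Rule-based stand-in for the trained CRF, for illustration only."""
    labels = []
    for tok in tokens:
        if re.fullmatch(r"\d{1,2}/\d{1,2}/\d{4}", tok):
            labels.append("DATE")
        elif re.fullmatch(r"\d+(?:\.\d{3})*", tok):
            labels.append("NUMBER")
        elif tok.isupper() and len(tok) >= 2:
            labels.append("ABBR")   # all-caps tokens like ĐT, HCV
        else:
            labels.append("O")      # standard word, left untouched
    return labels

tokens = ["ĐT", "Việt", "Nam", "giành", "HCV", "ngày", "25/04/2014"]
print(list(zip(tokens, toy_tag(tokens))))
```

A real CRF replaces the hand-written rules with weights learned over exactly these kinds of features, which is why it can work with relatively little data.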

Stage 2: Non-standard word normalization

  • Given the NSWs detected in Stage 1 and their respective types, regex-based expanders are applied to produce the normalized forms.
  • Each NSW type has its own dedicated expander.
  • The expanded forms are then substituted back into the original sentence, yielding the final normalized sentence.
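
As a sketch of what one dedicated expander might look like, the toy number expander below verbalizes integers under 100 in Vietnamese, including the "mốt"/"lăm" irregularities visible in the example outputs above. It is an assumption about the general shape of such an expander, not the toolkit's code:

```python
DIGITS = ["không", "một", "hai", "ba", "bốn", "năm", "sáu", "bảy", "tám", "chín"]

def expand_number(n: int) -> str:
    """Toy expander for 0 <= n < 100; the real toolkit covers far more."""
    if n < 10:
        return DIGITS[n]
    tens, units = divmod(n, 10)
    parts = [DIGITS[tens], "mươi"] if tens > 1 else ["mười"]
    if units == 1 and tens > 1:
        parts.append("mốt")        # 21, 31, ... use "mốt", not "một"
    elif units == 5:
        parts.append("lăm")        # 15, 25, ... use "lăm", not "năm"
    elif units > 0:
        parts.append(DIGITS[units])
    return " ".join(parts)

print(expand_number(25))  # hai mươi lăm
print(expand_number(21))  # hai mươi mốt
print(expand_number(60))  # sáu mươi
```

These outputs match the spoken forms in the usage examples above ("hai mươi lăm quả", "sáu mươi ki lô mét").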

Minor details

  • Foreign NSWs are currently kept as is.
  • Abbreviation NSWs are expanded using a language model (e.g. BERT) combined with a Vietnamese abbreviation dictionary.
  • ...
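
The abbreviation-expansion idea can be sketched as follows. A dictionary maps each abbreviation to candidate expansions; in the real pipeline a language model scores the candidates in context, while this stand-in simply takes the first one. The dictionary entries and the function name are illustrative:

```python
# Illustrative abbreviation dictionary; the real one is much larger.
ABBR_DICT = {
    "ĐT": ["đội tuyển", "điện thoại"],   # ambiguous: needs context to choose
    "HCV": ["huy chương vàng"],
}

def expand_abbreviation(token: str, context: list[str]) -> str:
    candidates = ABBR_DICT.get(token)
    if not candidates:
        return token  # unknown abbreviation: keep as is
    # Placeholder for LM scoring: a masked-LM (e.g. BERT) would rank the
    # candidates by how well they fit `context`; here we take the first.
    return candidates[0]

print(expand_abbreviation("ĐT", ["giành", "HCV"]))  # đội tuyển
print(expand_abbreviation("XYZ", []))               # XYZ (kept as is)
```

The ambiguous "ĐT" entry shows why a language model is involved at all: "đội tuyển" (national team) and "điện thoại" (telephone) are both valid expansions, and only the surrounding words can disambiguate.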

Testing

Run all tests with:

pytest tests

License

MIT License
