
Multilingual name romanization lookup tables: Chinese, Japanese, Korean, Arabic, Vietnamese, Indian, Persian, Hebrew, Thai, Greek, Turkish, Russian, Indonesian/Malay


name-variants

"Chan" is simultaneously 陈 and 찬 and จัน — lookup() returns all of them.

1,558 name entries across 18 language tables. Every romanization system produces a member of an equivalence class: no canonical form, no ordering dependency, no silent data loss. share_cluster("Hsu", "Xu") is True. lookup("Chan") returns a Chinese surname cluster and a Korean given-name cluster, sorted by bearer count.

pip install name-variants

The core idea

A NameCluster is a frozenset of co-equal representations. 陈, 陳, chen, chan, tan, ong are all members of the same Chinese surname cluster — none is more "real" than another. lookup() returns every cluster that contains your query, sorted by frequency:

from name_variants import lookup, share_cluster

clusters = lookup("Chan")
# [NameCluster(language='chinese', 8 forms),
#  NameCluster(language='korean_given', 3 forms)]

# Both Chinese scripts are in the same cluster — co-equal
assert "陈" in clusters[0]   # Simplified
assert "陳" in clusters[0]   # Traditional

# Membership is case-insensitive
assert "CHAN" in clusters[0]

# Ambiguity is surfaced, not suppressed
assert len(clusters) == 2    # Chinese AND Korean, not one-or-the-other

API

lookup() — all matching clusters

from name_variants import lookup

lookup("Chan")
# [NameCluster(language='chinese', 8 forms),
#  NameCluster(language='korean_given', 3 forms)]

lookup("Nguyen")
# [NameCluster(language='vietnamese', 4 forms)]

lookup("Smith")
# []

Results are sorted by frequency descending — most statistically likely interpretation first.
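The frequency-descending ordering can be sketched with simplified stand-ins. The namedtuple and by_frequency helper below are illustrative, not the library's internals; the sketch also shows one sensible rule for clusters with no known count (they sort last):

```python
from collections import namedtuple

# Simplified stand-in for NameCluster (illustrative only).
Cluster = namedtuple("Cluster", ["language", "frequency"])

def by_frequency(clusters):
    """Most frequent first; clusters with no known count sort last."""
    return sorted(clusters, key=lambda c: (c.frequency is None, -(c.frequency or 0)))

found = [
    Cluster("korean_given", 5_000_000),
    Cluster("chinese", 90_000_000),
    Cluster("thai", None),
]
ordered = by_frequency(found)
# ordered[0].language == "chinese"
```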

share_cluster() — equivalence check

from name_variants import share_cluster

share_cluster("Chan", "Chen")        # True  — same Chinese cluster
share_cluster("Chou", "Zhou")        # True  — Wade-Giles = Pinyin
share_cluster("Chiang", "Jiang")     # True  — Chiang Kai-shek / 蒋介石
share_cluster("Hsu", "Xu")           # True  — Taiwan diaspora romanization
share_cluster("Tsao", "Cao")         # True  — Ts'ao Ts'ao / 曹操
share_cluster("Chan", "Kim")         # False — different names
share_cluster("", "Chan")            # False — empty input
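An equivalence check like this can be implemented as a form-to-cluster-id index. The toy table and share_cluster_toy function below are a sketch under that assumption, not the library's actual data or code:

```python
# Toy form -> cluster-id index (illustrative; the real tables are much
# larger and store frozensets of co-equal forms).
TOY_INDEX = {
    "chan": "zh_chen", "chen": "zh_chen", "tan": "zh_chen",
    "chou": "zh_zhou", "zhou": "zh_zhou",
    "kim": "ko_kim",
}

def share_cluster_toy(a: str, b: str) -> bool:
    """True iff both forms resolve to the same known cluster."""
    ca = TOY_INDEX.get(a.strip().lower())
    cb = TOY_INDEX.get(b.strip().lower())
    return ca is not None and ca == cb

share_cluster_toy("Chan", "Chen")   # True
share_cluster_toy("", "Chan")       # False: unknown or empty never matches
```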

lookup_dialect() — Chinese romanization system tag

from name_variants import lookup_dialect

lookup_dialect("chen")   # "mandarin_pinyin"
lookup_dialect("chan")   # "cantonese"
lookup_dialect("tan")    # "hokkien"
lookup_dialect("chou")   # "wade_giles"
lookup_dialect("hsu")    # "wade_giles"
lookup_dialect("陳")     # "traditional"
lookup_dialect("Smith")  # None

normalize() — text preprocessing

from name_variants import normalize

normalize("  NGUYỄN  ")                    # "nguyễn"
normalize("Nguyễn", strip_diacritics=True) # "nguyen"
normalize("chan​")                          # strips zero-width spaces
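A plausible sketch of what strip_diacritics involves (the library's actual implementation may differ) is NFKD decomposition followed by dropping combining marks, alongside zero-width-character removal:

```python
import unicodedata

ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}

def normalize_sketch(text: str, strip_diacritics: bool = False) -> str:
    # Drop zero-width characters, trim whitespace, lowercase.
    text = "".join(ch for ch in text if ch not in ZERO_WIDTH).strip().lower()
    if strip_diacritics:
        # NFKD decomposition separates base letters from combining marks,
        # which can then be filtered out: "nguyễn" -> "nguyen".
        decomposed = unicodedata.normalize("NFKD", text)
        text = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return text
```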

CLI

nv lookup Chan
# [chinese] (~90M bearers)
#   陈  陳  chan  chen  tan  ...
# [korean_given]
#   찬  chan  chahn

nv match Chan Chen          # true
nv match Chan Kim           # false
nv match --exit-code Chan Chen && echo same   # shell-scripting friendly

nv canonicalize-csv names.csv --col name --out out.csv
# adds {name}_canonical column

nv dedupe names.csv --col name --out out.csv
# adds cluster_id column grouping romanization variants

Pandas accessor

import pandas as pd
import name_variants  # registers .nv accessor

s = pd.Series(["Chan", "Chen", "Smith", "Park"])

s.nv.lookup()
# 0    [NameCluster(chinese, ...), NameCluster(korean_given, ...)]
# 1    [NameCluster(chinese, ...)]
# 2    []
# 3    [NameCluster(korean, ...)]

s.nv.cluster_id()
# 0    a3f2b1c4d5e6   ← same as row 1 (Chan and Chen share chinese cluster)
# 1    a3f2b1c4d5e6
# 2                   ← empty string for unknown
# 3    9b8c7d6e5f4a

a = pd.Series(["Chan", "Park"])
b = pd.Series(["Chen", "Bak"])
a.nv.share_cluster_with(b)   # [True, True]

Install the extra: pip install "name-variants[pandas]"
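Accessors like .nv are registered through pandas' extension API. The nv_demo name and two-entry lookup table below are made up for illustration; they show the mechanism, not the package's code:

```python
import pandas as pd

@pd.api.extensions.register_series_accessor("nv_demo")
class NvDemoAccessor:
    # Hypothetical two-cluster table, just for this demo.
    _TOY = {"chan": "a3f2b1c4d5e6", "chen": "a3f2b1c4d5e6",
            "park": "9b8c7d6e5f4a"}

    def __init__(self, series: pd.Series):
        self._s = series

    def cluster_id(self) -> pd.Series:
        """Shared id for forms in the same cluster; empty string if unknown."""
        return self._s.str.lower().map(self._TOY).fillna("")

s = pd.Series(["Chan", "Chen", "Smith"])
s.nv_demo.cluster_id()   # "Chan" and "Chen" get the same id; "Smith" is ""
```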


Language tables

Language Entries Coverage
chinese 140 Pinyin + Wade-Giles + Cantonese + Hokkien + Hakka + Teochew + Traditional
japanese 143 Hepburn + macron variants
korean 100 Revised Romanization + McCune-Reischauer
arabic 92 Multiple transliteration systems
vietnamese 84 Diacritics + stripped forms
russian 79 Multiple transliteration systems
indonesian_malay 77
persian 80
indian_hindi 80
hebrew 75
turkish 74 Dotted-İ variants
greek 60
thai 68
indian_bengali 56
indian_tamil 53
chinese_given 120 Common given-name characters with Pinyin
korean_given 70 Common given-name syllables
japanese_given 107 Common given-name kanji

from name_variants import ALL_TABLES
list(ALL_TABLES.keys())   # all 18 table names

Chinese romanization systems

System Examples
Mandarin Pinyin Zhou, Zhang, Wang, Xu
Wade-Giles Chou, Chang, Wang, Hsu, Tsao, Kuo, Hsieh
Cantonese (Jyutping/Yale) Chan, Wong, Ng, Lam, Tsui
Hokkien/Min Nan Tan, Ng, Lim, Goh
Hakka Fong, Thong
Teochew Teo, Ng
Postal romanization Peking, Nanking, Chungking
Traditional characters 陳, 劉, 張, 楊, 趙

NameCluster reference

@dataclass(frozen=True)
class NameCluster:
    forms: frozenset[str]    # all representations — co-equal
    language: str            # "chinese", "korean", "vietnamese", etc.
    frequency: int | None    # approximate global bearer count

    def __contains__(self, text: str) -> bool  # case-insensitive
    def __iter__(self)                          # iterate all forms
    def __len__(self)

Optional extras

pip install "name-variants[pandas]"   # pandas Series .nv accessor

Why equivalence classes instead of a canonical key?

Early versions returned one "canonical" form per romanization string. This forced a false choice: "Chan" had to map to either 陈 or 찬, not both. Table ordering became load-bearing — whichever table was imported last won. Romanizations had to be stripped from given-name tables to prevent collisions.

The NameCluster model eliminates this: every romanization system's output is just another member of a frozenset. lookup() returns all matching clusters. Ambiguity is surfaced, not suppressed. The most likely interpretation comes first by frequency.


Contributing

git clone https://github.com/SecurityRonin/name-variants
cd name-variants
pip install -e ".[dev]"
pytest

Data files are in name_variants/*_names.py and name_variants/*_surnames.py. Each entry is a plain Python dict — easy to read and edit:

"陈": {
    "forms": ["陳", "chen", "chan", "tan", ...],
    "frequency": 90_000_000,
    "dialects": {
        "chen": "mandarin_pinyin",
        "chan": "cantonese",
        "tan":  "hokkien",
        "陳":   "traditional",
    },
},

Adding a new variant is one edit to one entry — forms, frequency, and dialect tag colocated.
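A quick consistency check a contributor might run over an entry: every dialect-tagged form should also appear in forms, or be the headword key itself. This validator is a sketch, not part of the package:

```python
# Sample entry in the documented data-file shape.
entry_key = "陈"
entry = {
    "forms": ["陳", "chen", "chan", "tan"],
    "frequency": 90_000_000,
    "dialects": {
        "chen": "mandarin_pinyin",
        "chan": "cantonese",
        "tan": "hokkien",
        "陳": "traditional",
    },
}

# Every dialect key must be a known form (or the headword key).
known = set(entry["forms"]) | {entry_key}
orphans = set(entry["dialects"]) - known
assert not orphans, f"dialect tags without a matching form: {orphans}"
```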


