Xiwen 析文

A tool to scan HTML for Chinese characters.

Overview

Use Xiwen to scan websites for Chinese characters — hanzi — and:

  • analyse the content by HSK grade
  • identify character variants
  • export character sets for further use

The analysis gives a breakdown by HSK grade (see below), and character lists can be exported for any combination of those grades, or for less common hanzi beyond the HSK.

Data exports provide hanzi by HSK grade in traditional and simplified Chinese, their pinyin, count within the text, and character frequency.
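The shape of such an export can be sketched with the standard library's csv module. The column names and example values below are illustrative assumptions, not Xiwen's actual export schema:

```python
import csv
import io

# Hypothetical export columns; Xiwen's real CSV schema may differ.
FIELDS = ["simplified", "traditional", "pinyin", "hsk_grade", "count", "frequency"]

rows = [
    # Example values are made up for illustration.
    {"simplified": "好", "traditional": "好", "pinyin": "hǎo",
     "hsk_grade": 1, "count": 12, "frequency": 0.0012},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
output = buffer.getvalue()
```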

Who this is for

Mandarin learners can use Xiwen to determine the expected difficulty of an article or book relative to their current reading level, and create character lists for further study.

Instructors can use it to assess the suitability of reading materials for their students, and produce vocabulary lists.

HSK

HSK — Hanyu Shuiping Kaoshi 汉语水平考试 — is a series of examinations designed to test Chinese language proficiency in simplified Chinese.

In its latest form the HSK consists of nine levels, and covers 3,000 simplified hanzi and 11,092 vocabulary items. The advanced levels — seven to nine — share 1,200 hanzi that are tested together.

To approximate a traditional hanzi version of the HSK, Xiwen maps the HSK hanzi to traditional Chinese equivalents. In most cases this is a one-to-one conversion, but in several cases there are two or more traditional hanzi that reflect distinct meanings of the single simplified character.

For example:

  • "发": ["發", "髮"]
  • "了": ["了", "瞭"]
  • "面": ["面", "麵"]

Or even:

  • "只": ["只", "衹", "隻"]
  • "台": ["台", "檯", "臺", "颱"]

A list of these "polymaps" — not all of which relate to hanzi in the HSK — can be found in the Wikipedia article Ambiguous character mappings.
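In code, these one-to-many mappings can be represented as a dictionary from a simplified hanzi to its list of traditional equivalents. The sketch below uses the examples above; the variable name, structure, and fallback behaviour are illustrative, not Xiwen's actual internals:

```python
# Simplified hanzi mapped to their traditional equivalents.
# Entries are taken from the examples above; the name POLYMAPS and the
# one-element fallback are hypothetical, not Xiwen's implementation.
POLYMAPS = {
    "发": ["發", "髮"],
    "了": ["了", "瞭"],
    "面": ["面", "麵"],
    "只": ["只", "衹", "隻"],
    "台": ["台", "檯", "臺", "颱"],
}

def traditional_equivalents(simp: str) -> list[str]:
    """Return all traditional equivalents, defaulting to the character itself."""
    return POLYMAPS.get(simp, [simp])
```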

This approach isn't perfect: a distinct traditional hanzi may reflect an obscure sense that occurs far less often than the common equivalent of the simplified character.

The table below lists the number of simplified hanzi per grade, and the number of mappings to traditional equivalents.

HSK Grade   Simp. Hanzi   Running Total   Trad. Hanzi Equivalents   Running Total
1           300           300             313                       313
2           300           600             314                       627
3           300           900             312                       939
4           300           1200            316                       1255
5           300           1500            310                       1565
6           300           1800            310                       1875
7-9         1200          3000            1214                      3089
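The running totals are simple cumulative sums of the per-grade counts, and can be reproduced in a couple of lines:

```python
from itertools import accumulate

# Per-grade counts from the table above
simp_per_grade = [300, 300, 300, 300, 300, 300, 1200]
trad_per_grade = [313, 314, 312, 316, 310, 310, 1214]

simp_totals = list(accumulate(simp_per_grade))  # [300, 600, 900, 1200, 1500, 1800, 3000]
trad_totals = list(accumulate(trad_per_grade))  # [313, 627, 939, 1255, 1565, 1875, 3089]
```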

Installation

GitHub repo

Clone xiwen from GitHub for the full code, files used to generate the character lists and a test suite.

$ git clone git@github.com:essteer/xiwen

Change into the xiwen directory, then create and activate a virtual environment and install the dev dependencies. The examples below use Astral's uv; substitute pip or another package manager as needed.

Linux/macOS:

$ uv venv
$ source .venv/bin/activate
$ uv pip install -r requirements.txt

Windows:

$ uv venv
$ .venv\Scripts\activate
$ uv pip install -r requirements.txt

Operation


To run xiwen as a CLI tool, navigate to the project root directory, activate the virtual environment, and run main.

Linux/macOS:

$ source .venv/bin/activate
$ python3 -m main

Windows:

$ .venv\Scripts\activate
$ python -m main

The src/resources/ directory contains main.py, the script used to generate the dataset under src/xiwen/assets/ that the program needs at runtime: it pairs simplified and traditional character sets with their pinyin, HSK grades, and character frequencies as identified in the MTSU dataset. The source data is kept under src/resources/assets/.

The functional program is contained in src/xiwen/. interface.py is the interactive component of the CLI tool: it receives user input and calls functions in the modules under utils/. Those modules form the program's ETL pipeline, which includes functions to:

  • break down text into individual hanzi (extract.py)
  • sort hanzi as HSK-level simplified or traditional hanzi, or outliers (transform.py)
  • determine the overall character variant of the text as simplified or traditional, or a mix (analyse.py)
  • compute the grade-based and cumulative numbers of unique hanzi and total hanzi in the text (analyse.py)

Character sets can then be exported to CSV.
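As a rough illustration of the extract step, hanzi can be pulled out of text by matching the CJK Unified Ideographs block and tallying the matches. This is a hedged sketch, not extract.py's actual implementation; in particular, the character range covered is an assumption:

```python
import re
from collections import Counter

# Matches characters in the CJK Unified Ideographs block (U+4E00..U+9FFF).
# Xiwen's extract.py may cover additional blocks; this range is an assumption.
HANZI = re.compile(r"[\u4e00-\u9fff]")

def extract_hanzi(text: str) -> Counter:
    """Return a count of each hanzi appearing in the text, ignoring
    punctuation, Latin characters, and whitespace."""
    return Counter(HANZI.findall(text))

counts = extract_hanzi("你好，世界！你好！")
# counts["你"] == 2, counts["世"] == 1; the punctuation is not counted
```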

Sources

This repo makes use of public-domain datasets of HSK vocabulary and character frequency lists, as indicated below; credit goes to those involved in their creation and distribution.

  • Hanyu Shuiping Kaoshi (HSK) 3.0 character list: "hsk30-chars.csv", hsk30, ivankra, GitHub

  • Character frequency list: "CharFreq-Modern.csv", Da, Jun. 2004, Chinese text computing, Middle Tennessee State University

  • Multiple character mappings: "Ambiguous character mappings", Wikipedia

  • Simplified character set demo: "Folding Beijing" 《北京折叠》, Hao Jingfang 郝景芳, 2012

  • Traditional character set demo: "Tao Te Ching" 《道德經》, Lao Tzu 老子, c. 400 BC

