
WorldAlphabets

A tool to access alphabets of the world with Python and Node interfaces.

Usage

Python

To load the data in Python:

from worldalphabets import get_available_codes, load_alphabet

codes = get_available_codes()
print("Loaded", len(codes), "alphabets")

alphabet = load_alphabet("en")
print(alphabet.uppercase[:5])  # ['A', 'B', 'C', 'D', 'E']
print(alphabet.frequency['e'])
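The frequency mapping can be used directly, for example to rank letters from most to least common. A minimal sketch, using an illustrative mapping in the same shape as `load_alphabet(...).frequency` (the numbers below are made up, not real English figures):

```python
# Illustrative frequency mapping, shaped like load_alphabet("en").frequency
frequency = {"a": 0.084, "b": 0.021, "e": 0.127, "z": 0.001}

# Sort letters by relative frequency, highest first
ranked = sorted(frequency, key=frequency.get, reverse=True)
print(ranked)  # ['e', 'a', 'b', 'z']
```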

Node.js

From npm

Install the package from npm:

npm install worldalphabets

Then, you can use the functions in your project:

const {
  getUppercase,
  getLowercase,
  getFrequency,
  getAvailableCodes,
} = require('worldalphabets');

async function main() {
  const codes = await getAvailableCodes();
  console.log('Available codes (first 5):', codes.slice(0, 5));

  const uppercaseEn = await getUppercase('en');
  console.log('English uppercase:', uppercaseEn);

  const lowercaseFr = await getLowercase('fr');
  console.log('French lowercase:', lowercaseFr);

  const frequencyDe = await getFrequency('de');
  console.log('German frequency for "a":', frequencyDe['a']);
}

main();

TypeScript projects receive typings automatically via index.d.ts.

Local Usage

If you have cloned the repository, you can use the module directly:

const { getUppercase } = require('./index');

async function main() {
    const uppercaseEn = await getUppercase('en');
    console.log('English uppercase:', uppercaseEn);
}

main();

Alphabet Index

This library also provides an index of all available alphabets with additional metadata.

Python

from worldalphabets import get_index_data, get_language

# Get the entire index
index = get_index_data()
print(f"Index contains {len(index)} languages.")

# Get information for a specific language
lang_info = get_language("he")
print(f"Language: {lang_info['language-name']}")
print(f"Script Type: {lang_info['script-type']}")
print(f"Direction: {lang_info['direction']}")
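The index can also be filtered on its metadata fields, for example to list right-to-left languages. A sketch using illustrative entries with the keys shown above (assuming, as the Node example's `index.length` suggests, that the index is a list of such records):

```python
# Illustrative index entries, shaped like the records returned by get_index_data()
index = [
    {"language-name": "Hebrew", "script-type": "abjad", "direction": "rtl"},
    {"language-name": "English", "script-type": "alphabet", "direction": "ltr"},
]

# Collect the names of all right-to-left languages
rtl = [entry["language-name"] for entry in index if entry["direction"] == "rtl"]
print(rtl)  # ['Hebrew']
```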

Node.js

const { getIndexData, getLanguage } = require('worldalphabets');

async function main() {
  // Get the entire index
  const index = await getIndexData();
  console.log(`Index contains ${index.length} languages.`);

  // Get information for a specific language
  const langInfo = await getLanguage('he');
  console.log(`Language: ${langInfo['language-name']}`);
  console.log(`Script Type: ${langInfo['script-type']}`);
  console.log(`Direction: ${langInfo['direction']}`);
}

main();

Supported Languages

For a detailed list of supported languages and their metadata, see the Alphabet Table.

Developer Guide

This project uses the kalenchukov/Alphabet Java repository as the source for alphabet data. A helper script clones the repository, scans all *Alphabet.java files, downloads a sample Wikipedia article for supported languages, and writes JSON files containing the alphabet and estimated letter frequencies. A second utility can replace those estimates with corpus frequencies from the Simia unigrams dataset.

Each JSON file includes:

  • alphabetical – letters of the alphabet (uppercase when the script has case)
  • uppercase – uppercase letters
  • lowercase – lowercase letters
  • frequency – relative frequency of each lowercase letter (zero when no sample text is available)

Example JSON snippet:

{
  "alphabetical": ["A", "B", ...],
  "uppercase": ["A", "B", ...],
  "lowercase": ["a", "b", ...],
  "frequency": {"a": 0.084, "b": 0.0208, ...}
}
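The four fields are interdependent, so a consumer can sanity-check a loaded file. A sketch of the invariants one might verify, using a small inline record in place of a real JSON file (the 1.0 sum only holds when sample text was available; files without it carry all-zero frequencies):

```python
# Inline stand-in for a parsed data/alphabets/<code>.json file
alphabet = {
    "alphabetical": ["A", "B"],
    "uppercase": ["A", "B"],
    "lowercase": ["a", "b"],
    "frequency": {"a": 0.8, "b": 0.2},
}

# Case pairs line up, and every lowercase letter has a frequency entry
assert [c.lower() for c in alphabet["uppercase"]] == alphabet["lowercase"]
assert set(alphabet["frequency"]) == set(alphabet["lowercase"])

# Frequencies are relative, so a populated mapping should sum to ~1
assert abs(sum(alphabet["frequency"].values()) - 1.0) < 1e-6
```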

Setup

This project uses uv for dependency management. To set up the development environment:

# Install uv
pipx install uv

# Create and activate a virtual environment
uv venv
source .venv/bin/activate

# Install dependencies
uv pip install -e '.[dev]'

Data Generation

Extract alphabets

uv run scripts/extract_alphabets.py

The script clones the Java project and stores JSON files for every available alphabet under data/alphabets/, named by ISO language code. If no sample text is available, frequency values default to zero and the language is recorded in data/todo_languages.csv for follow-up.
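The zero-frequency fallback amounts to mapping every lowercase letter to 0.0 before the code is appended to `data/todo_languages.csv`. A hypothetical sketch of that step (the helper name and dict shape are illustrative, not the script's actual internals):

```python
def zero_frequencies(alphabet: dict) -> dict:
    """Fallback when no sample text is available: every letter maps to 0.0.

    The caller would then log the language code to data/todo_languages.csv
    for follow-up.
    """
    alphabet = dict(alphabet)  # avoid mutating the caller's record
    alphabet["frequency"] = {letter: 0.0 for letter in alphabet["lowercase"]}
    return alphabet

print(zero_frequencies({"lowercase": ["a", "b"]})["frequency"])  # {'a': 0.0, 'b': 0.0}
```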

Update letter frequencies

uv run scripts/update_frequencies.py

This script downloads the unigrams.zip archive and rewrites each alphabet's frequency mapping using the published counts.
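Turning raw unigram counts into the relative frequencies stored in each file is a simple normalization. A sketch of that arithmetic (the real script reads its counts from the downloaded archive; this helper is illustrative):

```python
def to_relative(counts: dict[str, int]) -> dict[str, float]:
    """Normalize raw letter counts so the values sum to 1."""
    total = sum(counts.values())
    return {letter: count / total for letter, count in counts.items()}

print(to_relative({"a": 3, "b": 1}))  # {'a': 0.75, 'b': 0.25}
```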

Generate alphabets from locale data

Derive an alphabet from an ICU locale's exemplar character set:

uv run scripts/generate_alphabet_from_locale.py <code> --locale <locale>

The script writes data/alphabets/<code>.json, using the locale's standard exemplar set for the base letters and populating frequency values from the Simia unigrams dataset when available. Locales without exemplar data are skipped.

Generate alphabets from unigrams

For languages present in the Simia dataset but missing here:

uv run scripts/generate_alphabet_from_unigrams.py <code> --locale <locale> \
  --block <Unicode block>

The script writes data/alphabets/<code>.json. To list missing codes:

uv run scripts/missing_unigram_languages.py
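Conceptually this is a set difference between the codes in the Simia dataset and the codes that already have a JSON file under `data/alphabets/` (e.g. the stems of `*.json` filenames). A hypothetical sketch of that comparison, with both sides passed in as plain sets:

```python
def missing_codes(unigram_codes: set[str], existing_codes: set[str]) -> list[str]:
    """Codes present in the unigrams dataset but without an alphabet file yet."""
    return sorted(unigram_codes - existing_codes)

print(missing_codes({"en", "xx"}, {"en"}))  # ['xx']
```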

Generate missing alphabets

Create alphabet files for every language in the Simia unigrams dataset that does not yet have one:

uv run scripts/generate_missing_alphabets.py --limit 10

Omit --limit to process all missing languages. Each file is written under data/alphabets/ and combines ICU exemplar characters with Simia frequencies.

Linting and type checking

ruff check .
mypy .

Future work

  • Add sample text or unigram support for more languages.
