Project description

Wikilangs

PyPI version · License: MIT · Python 3.11+ · DOI

A Python package for consuming Wikipedia language models including tokenizers, n-gram models, Markov chains, vocabularies, and datasets.

Features

  • BPE Tokenizers: Pre-trained tokenizers for 300+ languages and utilities for LLM integration
  • N-gram Models: Simple language models for text scoring and next token prediction
  • Markov Chains: Text generation models with configurable depth
  • Vocabularies: Comprehensive word dictionaries with frequency information
  • Embeddings: Position-aware cross-lingual word embeddings via BabelVec designed for resource-constrained environments
  • Datasets: Wikipedia text data in various splits (1k, 5k, 10k, train) available via wikisets.
  • Multi-language Support: All models available for 300+ Wikipedia languages
  • Comprehensive Evaluation: Each language includes complete model evaluations and metrics available at https://huggingface.co/wikilangs/{lang} (e.g., https://huggingface.co/wikilangs/ary, https://huggingface.co/wikilangs/en, https://huggingface.co/wikilangs/fr)
  • Easy API: Simple, intuitive interface for loading and using models

Installation

pip install wikilangs

Quick Start

from wikilangs import tokenizer, ngram, markov, vocabulary, embeddings, languages

# Create a tokenizer (date defaults to 'latest')
tok = tokenizer(lang='en', vocab_size=16000)

# Tokenize text
tokens = tok.tokenize("Hello, world!")
token_ids = tok.encode("Hello, world!")
print(tokens)  # ['_he', 'l', 'lo', ',', '_world', '!']
print(token_ids)  # [1234, 5, 5678, 9, 10, 11]

# Create an n-gram model
ng = ngram(lang='en', gram_size=3)

# Score text
score = ng.score("This is a sample sentence.")
print(score)  # -12.345

# Predict next token
predictions = ng.predict_next("This is a", top_k=5)
print(predictions)  # [('sample', 0.85), ('test', 0.05), ...]

# Create a Markov chain
mc = markov(lang='en', depth=2)

# Generate text
text = mc.generate(length=50)
print(text)  # "Generated text using the Markov chain model..."

# Create a vocabulary
vocab = vocabulary(lang='en')

# Look up a word
word_info = vocab.lookup("example")
print(word_info)  # {'token': 'example', 'frequency': 12345, 'idf_score': 7.91, 'rank': 25436}

# Create embeddings
emb = embeddings(lang='ary', dimension=32)

# Get word vector
vec = emb.embed_word("مرحبا")
print(vec.shape)  # (32,)

# Get sentence vector (supports average, rope, decay, sinusoidal)
sent_vec = emb.embed_sentence("مرحبا بالعالم", method='rope')
print(sent_vec.shape)  # (32,)

# List available languages
available_langs = languages()
print(f"Available languages: {available_langs[:5]}...")

API Reference

tokenizer(lang, date='latest', vocab_size=16000, format='sentencepiece')

Create a BPE tokenizer instance.

Parameters:

  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • date (str): Date of the model (format: YYYYMMDD, default: 'latest')
  • vocab_size (int): Vocabulary size (8000, 16000, 32000, 64000)
  • format (str): Output format ('sentencepiece' or 'huggingface')

Returns:

  • BPETokenizer: Initialized tokenizer instance
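
For example, the same factory can load another language at a different vocabulary size and in Hugging Face format. This is a minimal sketch: only the tokenize() and encode() calls from the Quick Start are assumed, and the parameter values are illustrative.

from wikilangs import tokenizer

# Load a French tokenizer with a 32k vocabulary in Hugging Face format
tok_fr = tokenizer(lang='fr', vocab_size=32000, format='huggingface')

tokens = tok_fr.tokenize("Bonjour le monde !")
ids = tok_fr.encode("Bonjour le monde !")
print(tokens)
print(ids)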

ngram(lang, date='latest', gram_size=3, variant='word')

Create an n-gram model instance.

Parameters:

  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • date (str): Date of the model (format: YYYYMMDD, default: 'latest')
  • gram_size (int): Size of n-grams (2, 3, 4, 5)
  • variant (str): Type of n-grams ('word' or 'subword')

Returns:

  • NGramModel: Initialized n-gram model instance
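
As a sketch, the subword variant is created the same way and used through the documented score() and predict_next() calls; the gram_size and the example text are illustrative.

from wikilangs import ngram

# A 4-gram subword model instead of the default word-level trigrams
ng_sub = ngram(lang='en', gram_size=4, variant='subword')

print(ng_sub.score("This is a sample sentence."))  # log-probability-style score
print(ng_sub.predict_next("This is a", top_k=3))   # [(token, probability), ...]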

markov(lang, date='latest', depth=2, variant='word')

Create a Markov chain model instance.

Parameters:

  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • date (str): Date of the model (format: YYYYMMDD, default: 'latest')
  • depth (int): Depth of the Markov chain (1, 2, 3, 4, 5)
  • variant (str): Type of transitions ('word' or 'subword')

Returns:

  • MarkovChain: Initialized Markov chain model instance
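
A short sketch of a deeper chain; only the generate(length=...) call shown in the Quick Start is assumed.

from wikilangs import markov

# Depth-3 word-level chain; deeper chains follow the source text more closely
mc3 = markov(lang='fr', depth=3, variant='word')
print(mc3.generate(length=30))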

vocabulary(lang, date='latest')

Create a vocabulary instance.

Parameters:

  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • date (str): Date of the model (format: YYYYMMDD, default: 'latest')

Returns:

  • WikilangsVocabulary: Initialized vocabulary instance
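
A minimal sketch of a frequency check built on lookup(). It assumes lookup() returns None (or another falsy value) for out-of-vocabulary words, which is not documented above.

from wikilangs import vocabulary

vocab = vocabulary(lang='en')
info = vocab.lookup("example")
if info:
    # Fields as shown in the Quick Start: token, frequency, idf_score, rank
    print(info['frequency'], info['rank'], info['idf_score'])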

embeddings(lang, date='latest', dimension=32)

Create an embeddings instance using BabelVec.

Parameters:

  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • date (str): Date of the model (format: YYYYMMDD, default: 'latest')
  • dimension (int): Embedding dimension (default: 32)

Returns:

  • Embeddings: Initialized embeddings instance (if babelvec is installed)
  • tuple: (file_path, metadata) if babelvec is not installed
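
Because the return type depends on whether babelvec is installed, callers may want to handle both cases. A sketch:

from wikilangs import embeddings

result = embeddings(lang='ary', dimension=32)
if isinstance(result, tuple):
    # babelvec is not installed: only the downloaded file path and metadata are returned
    file_path, metadata = result
    print(f"Embeddings file at {file_path}")
else:
    # babelvec is installed: result supports embed_word() and embed_sentence()
    print(result.embed_word("مرحبا").shape)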

languages(date='latest')

List available language codes for a given date.

Parameters:

  • date (str): Date of the dataset (format: YYYYMMDD, default: 'latest')

Returns:

  • list[str]: List of available language codes

languages_with_metadata(date='latest')

Get available language codes with ISO 639 metadata enrichment.

Parameters:

  • date (str): Date of the dataset (format: YYYYMMDD, default: 'latest')

Returns:

  • list[LanguageInfo]: List of LanguageInfo objects with ISO 639 metadata (name, alpha_2, alpha_3, etc.)
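
A short sketch that prints the documented metadata fields. It assumes languages_with_metadata is importable from the package root like languages(), and that LanguageInfo exposes the listed fields as attributes.

from wikilangs import languages_with_metadata

for info in languages_with_metadata()[:5]:
    print(info.name, info.alpha_2, info.alpha_3)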

Available Languages

Models are available for 300+ Wikipedia languages including:

  • English (en)
  • French (fr)
  • Spanish (es)
  • German (de)
  • Arabic (ar)
  • Chinese (zh)
  • Japanese (ja)
  • Korean (ko)
  • And many more...

Available Dates

Models are updated regularly. Check the Hugging Face organization for the latest available dates.
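
To pin a specific snapshot instead of 'latest', pass the same date to any factory function. The date below is purely illustrative; use one that actually exists on the Hugging Face organization.

from wikilangs import languages, tokenizer

snapshot = '20250101'  # hypothetical YYYYMMDD date
langs = languages(date=snapshot)
tok = tokenizer(lang='en', date=snapshot, vocab_size=16000)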

Embeddings

For advanced embedding operations, install BabelVec:

pip install babelvec

Using the wikilangs API:

from wikilangs import embeddings

# Load embeddings (defaults to 32 dimensions)
emb = embeddings(lang='ary')

# Get word vector
vec = emb.embed_word("مرحبا")

# Position-aware sentence embedding (supports 'average', 'rope', 'decay', 'sinusoidal')
sent_vec = emb.embed_sentence("مرحبا بالعالم", method='rope')

Or using BabelVec directly:

from huggingface_hub import hf_hub_download
from babelvec import BabelVec

# Load embeddings
embedding_file = hf_hub_download(
    repo_id='wikilangs/ary',
    filename='models/embeddings/monolingual/ary_32d.bin',
    repo_type='model'
)
model = BabelVec.load(embedding_file)

Examples

Check out the demo scripts:

  • demo_models.py - Basic model operations
  • demo_embeddings.py - Embedding operations with BabelVec
  • demo_comprehensive.py - All models working together

Development

Install dependencies

pip install -r requirements.txt

Run tests

pytest tests/

Acknowledgments

We are deeply grateful to our generous sponsor Featherless.ai for making this project possible.

Created and maintained by Omar Kamali from Omneity Labs.

Wikilangs is built on top of the incredible work by the Wikimedia Foundation and the open-source community. All content maintains the original CC-BY-SA-4.0 license.

License

MIT License - see LICENSE for details.

Citation

If you use this package in your research, please cite:

@misc{wikilangs2025,
  title = {Wikilangs: Open NLP Models for Wikipedia Languages},
  author = {Kamali, Omar},
  year = {2025},
  publisher = {Zenodo},
  doi = {10.5281/zenodo.18073153},
  url = {https://huggingface.co/wikilangs},
  institution = {Omneity Labs}
}

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

wikilangs-0.1.3.tar.gz (26.2 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

wikilangs-0.1.3-py3-none-any.whl (20.7 kB)

Uploaded Python 3

File details

Details for the file wikilangs-0.1.3.tar.gz.

File metadata

  • Download URL: wikilangs-0.1.3.tar.gz
  • Upload date:
  • Size: 26.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for wikilangs-0.1.3.tar.gz

  • SHA256: 1719898bd2240d5988f0689322c6a24e5c1151c7b08215b62a0fb07be612a1ea
  • MD5: cfbdfec81713566228288ed4211e758a
  • BLAKE2b-256: fe9df7881bb214062c3af0300de28e70e0ba6a911a6da6b362a6af2ed2e79d29

See more details on using hashes here.

Provenance

The following attestation bundles were made for wikilangs-0.1.3.tar.gz:

Publisher: publish.yml on wikilangs/wikilangs

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file wikilangs-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: wikilangs-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 20.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for wikilangs-0.1.3-py3-none-any.whl

  • SHA256: bd6d0a20557a2ebdf4a2a75f8dd2b1f8ff6f710ce31c579634cd4a2e57072441
  • MD5: 17f1a38c00dd32a8eb76784f6308a1f6
  • BLAKE2b-256: 93be29a7b6c6dafb0e875a83b6e8b18f6a2ba58ce584df6dc2ab59534e6d65c3

See more details on using hashes here.

Provenance

The following attestation bundles were made for wikilangs-0.1.3-py3-none-any.whl:

Publisher: publish.yml on wikilangs/wikilangs

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
