

Wikilangs

Requires Python 3.11+. MIT License.

A Python package for consuming Wikipedia language models including tokenizers, n-gram models, Markov chains, vocabularies, and datasets.

Features

  • BPE Tokenizers: Pre-trained tokenizers for 300+ languages and utilities for LLM integration
  • N-gram Models: Simple language models for text scoring and next-token prediction
  • Markov Chains: Text generation models with configurable depth
  • Vocabularies: Comprehensive word dictionaries with frequency information
  • Embeddings: Position-aware cross-lingual word embeddings via BabelVec designed for resource-constrained environments
  • Datasets: Wikipedia text data in various splits (1k, 5k, 10k, train) available via wikisets.
  • Multi-language Support: All models available for 300+ Wikipedia languages
  • Comprehensive Evaluation: Each language includes complete model evaluations and metrics available at https://huggingface.co/wikilangs/{lang} (e.g., https://huggingface.co/wikilangs/ary, https://huggingface.co/wikilangs/en, https://huggingface.co/wikilangs/fr)
  • Easy API: Simple, intuitive interface for loading and using models

Installation

pip install wikilangs

Quick Start

from wikilangs import tokenizer, ngram, markov, vocabulary, embeddings, languages

# Create a tokenizer (date defaults to 'latest')
tok = tokenizer(lang='en', vocab_size=16000)

# Tokenize text
tokens = tok.tokenize("Hello, world!")
token_ids = tok.encode("Hello, world!")
print(tokens)  # ['_he', 'l', 'lo', ',', '_world', '!']
print(token_ids)  # [1234, 5, 5678, 9, 10, 11]

# Create an n-gram model
ng = ngram(lang='en', gram_size=3)

# Score text
score = ng.score("This is a sample sentence.")
print(score)  # -12.345

# Predict next token
predictions = ng.predict_next("This is a", top_k=5)
print(predictions)  # [('sample', 0.85), ('test', 0.05), ...]

# Create a Markov chain
mc = markov(lang='en', depth=2)

# Generate text
text = mc.generate(length=50)
print(text)  # "Generated text using the Markov chain model..."

# Create a vocabulary
vocab = vocabulary(lang='en')

# Look up a word
word_info = vocab.lookup("example")
print(word_info)  # {'token': 'example', 'frequency': 12345, 'idf_score': 7.91, 'rank': 25436}

# Create embeddings
emb = embeddings(lang='ary', dimension=32)

# Get word vector
vec = emb.embed_word("مرحبا")
print(vec.shape)  # (32,)

# Get sentence vector (supports average, rope, decay, sinusoidal)
sent_vec = emb.embed_sentence("مرحبا بالعالم", method='rope')
print(sent_vec.shape)  # (32,)

# List available languages
available_langs = languages()
print(f"Available languages: {available_langs[:5]}...")

API Reference

tokenizer(lang, date='latest', vocab_size=16000, format='sentencepiece')

Create a BPE tokenizer instance.

Parameters:

  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • date (str): Date of the model (format: YYYYMMDD, default: 'latest')
  • vocab_size (int): Vocabulary size (8000, 16000, 32000, 64000)
  • format (str): Output format ('sentencepiece' or 'huggingface')

Returns:

  • BPETokenizer: Initialized tokenizer instance
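
For instance, a larger vocabulary and the Hugging Face output format can be requested explicitly. This is a minimal sketch assuming the returned BPETokenizer exposes the same tokenize() and encode() methods shown in the Quick Start regardless of the chosen format:

from wikilangs import tokenizer

# French tokenizer with a 32k vocabulary, exported in Hugging Face format
tok_fr = tokenizer(lang='fr', vocab_size=32000, format='huggingface')
print(tok_fr.tokenize("Bonjour le monde"))
print(tok_fr.encode("Bonjour le monde"))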

ngram(lang, date='latest', gram_size=3, variant='word')

Create an n-gram model instance.

Parameters:

  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • date (str): Date of the model (format: YYYYMMDD, default: 'latest')
  • gram_size (int): Size of n-grams (2, 3, 4, 5)
  • variant (str): Type of n-grams ('word' or 'subword')

Returns:

  • NGramModel: Initialized n-gram model instance
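
A subword variant works the same way as the word-level model in the Quick Start; the scores and probabilities printed below are illustrative, not fixed values:

from wikilangs import ngram

# Subword bigram model for English
ng2 = ngram(lang='en', gram_size=2, variant='subword')
print(ng2.score("This is a sample sentence."))  # score for the sentence under the bigram model
print(ng2.predict_next("This is a", top_k=3))   # [(token, probability), ...]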

markov(lang, date='latest', depth=2, variant='word')

Create a Markov chain model instance.

Parameters:

  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • date (str): Date of the model (format: YYYYMMDD, default: 'latest')
  • depth (int): Depth of the Markov chain (1, 2, 3, 4, 5)
  • variant (str): Type of transitions ('word' or 'subword')

Returns:

  • MarkovChain: Initialized Markov chain model instance
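
As a small sketch using only the documented parameters, a deeper chain can be created and sampled the same way as in the Quick Start (the generated text varies between runs):

from wikilangs import markov

# Depth-3 word-level Markov chain for French
mc3 = markov(lang='fr', depth=3, variant='word')
print(mc3.generate(length=30))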

vocabulary(lang, date='latest')

Create a vocabulary instance.

Parameters:

  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • date (str): Date of the model (format: YYYYMMDD, default: 'latest')

Returns:

  • WikilangsVocabulary: Initialized vocabulary instance
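
Lookups return the dictionary shown in the Quick Start, so individual fields such as frequency and rank can be read directly (a sketch; the word itself is arbitrary):

from wikilangs import vocabulary

vocab_en = vocabulary(lang='en')
info = vocab_en.lookup("example")
# info contains 'token', 'frequency', 'idf_score', and 'rank'
print(info['frequency'], info['rank'])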

embeddings(lang, date='latest', dimension=32)

Create an embeddings instance using BabelVec.

Parameters:

  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • date (str): Date of the model (format: YYYYMMDD, default: 'latest')
  • dimension (int): Embedding dimension (default: 32)

Returns:

  • Embeddings: Initialized embeddings instance (if babelvec is installed)
  • tuple: (file_path, metadata) if babelvec is not installed
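
Because the return type depends on whether babelvec is installed, a small guard makes the fallback explicit. This is a sketch based on the documented return values; the contents of the metadata object are not specified here:

from wikilangs import embeddings

result = embeddings(lang='ary', dimension=32)
if isinstance(result, tuple):
    # babelvec is not installed: only the downloaded file path and metadata are returned
    file_path, metadata = result
    print(f"Install babelvec, then load the model from {file_path}")
else:
    vec = result.embed_word("مرحبا")
    print(vec.shape)  # (32,)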

languages(date='latest')

List available language codes for a given date.

Parameters:

  • date (str): Date of the dataset (format: YYYYMMDD, default: 'latest')

Returns:

  • list[str]: List of available language codes
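
For example, the returned list can be used to check whether a particular Wikipedia language is published before loading any models:

from wikilangs import languages

available = languages()
print(f"{len(available)} languages available")
if 'ary' in available:
    print("Models for 'ary' are published")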

languages_with_metadata(date='latest')

Get available language codes with ISO 639 metadata enrichment.

Parameters:

  • date (str): Date of the dataset (format: YYYYMMDD, default: 'latest')

Returns:

  • list[LanguageInfo]: List of LanguageInfo objects with ISO 639 metadata (name, alpha_2, alpha_3, etc.)
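
A short sketch, assuming languages_with_metadata is importable from the package top level like languages() and that LanguageInfo exposes its ISO 639 fields as attributes:

from wikilangs import languages_with_metadata

for info in languages_with_metadata()[:5]:
    # name, alpha_2 and alpha_3 come from the ISO 639 metadata
    print(info.name, info.alpha_2, info.alpha_3)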

Available Languages

Models are available for 300+ Wikipedia languages including:

  • English (en)
  • French (fr)
  • Spanish (es)
  • German (de)
  • Arabic (ar)
  • Chinese (zh)
  • Japanese (ja)
  • Korean (ko)
  • And many more...

Available Dates

Models are updated regularly. Check the Hugging Face organization for the latest available dates.
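
Every factory function accepts the date parameter in YYYYMMDD form. The snapshot below is a hypothetical date used purely for illustration; substitute one that actually exists on the Hugging Face organization:

from wikilangs import languages, tokenizer

# '20250101' is a hypothetical snapshot date, not necessarily a published one
langs_snapshot = languages(date='20250101')
tok = tokenizer(lang='en', date='20250101', vocab_size=16000)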

Embeddings

For advanced embedding operations, install BabelVec:

pip install babelvec

Using the wikilangs API:

from wikilangs import embeddings

# Load embeddings (defaults to 32 dimensions)
emb = embeddings(lang='ary')

# Get word vector
vec = emb.embed_word("مرحبا")

# Position-aware sentence embedding (supports 'average', 'rope', 'decay', 'sinusoidal')
sent_vec = emb.embed_sentence("مرحبا بالعالم", method='rope')
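
Since embed_word() returns a fixed-size vector (a 32-dimensional array in the example above), word similarity can be computed on top of the API. This is a sketch assuming the vectors are NumPy arrays; it is not a method provided by the package itself:

import numpy as np
from wikilangs import embeddings

emb = embeddings(lang='ary')
v1 = emb.embed_word("مرحبا")
v2 = emb.embed_word("سلام")
# Cosine similarity between the two word vectors
similarity = float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
print(similarity)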

Or using BabelVec directly:

from huggingface_hub import hf_hub_download
from babelvec import BabelVec

# Load embeddings
embedding_file = hf_hub_download(
    repo_id='wikilangs/ary',
    filename='models/embeddings/monolingual/ary_32d.bin',
    repo_type='model'
)
model = BabelVec.load(embedding_file)

Examples

Check out the demo scripts:

  • demo_models.py - Basic model operations
  • demo_embeddings.py - Embedding operations with BabelVec
  • demo_comprehensive.py - All models working together

Development

Install dependencies

pip install -r requirements.txt

Run tests

pytest tests/

Acknowledgments

We are deeply grateful to our generous sponsor Featherless.ai for making this project possible.

Created and maintained by Omar Kamali from Omneity Labs.

Wikilangs is built on top of the incredible work by the Wikimedia Foundation and the open-source community. All content maintains the original CC-BY-SA-4.0 license.

License

MIT License - see LICENSE for details.

Citation

If you use this package in your research, please cite:

@misc{wikilangs2025,
  title = {Wikilangs: Open NLP Models for Wikipedia Languages},
  author = {Kamali, Omar},
  year = {2025},
  publisher = {Zenodo},
  doi = {10.5281/zenodo.18073153},
  url = {https://huggingface.co/wikilangs},
  institution = {Omneity Labs}
}


Download files

Download the file for your platform.

Source Distribution

wikilangs-0.1.2.tar.gz (26.0 kB)

Built Distribution

wikilangs-0.1.2-py3-none-any.whl (20.5 kB)

File details

Details for the file wikilangs-0.1.2.tar.gz.

File metadata

  • Download URL: wikilangs-0.1.2.tar.gz
  • Upload date:
  • Size: 26.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for wikilangs-0.1.2.tar.gz

  • SHA256: 90e7835347794ac68e31c51c4260b8b69c7583294fca14dfddc598c6212d9f73
  • MD5: 8d1eb38af59b6eec999f793303b8d05d
  • BLAKE2b-256: ff1d07cc6a8130709aaaa4aa79073f3f0623a11d71e809622f00ee0329012695


Provenance

The following attestation bundles were made for wikilangs-0.1.2.tar.gz:

Publisher: publish.yml on wikilangs/wikilangs


File details

Details for the file wikilangs-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: wikilangs-0.1.2-py3-none-any.whl
  • Upload date:
  • Size: 20.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for wikilangs-0.1.2-py3-none-any.whl

  • SHA256: 21940e6b77d460a312450aa71189aacb673a7cb61c0f0c8e1fefe53849fae0f3
  • MD5: 8a2a9d23f661bd5bc84cb604b1b8d33c
  • BLAKE2b-256: a507a524f12824e506f3c3ab7877f061de601e6c58052c5def7e41f00ef3411f


Provenance

The following attestation bundles were made for wikilangs-0.1.2-py3-none-any.whl:

Publisher: publish.yml on wikilangs/wikilangs

