
Project description

Wikilangs

License: MIT | Python 3.8+

A Python package for consuming Wikipedia language models including tokenizers, n-gram models, Markov chains, and vocabularies.

Features

  • BPE Tokenizers: Pre-trained tokenizers for 100+ languages
  • N-gram Models: Language models for text scoring and next-token prediction
  • Markov Chains: Text generation models with configurable depth
  • Vocabularies: Comprehensive word dictionaries with frequency information
  • Multi-language Support: Models available for 100+ Wikipedia languages
  • Easy API: Simple, intuitive interface for loading and using models

Installation

pip install wikilangs

Quick Start

from wikilangs import tokenizer, ngram, markov, vocabulary

# Create a tokenizer
tok = tokenizer(date='20251201', lang='en', vocab_size=16000)

# Tokenize text
tokens = tok.tokenize("Hello, world!")
token_ids = tok.encode("Hello, world!")
print(tokens)  # ['Hello', ',', '▁world', '!']
print(token_ids)  # [1234, 5, 5678, 9]

# Create an n-gram model
ng = ngram(date='20251201', lang='en', gram_size=3)

# Score text
score = ng.score("This is a sample sentence.")
print(score)  # -12.345

# Predict next token
predictions = ng.predict_next("This is a", top_k=5)
print(predictions)  # [('sample', 0.85), ('test', 0.05), ...]

# Create a Markov chain
mc = markov(date='20251201', lang='en', depth=2)

# Generate text
text = mc.generate(length=50)
print(text)  # "Generated text using the Markov chain model..."

# Create a vocabulary
vocab = vocabulary(date='20251201', lang='en')

# Look up a word
word_info = vocab.lookup("example")
print(word_info)  # {'frequency': 12345, 'definition': '...'}

# Get word frequency
freq = vocab.get_frequency("example")
print(freq)  # 12345

API Reference

tokenizer(date, lang, vocab_size=16000, local_dir=None)

Create a BPE tokenizer instance.

Parameters:

  • date (str): Date of the model (format: YYYYMMDD)
  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • vocab_size (int): Vocabulary size; one of 8000, 16000, 32000, or 64000
  • local_dir (str, optional): Local directory to look for models first

Returns:

  • BPETokenizer: Initialized tokenizer instance
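
For example, a minimal sketch that loads a tokenizer with the documented local_dir fallback (the './models' path and the French example text are illustrative):

from wikilangs import tokenizer

# Look for the model under ./models first; otherwise fall back to the
# published models (per the local_dir parameter above).
tok = tokenizer(date='20251201', lang='fr', vocab_size=32000, local_dir='./models')

tokens = tok.tokenize("Bonjour le monde !")
ids = tok.encode("Bonjour le monde !")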

ngram(date, lang, gram_size=3, local_dir=None)

Create an n-gram model instance.

Parameters:

  • date (str): Date of the model (format: YYYYMMDD)
  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • gram_size (int): Size of the n-grams; one of 2, 3, 4, or 5
  • local_dir (str, optional): Local directory to look for models first

Returns:

  • NGramModel: Initialized n-gram model instance
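
As a sketch of how score() and predict_next() from the Quick Start combine, the snippet below ranks candidate sentences by model score; treating higher (less negative) scores as more fluent is an assumption based on the log-style score shown earlier:

from wikilangs import ngram

ng = ngram(date='20251201', lang='en', gram_size=3)

# Rank candidates by score (assumed log-scale, so less negative is better).
candidates = ["This is a sample sentence.", "Sentence sample a is this."]
print(max(candidates, key=ng.score))

# Show the top-5 next-token predictions for a prefix.
for token, prob in ng.predict_next("This is a", top_k=5):
    print(token, prob)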

markov(date, lang, depth=2, local_dir=None)

Create a Markov chain model instance.

Parameters:

  • date (str): Date of the model (format: YYYYMMDD)
  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • depth (int): Depth of the Markov chain; one of 1, 2, 3, 4, or 5
  • local_dir (str, optional): Local directory to look for models first

Returns:

  • MarkovChain: Initialized Markov chain model instance
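
A minimal sketch comparing generations across depths (only generate(length=...) is documented; the observation about depth is the usual Markov-chain trade-off, not a claim about this implementation):

from wikilangs import markov

# Higher depth conditions on more preceding tokens, which typically makes
# output more coherent but less varied.
for depth in (1, 2, 3):
    mc = markov(date='20251201', lang='en', depth=depth)
    print(depth, mc.generate(length=30))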

vocabulary(date, lang, local_dir=None)

Create a vocabulary instance.

Parameters:

  • date (str): Date of the model (format: YYYYMMDD)
  • lang (str): Language code (e.g., 'en', 'fr', 'ary')
  • local_dir (str, optional): Local directory to look for models first

Returns:

  • WikilangsVocabulary: Initialized vocabulary instance
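
For instance, combining the documented lookup() and get_frequency() calls (the word list is illustrative):

from wikilangs import vocabulary

vocab = vocabulary(date='20251201', lang='en')

# Sort a handful of words by corpus frequency, most frequent first.
words = ["example", "wikipedia", "language"]
for word in sorted(words, key=vocab.get_frequency, reverse=True):
    print(word, vocab.get_frequency(word))

# Full lookup returns frequency plus other fields (see Quick Start).
print(vocab.lookup("example"))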

Available Languages

Models are available for 100+ Wikipedia languages including:

  • English (en)
  • French (fr)
  • Spanish (es)
  • German (de)
  • Arabic (ar)
  • Chinese (zh)
  • Japanese (ja)
  • Korean (ko)
  • And many more...
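
Because every constructor takes the same lang parameter, switching languages only changes one argument. A sketch tokenizing a greeting in several of the languages above (the greetings are illustrative):

from wikilangs import tokenizer

greetings = {'en': 'Hello!', 'fr': 'Bonjour !', 'de': 'Hallo!', 'es': '¡Hola!'}
for lang, text in greetings.items():
    tok = tokenizer(date='20251201', lang=lang, vocab_size=16000)
    print(lang, tok.tokenize(text))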

Available Dates

Models are updated regularly. Check the Hugging Face dataset for the latest available dates.

Examples

Check out the examples directory for Jupyter notebooks demonstrating various use cases.

Development

Install dependencies

pip install -r requirements.txt

Run tests

pytest tests/

Build package

python -m build

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Acknowledgments

  • Models trained on Wikipedia data
  • Uses the vocabulous library for vocabulary functionality
  • Hosted on Hugging Face Datasets

Download files

Download the file for your platform.

Source Distribution

wikilangs-0.1.0.tar.gz (3.6 kB)


Built Distribution


wikilangs-0.1.0-py3-none-any.whl (3.8 kB)


File details

Details for the file wikilangs-0.1.0.tar.gz.

File metadata

  • Download URL: wikilangs-0.1.0.tar.gz
  • Upload date:
  • Size: 3.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.6

File hashes

Hashes for wikilangs-0.1.0.tar.gz

  • SHA256: d8a4fb17861c4fdea194e3e21c68c651126a4f79e493ed53b068f9efdb327d6e
  • MD5: 8250e603dbd70f4cff1eac312aae1b30
  • BLAKE2b-256: cc1e6fb9d52c902fd1c76ee3ff1b97ad67035080797d89a41f8603681905c339


File details

Details for the file wikilangs-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: wikilangs-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 3.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.6

File hashes

Hashes for wikilangs-0.1.0-py3-none-any.whl

  • SHA256: b3483270b6308e392986678eb9d9b03503a7ad6e0d3fdc85b410d28b926d6318
  • MD5: 8169781d219dcde24e7c766c6d59f7a9
  • BLAKE2b-256: 04f630cc51c7b53b5b7368f727db7668c2e27a8e1ebc3289ffb7327b8854a608

