DOM tokenizers
DOM-aware tokenization for Hugging Face language models.
TL;DR
Input:
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<meta name="viewport" content="width=device-width">
<title>Hello world</title>
<script>
document.getElementById("demo").innerHTML = "Hello JavaScript!";
</script>
...
Output:
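The output is a flat stream of DOM-derived tokens whose exact form depends on the trained vocabulary. As a rough illustration of the idea only — splitting markup into tag, attribute, and text tokens — here is a stdlib-only sketch; this is a hypothetical approximation, not this package's implementation:

```python
from html.parser import HTMLParser

class DOMTokenSketch(HTMLParser):
    """Illustration only: emit tag names, attribute names, attribute
    values, and whitespace-split text as separate lowercase tokens."""

    def __init__(self):
        super().__init__()
        self.tokens = []

    def handle_starttag(self, tag, attrs):
        self.tokens.append(tag)
        for name, value in attrs:
            self.tokens.append(name)
            if value:
                self.tokens.extend(value.lower().split())

    def handle_data(self, data):
        # Plain text between tags, split on whitespace
        self.tokens.extend(data.lower().split())

parser = DOMTokenSketch()
parser.feed('<meta name="viewport" content="width=device-width">')
parser.feed("<title>Hello world</title>")
print(parser.tokens)
```

Running this on two lines of the input above yields tokens like `meta`, `name`, `viewport`, `title`, `hello`, `world` — markup structure and text content end up side by side in one sequence.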
Installation
With pip
pip install dom-tokenizers[train]
From sources
git clone https://github.com/gbenson/dom-tokenizers.git
cd dom-tokenizers
python3 -m venv .venv
. .venv/bin/activate
pip install --upgrade pip
pip install -e .[dev,train]
Train a tokenizer
On the command line
Check everything's working using a small dataset of around 300 examples:
train-tokenizer gbenson/interesting-dom-snapshots
Train a tokenizer with a 10,000-token vocabulary using a dataset of 4,536 examples and upload it to the Hub:
train-tokenizer gbenson/webui-dom-snapshots -n 10000 -N 4536
huggingface-cli login
huggingface-cli upload dom-tokenizer-10k
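The `-n` option caps the vocabulary at 10,000 tokens. As a hypothetical sketch of what a vocabulary-size cap means — real training uses a subword algorithm, not a plain frequency cutoff — the most frequent tokens can be kept like this:

```python
from collections import Counter

def build_vocab(token_streams, vocab_size):
    """Hypothetical sketch of a vocabulary-size limit: keep only the
    vocab_size most frequent tokens seen across all token streams."""
    counts = Counter(tok for stream in token_streams for tok in stream)
    return [tok for tok, _ in counts.most_common(vocab_size)]

# Toy token streams standing in for tokenized DOM snapshots
streams = [["meta", "charset", "utf-8"], ["meta", "title", "hello"]]
vocab = build_vocab(streams, 3)
print(vocab)
```

Tokens that fall outside the cap would map to an unknown token at encoding time, which is why vocabulary size trades model size against coverage.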