# tokenaligner

A lightweight utility to align NER labels with tokenized input for Hugging Face models.

**tokenaligner** is a simple, lightweight Python package for aligning NER tags with the subwords produced by Hugging Face tokenizers. It is especially useful when preparing datasets for training Named Entity Recognition (NER) models with Hugging Face Transformers.
## ✨ Features

- Aligns word-level NER tags with subword tokenization
- Handles padding and truncation options
- Supports Hugging Face Datasets for training
- Batch tokenization support for faster preprocessing
## 📦 Installation

```bash
pip install tokenaligner
```
## 🚀 Quick Start

### 1. Tokenize and Align NER Tags

```python
from tokenaligner import TokenAligner
from transformers import AutoTokenizer

# Sample data: word-level tokens with their NER tags
tokens = [["Hugging", "Face", "is", "based", "in", "New", "York", "."]]
tags = [["B-ORG", "I-ORG", "O", "O", "O", "B-LOC", "I-LOC", "O"]]

# Load a tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Tokenize and align
aligner = TokenAligner()
aligned = aligner.tokenize_and_align(
    tokens_list=tokens,
    tags_list=tags,
    tokenizer=tokenizer,
    label_all_tokens=True,
    return_hf_dataset=False,
)

print(aligned[0]["input_ids"])
print(aligned[0]["labels"])
```
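Alignment like this is conventionally built on the tokenizer's `word_ids()` mapping, which tells you which original word each subword came from. The sketch below illustrates that common convention in plain Python (the `word_ids` list is hard-coded so no tokenizer is needed); it is an illustration of the general technique, not necessarily tokenaligner's internal code:

```python
# Common word-to-subword label alignment convention: special tokens get
# -100 (ignored by PyTorch's cross-entropy loss); each subword inherits
# the label of the word it came from, or -100 for continuation subwords
# when label_all_tokens is False.
def align_labels(word_ids, word_labels, label_all_tokens=True):
    aligned = []
    previous = None
    for wid in word_ids:
        if wid is None:                  # special token ([CLS], [SEP], padding)
            aligned.append(-100)
        elif wid != previous:            # first subword of a word
            aligned.append(word_labels[wid])
        else:                            # continuation subword
            aligned.append(word_labels[wid] if label_all_tokens else -100)
        previous = wid
    return aligned

# "Hugging Face" tokenized as [CLS] Hu ##gging Face [SEP]:
# word 0 ("Hugging") splits into two subwords.
word_ids = [None, 0, 0, 1, None]
labels = ["B-ORG", "I-ORG"]
print(align_labels(word_ids, labels))
# [-100, 'B-ORG', 'B-ORG', 'I-ORG', -100]
```

With `label_all_tokens=False`, the continuation subword `##gging` would instead receive `-100`, so the loss is computed only on the first subword of each word.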
### 2. Return a Hugging Face Dataset

```python
hf_dataset = aligner.tokenize_and_align(
    tokens_list=tokens,
    tags_list=tags,
    tokenizer=tokenizer,
    return_hf_dataset=True,
)

print(hf_dataset[0])
```
### 3. Use with the Trainer API

```python
from transformers import AutoModelForTokenClassification, Trainer, TrainingArguments

model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", num_labels=9)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./results", evaluation_strategy="epoch"),
    train_dataset=hf_dataset,
    eval_dataset=hf_dataset,
)

trainer.train()
```
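The `num_labels=9` above suggests a CoNLL-2003-style tag set (`O` plus `B-`/`I-` variants of four entity types). The model works with integer class ids, so string tags need a mapping; a hypothetical one (adjust the label list to your own data) could look like:

```python
# Hypothetical CoNLL-style label set; replace with the tags in your dataset.
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
          "B-LOC", "I-LOC", "B-MISC", "I-MISC"]
label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for label, i in label2id.items()}

# Convert string tags to the integer ids the model trains on.
tags = ["B-ORG", "I-ORG", "O", "B-LOC"]
print([label2id[t] for t in tags])
# [3, 4, 0, 5]
```

Passing `id2label=id2label, label2id=label2id` to `from_pretrained` (a standard Transformers option) makes the trained model report human-readable tags at inference time.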
## ⚙️ Parameters

`TokenAligner.tokenize_and_align(...)`

| Parameter | Type | Description |
|---|---|---|
| `tokens_list` | `List[List[str]]` | List of tokenized sentences |
| `tags_list` | `List[List[str]]` | Corresponding entity tags |
| `tokenizer` | `PreTrainedTokenizer` | Hugging Face tokenizer |
| `batch_size` | `int` | Batch size for tokenization (default: `1000`) |
| `padding` | `bool` | Apply padding (default: `False`) |
| `truncation` | `bool` | Apply truncation (default: `False`) |
| `label_all_tokens` | `bool` | Label all subtokens with the same label (default: `False`) |
| `return_hf_dataset` | `bool` | Return a Hugging Face `Dataset` object (default: `False`) |
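The `batch_size` parameter presumably processes the input sentences in fixed-size chunks rather than one at a time, which is how Hugging Face fast tokenizers achieve their speedup. A minimal sketch of that kind of chunking (an illustration, not tokenaligner's actual code):

```python
# Yield successive fixed-size batches from a list, as batched
# tokenization would consume them. The final batch may be smaller.
def chunks(items, batch_size=1000):
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

sentences = [["Hello", "world"]] * 5
print([len(batch) for batch in chunks(sentences, batch_size=2)])
# [2, 2, 1]
```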
## 📬 Contributing

Pull requests are welcome! For major changes, please open an issue first to discuss what you'd like to change.

## 📄 License

## 🙌 Acknowledgements

Built on top of Hugging Face Transformers and Datasets.
## File details

Details for the file `tokenaligner-0.1.1.tar.gz`.

### File metadata

- Download URL: tokenaligner-0.1.1.tar.gz
- Upload date:
- Size: 4.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.8.6

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `a289bb0c0dc5e469d0e8dcf13298a454c1927df98b8a4c956e0fabb994ba6258` |
| MD5 | `d16f96882822057ce05b3d2481f52e1c` |
| BLAKE2b-256 | `ab86ed7f6d198204dee37a2dbaf3fd960a20193334e99a6282bfa04985d01b43` |
## File details

Details for the file `tokenaligner-0.1.1-py3-none-any.whl`.

### File metadata

- Download URL: tokenaligner-0.1.1-py3-none-any.whl
- Upload date:
- Size: 4.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.8.6

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `a28518e650ae69d121fa65ae0eed2853010e047c433cad7c3155f43b74e69b85` |
| MD5 | `c23bf9a87015e5c6d69588dfe1c61ab5` |
| BLAKE2b-256 | `3cd7a8ae4fd99ad05a1b0a941c3cb4d9a9964243d02831234423fa5d39d019da` |