Generalist and Lightweight Model for Text Classification
⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
GLiClass is an efficient, zero-shot sequence classification model inspired by the GLiNER framework. It achieves comparable performance to traditional cross-encoder models while being significantly more computationally efficient, offering classification results approximately 10 times faster by performing classification in a single forward pass.
📄 Blog • 📢 Discord • 📺 Demo • 🤗 Available models
🚀 Quick Start
Install GLiClass easily using pip:
```shell
pip install gliclass
```
Install from Source
Clone and install directly from GitHub:
```shell
git clone https://github.com/Knowledgator/GLiClass
cd GLiClass
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
pip install .
```
Verify your installation:
```python
import gliclass
print(gliclass.__version__)
```
🧑‍💻 Usage Example
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer

model = GLiClassModel.from_pretrained("knowledgator/gliclass-small-v1.0")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-small-v1.0")

pipeline = ZeroShotClassificationPipeline(
    model, tokenizer, classification_type='multi-label', device='cuda:0'
)

text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]

for result in results:
    print(f"{result['label']} => {result['score']:.3f}")
```
🔥 New Features
Hierarchical Labels
GLiClass now supports hierarchical label structures using dot notation:
```python
hierarchical_labels = {
    "sentiment": ["positive", "negative", "neutral"],
    "topic": ["product", "service", "shipping"]
}

text = "The product quality is amazing but delivery was slow"
results = pipeline(text, hierarchical_labels, threshold=0.5)[0]

for result in results:
    print(f"{result['label']} => {result['score']:.3f}")
# Output:
# sentiment.positive => 0.892
# topic.product => 0.921
# topic.shipping => 0.763
```
Get hierarchical output matching your input structure:
```python
results = pipeline(text, hierarchical_labels, return_hierarchical=True)[0]
print(results)
# Output:
# {
#     "sentiment": {"positive": 0.892, "negative": 0.051, "neutral": 0.124},
#     "topic": {"product": 0.921, "service": 0.153, "shipping": 0.763}
# }
```
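Conceptually, the dot notation flattens the nested label dict into strings like `sentiment.positive` before classification. A minimal sketch of that flattening (an illustration only, not the library's internal code):

```python
def flatten_hierarchical_labels(hierarchical_labels):
    """Flatten {"group": [labels]} into "group.label" dot-notation strings."""
    return [
        f"{group}.{label}"
        for group, labels in hierarchical_labels.items()
        for label in labels
    ]

hierarchical_labels = {
    "sentiment": ["positive", "negative", "neutral"],
    "topic": ["product", "service", "shipping"],
}
print(flatten_hierarchical_labels(hierarchical_labels))
# ['sentiment.positive', 'sentiment.negative', 'sentiment.neutral',
#  'topic.product', 'topic.service', 'topic.shipping']
```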
Few-Shot Examples
Improve classification accuracy with in-context examples using the <<EXAMPLE>> token:
```python
examples = [
    {
        "text": "Love this item, great quality!",
        "labels": ["positive", "product"]
    },
    {
        "text": "Customer support was unhelpful",
        "labels": ["negative", "service"]
    }
]

text = "Fast delivery and the item works perfectly!"
labels = ["positive", "negative", "product", "service", "shipping"]
results = pipeline(text, labels, examples=examples, threshold=0.5)[0]

for result in results:
    print(f"{result['label']} => {result['score']:.3f}")
```
Task Description Prompts
Add custom prompts to guide the classification task:
```python
text = "The battery life on this phone is incredible"
labels = ["positive", "negative", "neutral"]
results = pipeline(
    text,
    labels,
    prompt="Classify the sentiment of this product review:",
    threshold=0.5
)[0]
```
Use per-text prompts for batch processing:
```python
texts = ["Review about electronics", "Review about clothing"]
prompts = [
    "Analyze this electronics review:",
    "Analyze this clothing review:"
]
results = pipeline(texts, labels, prompt=prompts)
```
Long Document Classification
Process long documents with automatic text chunking:
```python
from gliclass import ZeroShotClassificationWithChunkingPipeline

chunking_pipeline = ZeroShotClassificationWithChunkingPipeline(
    model,
    tokenizer,
    text_chunk_size=8192,
    text_chunk_overlap=256,
    labels_chunk_size=8
)

long_document = "..."  # Very long text
labels = ["category1", "category2", "category3"]
results = chunking_pipeline(long_document, labels, threshold=0.5)
```
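The `text_chunk_size` / `text_chunk_overlap` parameters describe a sliding-window split over the document. A minimal sketch of what overlapping chunking looks like (an illustration of the idea, not the pipeline's internal implementation, and operating on characters rather than tokens for simplicity):

```python
def chunk_text(text, chunk_size, overlap):
    """Split text into windows of chunk_size where consecutive windows
    share `overlap` units, so no context is lost at chunk boundaries."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("a" * 20000, chunk_size=8192, overlap=256)
print(len(chunks))     # 3
print(len(chunks[0]))  # 8192
```

Per-chunk predictions for the same label are then aggregated (e.g. by taking the maximum score) to produce one result per label for the whole document.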
🌟 Retrieval-Augmented Classification (RAC)
Models trained with retrieval-augmented classification can take labeled examples at inference time to improve accuracy:
```python
example = {
    "text": "A new machine learning platform automates complex data workflows but faces integration issues.",
    "all_labels": ["AI", "automation", "data_analysis", "usability", "integration"],
    "true_labels": ["AI", "integration", "automation"]
}

text = "The new AI-powered tool streamlines data analysis but has limited integration capabilities."
labels = ["AI", "automation", "data_analysis", "usability", "integration"]
results = pipeline(text, labels, threshold=0.1, rac_examples=[example])[0]

for predict in results:
    print(f"{predict['label']} => {predict['score']:.3f}")
```
🎯 Key Use Cases
- Sentiment Analysis: Rapidly classify texts as positive, negative, or neutral.
- Document Classification: Efficiently organize and categorize large document collections.
- Search Results Re-ranking: Improve relevance and precision by reranking search outputs.
- News Categorization: Automatically tag and organize news articles into predefined categories.
- Fact Checking: Quickly validate and categorize statements based on factual accuracy.
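For search-results re-ranking, the per-document scores returned by the pipeline can be sorted directly. A minimal sketch assuming hypothetical, pre-computed pipeline outputs (the documents and score values are made up for illustration):

```python
# Hypothetical relevance scores, e.g. from classifying each candidate
# document against a query-derived label.
candidates = [
    {"doc": "Zero-shot NER overview", "score": 0.41},
    {"doc": "GLiClass release notes", "score": 0.87},
    {"doc": "Unrelated cooking blog", "score": 0.05},
]

# Re-rank: highest classification score first.
reranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
for c in reranked:
    print(f"{c['score']:.2f}  {c['doc']}")
```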
🛠️ How to Train
Prepare your training data as follows:
```json
[
    {"text": "Sample text.", "all_labels": ["sports", "science", "business"], "true_labels": ["sports"]},
    ...
]
```
Optionally, specify confidence scores explicitly:
```json
[
    {"text": "Sample text.", "all_labels": ["sports", "science"], "true_labels": {"sports": 0.9}},
    ...
]
```
Please refer to the train.py script to set up training from scratch or fine-tune existing models.
⚙️ Advanced Configuration
Architecture Types
GLiClass supports multiple architecture types:
- uni-encoder: Single encoder for both text and labels (default, most efficient)
- bi-encoder: Separate encoders for text and labels
- bi-encoder-fused: Bi-encoder with label embeddings fused into text encoding
- encoder-decoder: Encoder-decoder architecture for sequence-to-sequence tasks
```python
from gliclass import GLiClassBiEncoder

# Load a bi-encoder model
model = GLiClassBiEncoder.from_pretrained("knowledgator/gliclass-biencoder-v1.0")
```
Pooling Strategies
Configure how token embeddings are pooled:
- first: First token (CLS token)
- avg: Average pooling
- max: Max pooling
- last: Last token
- sum: Sum pooling
- rms: Root mean square pooling
- abs_max: Max of absolute values
- abs_avg: Average of absolute values
```python
from gliclass import GLiClassModelConfig

config = GLiClassModelConfig(
    pooling_strategy='avg',
    class_token_pooling='average'  # or 'first'
)
```
Scoring Mechanisms
Choose different scoring mechanisms for classification:
- simple: Dot product (fastest)
- weighted-dot: Weighted dot product with learned projections
- mlp: Multi-layer perceptron scorer
- hopfield: Hopfield network-based scorer
```python
config = GLiClassModelConfig(
    scorer_type='mlp'
)
```
Flash Attention Backends
GLiClass supports optional flash attention backends for faster inference.
Install
```shell
pip install flashdeberta  # DeBERTa v2
pip install turbot5       # T5 / mT5
```
FlashDeBERTa (DeBERTa v2)
Enable via environment variable:
```shell
export USE_FLASHDEBERTA=1
```
If flashdeberta is installed, DeBERTa v2 models will use FlashDebertaV2Model.
Otherwise, GLiClass falls back to DebertaV2Model.
TurboT5 (T5 / mT5)
Enable via environment variable:
```shell
export TURBOT5_ATTN_TYPE=triton-basic
```
If turbot5 is installed, T5 / mT5 models will use FlashT5EncoderModel.
Otherwise, GLiClass falls back to T5EncoderModel.
Notes:
- Flash backends are optional
- Enabled automatically when available
- No code changes required
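If you prefer to configure this from Python rather than the shell, the environment variable can be set before the model is loaded (a sketch; the variable name comes from the section above):

```python
import os

# Must be set before the GLiClass model is instantiated so the
# DeBERTa v2 backbone is loaded with the flash attention backend.
os.environ["USE_FLASHDEBERTA"] = "1"

print(os.environ["USE_FLASHDEBERTA"])  # 1
```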
📚 Citations
If you find GLiClass useful in your research or project, please cite our paper:
```bibtex
@misc{stepanov2025gliclassgeneralistlightweightmodel,
    title={GLiClass: Generalist Lightweight Model for Sequence Classification Tasks},
    author={Ihor Stepanov and Mykhailo Shtopko and Dmytro Vodianytskyi and Oleksandr Lukashov and Alexander Yavorskyi and Mykyta Yaroshenko},
    year={2025},
    eprint={2508.07662},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
    url={https://arxiv.org/abs/2508.07662},
}
```
File details
Details for the file gliclass-0.1.14.tar.gz.
File metadata
- Download URL: gliclass-0.1.14.tar.gz
- Upload date:
- Size: 45.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.2
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `476c612b70ef31c6c73493360339e08b2aa51012ef064ace8c198732c39933ce` |
| MD5 | `15fe9c8ad3ce5f536e7843b3fe05f32a` |
| BLAKE2b-256 | `5c3db0a4531fb1a2621c9e9ae5061f3df25e402e5265b9caff39651487791695` |
File details
Details for the file gliclass-0.1.14-py3-none-any.whl.
File metadata
- Download URL: gliclass-0.1.14-py3-none-any.whl
- Upload date:
- Size: 45.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.2
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `8f8bccb055a1299137eacc5bc0cbd7df8e336866d55f6731b5c8f08de0ab4697` |
| MD5 | `36964f5b81092766f5b3cf403aac492d` |
| BLAKE2b-256 | `79736e14682dfa572d3d43880e20869203e59689ee235c9a2accdd05f7f9f5bc` |