Toxic Spans Prediction
Project description
HateSpans
We provide state-of-the-art models to detect toxic spans in text. We have evaluated our models on the Toxic Spans task at SemEval-2021 (Task 5).
Installation
You first need to install PyTorch. The recommended PyTorch version is 1.6. Please refer to the PyTorch installation page for the specific install command for your platform.
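For a quick start on a CPU-only machine, the recommended version can usually be installed directly from pip (for GPU builds, use the platform-specific command from the PyTorch installation page instead):

```shell
# Install the recommended PyTorch version (CPU build; GPU builds need
# the index URL given on the PyTorch installation page for your CUDA version)
pip install torch==1.6.0
```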
Once PyTorch is installed, you can install HateSpans from pip.
From pip
pip install hatespans
Pretrained HateSpans Models
We will keep releasing new models, so stay tuned.
Models | Average F1
---|---
small | 0.6652
Prediction
The following code can be used to predict toxic spans in a text. On first execution, it will download the relevant model and then return the toxic spans.
from hatespans.app.hate_spans_app import HateSpansApp
app = HateSpansApp("small", use_cuda=False)
print(app.predict_hate_spans("You motherfucking cunt", spans=True))
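In the SemEval-2021 Task 5 format, toxic spans are given as a sorted list of character offsets into the input text. Assuming `predict_hate_spans` returns offsets in that format (an assumption based on the task, not confirmed by the docs shown here), a small helper can turn the offsets back into the offending substrings:

```python
def offsets_to_substrings(text, offsets):
    """Group consecutive character offsets into (start, end) spans and
    return the covered substrings.

    `offsets` is a sorted list of character indices, as in the
    SemEval-2021 Task 5 toxic-spans format.
    """
    spans = []
    start = prev = None
    for i in offsets:
        if prev is None or i != prev + 1:
            # A gap in the offsets starts a new span
            if start is not None:
                spans.append((start, prev + 1))
            start = i
        prev = i
    if start is not None:
        spans.append((start, prev + 1))
    return [text[a:b] for a, b in spans]

# Hypothetical offsets covering "fox" and "dog" in the sentence below
text = "the quick fox bit the lazy dog"
print(offsets_to_substrings(text, [10, 11, 12, 27, 28, 29]))  # ['fox', 'dog']
```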
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
hate_spans-0.1.0b0.tar.gz (39.4 kB)
Built Distribution
Hashes for hate_spans-0.1.0b0-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 5872ca84c7ed2607c827327adc9dbc871084583f5898c207bc298d86ef22f3a5
MD5 | 0f253b5e75c4a227f2831958b26ea69a
BLAKE2b-256 | a7fc108b2fb999de672d07a4fbfd812f183e5651bc86c10975d277ba00a0e7d8