Module for determining the toxicity level of a text
Project description
ToxicityClassificator
Module for predicting the toxicity of messages in Russian and English
Usage example
from toxicityclassifier import *
classifier = ToxicityClassificatorV1()
text = "Any Russian or English message"    # example text to classify (placeholder)
print(classifier.predict(text))            # (0 or 1, probability)
print(classifier.get_probability(text))    # probability
print(classifier.classify(text))           # 0 or 1
Weights
Weight for classification: if probability >= weight, the result is 1, otherwise 0
classifier.weight = 0.5
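For example, a text whose probability is 0.6 is labeled toxic at the default threshold of 0.5 but not at a stricter 0.7 (a minimal sketch; the 0.6 probability is only illustrative):
classifier.weight = 0.5
print(classifier.classify(text))  # 1 if get_probability(text) is 0.6, since 0.6 >= 0.5
classifier.weight = 0.7
print(classifier.classify(text))  # 0 in that case, since 0.6 < 0.7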
Weight for language detection (English or Russian): if the proportion of Russian in the text is >= language_weight, the Russian model is used; otherwise the English one
classifier.language_weight = 0.5
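For example, lowering language_weight makes the Russian model kick in for a mostly-English text that contains some Russian (a minimal sketch; the mixed-language string and its Russian proportion are only illustrative):
mixed = "this is не очень хорошо"      # illustrative mixed Russian/English text
classifier.language_weight = 0.8
print(classifier.predict(mixed))       # English model, if the Russian proportion is below 0.8
classifier.language_weight = 0.1
print(classifier.predict(mixed))       # Russian model, once the Russian proportion reaches 0.1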
Hashes for toxicityclassifier-1.0.1-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 8bf5a2ba26447cbd96875bb00112cf5b6bb84d2a4bbf1c52225cfb9031dc3f89
MD5 | dc3ed9369985affb3eeb20c7a0e69b13
BLAKE2b-256 | 27a1863dc2e2c1be13abc6dfec7bd3fb7873b9c253ca1d3e10fd6d509d9824ad