Project description
ToxicityClassificator
A module for predicting the toxicity of messages in Russian and English.
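The package can be installed from PyPI; the package name here is inferred from the distribution files listed under Download files below:

pip install toxicityclassifier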
Usage example

from toxicityclassifier import ToxicityClassificator

classifier = ToxicityClassificator()
text = "Your message here"  # the text to classify

print(classifier.predict(text))          # returns (0 or 1, probability)
print(classifier.get_probability(text))  # returns the probability only
print(classifier.classify(text))         # returns 0 or 1 only
Weights
Classification threshold: if the predicted probability >= weight, the message is classified as 1, otherwise 0.

classifier.weight = 0.5

Language detection threshold (English or Russian): if the share of Russian text is >= language_weight, the Russian model is used; otherwise the English one.
classifier.language_weight = 0.5
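For example, raising the classification threshold makes the classifier flag a message only at higher confidence (a minimal sketch; the sample text and the probabilities in the comments are illustrative assumptions, not real model outputs):

from toxicityclassifier import ToxicityClassificator

classifier = ToxicityClassificator()
text = "some offensive message"  # hypothetical input

# With the default threshold of 0.5, a probability of, say, 0.7 yields 1
print(classifier.classify(text))

# With a stricter threshold, that same probability of 0.7 now yields 0
classifier.weight = 0.9
print(classifier.classify(text))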
Download files
Source Distribution
toxicityclassifier-0.1.10.tar.gz

Built Distribution
toxicityclassifier-0.1.10-py3-none-any.whl
Hashes for toxicityclassifier-0.1.10.tar.gz

Algorithm | Hash digest
--- | ---
SHA256 | 5ed4882266244f9930b92f1fe006ebb99b19f461e47b8b0a65704f404ff94e9c
MD5 | 1f48c271aa2de280f33814817c6b12af
BLAKE2b-256 | 9f5392582683b28ff55f2a55099b4c299d5d5760fd039e74ef598271b48ad556
Hashes for toxicityclassifier-0.1.10-py3-none-any.whl

Algorithm | Hash digest
--- | ---
SHA256 | 890a0538485d1146018e1cb1fc6bf3dbaa4a6db47f9de91970a2cc5dfa0eac1d
MD5 | 3414bba74a02ed3604b5469d7522216a
BLAKE2b-256 | 3df27dcfdadcd502e08c911de916b8e678fe45f78da38762a4aa7052d5564484
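To verify that a downloaded file matches the published digests, you can compute the hash locally (a minimal sketch using Python's standard hashlib; it assumes the file was downloaded into the current directory):

import hashlib

# Compute the SHA256 digest of the source distribution and compare it
# to the value published above
with open("toxicityclassifier-0.1.10.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

expected = "5ed4882266244f9930b92f1fe006ebb99b19f461e47b8b0a65704f404ff94e9c"
print(digest == expected)  # True if the download is intact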