
A package that detects and replaces curse words in sentences using deep learning

Project description

anti-cursing

"anti-cursing" is a python package that detects and switches negative or any kind of cursing word from sentences or comments whatever๐Ÿคฌ

You install the package the same way you install any other package, and then you can use it in your code.

The whole thing will be updated soon; this is just the very first version of the idea.

You can find the package on PyPI: https://pypi.org/project/anti-cursing/0.0.1/

๐Ÿ™๐ŸปPlz bare with the program to install model's weight and bias from huggingface at the first time you use the package.


Concept

There are often situations where you need to detect a forbidden word in some text and change it to another word. Hardcoding every case is very inconvenient, and the Python ecosystem has many packages that address this. One of them is "anti-cursing".

The package, which works only for Korean, does not rely on a preset list of banned words; instead, it detects and replaces banned words using a trained deep learning model.

Therefore, it can easily cope with newly coined abusive words, as long as they are learned. For this purpose, semi-supervised learning via pseudo-labeling is used.
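The README does not include the training code, but the pseudo-labeling idea can be sketched roughly as follows. Everything here (the scikit-learn classifier, the 0.95 confidence threshold, the toy arrays standing in for sentence embeddings) is an illustrative assumption, not the package's actual implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_round(clf, X_lab, y_lab, X_unlab, threshold=0.95):
    # 1. Train on the hand-labeled data.
    clf.fit(X_lab, y_lab)
    # 2. Predict the unlabeled data and keep only confident predictions.
    probs = clf.predict_proba(X_unlab)
    confident = probs.max(axis=1) >= threshold
    pseudo_y = clf.classes_[probs.argmax(axis=1)][confident]
    # 3. Retrain on the union of real labels and pseudo-labels.
    X_all = np.vstack([X_lab, X_unlab[confident]])
    y_all = np.concatenate([y_lab, pseudo_y])
    return clf.fit(X_all, y_all)

# Toy data standing in for sentence embeddings of labeled and unlabeled comments.
rng = np.random.default_rng(0)
model = pseudo_label_round(
    LogisticRegression(max_iter=1000),
    rng.normal(size=(100, 16)), rng.integers(0, 2, size=100),
    rng.normal(size=(500, 16)),
)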

Additionally, instead of masking abusive words with special characters such as --- or ***, you can convert them into emojis, which reads more naturally.
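As a rough illustration of the replacement step only (the word list and helper name below are made up for this example; the package's real detection is model-based, as described above):

def mask_with_emoji(sentence, flagged_words, emoji="👼🏻"):
    # Swap every detected curse word for an emoji instead of "***".
    for word in flagged_words:
        sentence = sentence.replace(word, emoji)
    return sentence

print(mask_with_emoji("나는 너가 좋지만, 너는 너무 개새끼야", ["개새끼"]))
# 나는 너가 좋지만, 너는 너무 👼🏻야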


Installation

You can install the package using pip:

pip install anti-cursing

It doesn't work yet, but it will soon!! 👨🏻‍💻

Usage

from anti_cursing.utils import antiCursing

antiCursing.anti_cur("나는 너가 좋지만, 너는 너무 개새끼야")
# output: 나는 너가 좋지만, 너는 너무 👼🏻야
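Building on the call above, cleaning a whole list of comments is just a loop over the same anti_cur method (the comments here are made-up examples):

from anti_cursing.utils import antiCursing

comments = [
    "나는 너가 좋지만, 너는 너무 개새끼야",
    "오늘 날씨 정말 좋네요",  # a harmless comment should pass through unchanged
]
cleaned = [antiCursing.anti_cur(comment) for comment in comments]
print(cleaned)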

Model comparison

| Classification | KcElectra | KoBERT | RoBERTa-base | RoBERTa-large |
|---|---|---|---|---|
| Validation Accuracy | 0.88680 | 0.85721 | 0.83421 | 0.86994 |
| Validation Loss | 1.00431 | 1.23237 | 1.30012 | 1.16179 |
| Training Loss | 0.09908 | 0.03761 | 0.0039 | 0.06255 |
| Epoch | 10 | 40 | 20 | 20 |
| Batch-size | 8 | 32 | 16 | 32 |
| transformers | beomi/KcELECTRA-base | skt/kobert-base-v1 | xlm-roberta-base | klue/roberta-large |
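The last row lists Hugging Face model IDs, so each backbone can be loaded with the transformers library. Below is a minimal sketch for the KcELECTRA entry; num_labels=2 is a guess (the table does not give the label set), and loading the base checkpoint like this gives a freshly initialized classification head rather than the fine-tuned weights the package downloads:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "beomi/KcELECTRA-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("나는 너가 좋지만, 너는 너무 개새끼야", return_tensors="pt")
logits = model(**inputs).logits  # scores are meaningless until the head is fine-tuned
print(logits.shape)              # torch.Size([1, 2])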

Dataset

Used API

Google translator
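The README does not say how the translator is wired into the package. Purely as an illustration, and assuming "Google translator" refers to the unofficial googletrans package, a translation call looks like this:

from googletrans import Translator  # e.g. pip install googletrans==4.0.0rc1

translator = Translator()
result = translator.translate("나는 너가 좋지만, 너는 너무 개새끼야", src="ko", dest="en")
print(result.text)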

License

This repository is licensed under the MIT license. See LICENSE for details.


Working example

---- a demo video will be placed here ----

References

Sentiment Analysis Based on Deep Learning : A Comparative Study

  • Nhan Cach Dang, Maria N. Moreno-Garcia, and Fernando De la Prieta. 2020. Sentiment Analysis Based on Deep Learning: A Comparative Study. Electronics, 9(3):483.

Attention is all you need

  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010.

BERT : Pre-training of Deep Bidirectional Transformers for Language Understanding

  • Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.

Electra : Pre-training Text Encoders as Discriminators Rather Than Generators

  • Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations (ICLR 2020).

BIDAF : Bidirectional Attention Flow for Machine Comprehension

  • Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional Attention Flow for Machine Comprehension. In International Conference on Learning Representations (ICLR 2017).

Effect of Negation in Sentences on Sentiment Analysis and Polarity Detection

  • Partha Mukherjee, Saptarshi Ghosh, and Saptarshi Ghosh. 2018. Effect of Negation in Sentences on Sentiment Analysis and Polarity Detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2129–2139.

KOAS : Korean Text Offensiveness Analysis System

  • Seonghwan Kim, Seongwon Lee, and Seungwon Do. 2019. KOAS: Korean Text Offensiveness Analysis System. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1–11.

Korean Unsmile Dataset

  • Seonghwan Kim, Seongwon Lee, and Seungwon Do. 2019. Korean Unsmile Dataset. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1–11.

Project status

80%

Future work

Updates are coming soon. Please bear with me 🙏🏻



