
A different approach from chardet/cChardet: this library's goal is to read human-written text from an unknown encoding and transpose it to Unicode as best we can.

Project description

Welcome to Charset for Human 👋

The Real First Universal Charset Detector

A library that helps you read human* written text from an unknown charset encoding.
Motivated by chardet, I'm trying to resolve the issue by taking another approach.

This project offers you an alternative to the Universal Charset Encoding Detector, also known as Chardet. As of July/August 2019, it is still an alpha release.

-------------------------------------------------------------------------------------
Feature                                            Chardet  Charset Normalizer  cChardet
-------------------------------------------------------------------------------------
Fast                                               🐌🐌     🐌                  ⚡
Universal**                                        ❌       ✅                  ❌
Reliable without distinguishable standards         ❌       ✅                  ✅
Reliable with distinguishable standards            ✅       ✅                  ✅
Free & Open                                        ✅       ✅                  ✅
Native Python                                      ✅       ✅                  ❌
Does not have specific code for specific charset   ❌       ✅                  ❌
-------------------------------------------------------------------------------------


Chardet/cChardet have weaknesses where Charset Normalizer does not, and vice versa. You could combine the strengths of both libraries to reach near-perfect detection. 💪

* : When written by a human, the text should not be gibberish.
** : They clearly use specific code for a specific charset, even if they cover most existing ones.

Your support

Please ⭐ this repository if this project helped you!

✨ Installation

Using PyPI

pip install charset_normalizer

🚀 Basic Usage

Just print out normalized text

from charset_normalizer import CharsetNormalizerMatches as CnM

matches = CnM.from_path('./my_subtitle.srt')

if len(matches) > 0:
    print(
        str(matches.best().first())
    )

Convert any text file to UTF-8

from charset_normalizer import CharsetNormalizerMatches as CnM
try:
    CnM.normalize('./my_subtitle.srt') # should write to disk my_subtitle-***.srt
except IOError as e:
    print('Sadly, we are unable to perform charset normalization.', str(e))

See the wiki for advanced usage. (To do; not yet available.)

😇 Why

When I started using Chardet, I noticed that this library was wrong most of the time when the text was not Unicode, GB, or Big5. That is because some charsets are easily identifiable thanks to their standards, and Chardet does a really good job at identifying them.

I don't care about the originating charset encoding, because two different tables can produce two identical files. What I want is to get readable text, as best I can.

In a way, I'm brute forcing text decoding. How cool is that? 😎

🍰 How

  • Discard all charset encoding tables that could not fit the binary content.
  • Measure the chaos, or mess, once the content is opened with a given charset encoding.
  • Keep the matches with the lowest mess detected.
  • Finally, if too many matches are left, measure coherence (a minimal sketch of this pipeline follows the list).
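
Below is a minimal, illustrative sketch of that pipeline in plain Python. It is not the library's implementation: the short candidate list and the measure_chaos heuristic are assumptions made up for this example, and the real project uses far more elaborate rules.

import unicodedata

# Hypothetical, deliberately tiny candidate list; the real project considers many more tables.
CANDIDATES = ['utf_8', 'cp1252', 'latin_1', 'cp1251']

def measure_chaos(text):
    # Toy stand-in for the real chaos measurement: long runs of accented Latin
    # letters often betray a wrong single-byte decoding.
    score, previous_accented = 0, False
    for character in text:
        accented = character.isalpha() and unicodedata.decomposition(character) != ''
        if accented and previous_accented:
            score += 1
        previous_accented = accented
    return score / max(len(text), 1)

def brute_force_decode(payload):
    results = []
    for encoding in CANDIDATES:
        try:
            text = payload.decode(encoding)      # 1. discard tables that cannot fit the bytes
        except UnicodeDecodeError:
            continue
        results.append((measure_chaos(text), encoding, text))  # 2. measure the mess
    return sorted(results)                       # 3. lowest mess first

payload = 'Комедия ошибок'.encode('cp1251')      # sample bytes of unknown (to us) origin
print(brute_force_decode(payload)[0])            # the cp1251 candidate wins here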

Wait a minute, what are chaos/mess and coherence according to YOU?

Chaos: I opened hundreds of text files, written by humans, with the wrong encoding table. Then I observed them and established some ground rules about what obviously looks like a mess. I know that my interpretation of what is chaotic is very subjective; feel free to contribute in order to improve or rewrite it.

Coherence: For each language on earth (as best we can), we have computed a ranking of letter appearance frequencies. I thought that intel would be worth something here, so I use those records against the decoded text to check whether I can detect intelligent design.
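
As an illustration only, coherence can be approximated by checking how well the most frequent letters of a decoded text overlap with a language's known frequency ranking. The miniature English ranking below is an assumption for this sketch, not the data set the project actually ships.

from collections import Counter

# Hypothetical miniature ranking; the real project ships per-language frequency tables.
ENGLISH_TOP_LETTERS = ['e', 't', 'a', 'o', 'i', 'n', 's', 'r', 'h', 'l']

def measure_coherence(text, ranking=ENGLISH_TOP_LETTERS):
    letters = [c.lower() for c in text if c.isalpha()]
    if not letters:
        return 0.0
    observed = [letter for letter, _ in Counter(letters).most_common(len(ranking))]
    # Share of the language's top letters that also appear among the text's top letters.
    return len(set(observed) & set(ranking)) / len(ranking)

print(measure_coherence('The quick brown fox jumps over the lazy dog'))  # fairly high
print(measure_coherence('Êîìåäèÿ îøèáîê'))                               # low: looks like mojibake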

👤 Contributing

Contributions, issues, and feature requests are very much welcome.
Feel free to check the issues page if you want to contribute.

📝 License

Copyright © 2019 Ahmed TAHRI @Ousret.
This project is MIT licensed.

Letter appearance frequencies used in this project © 2012 Denny Vrandečić

LoC

It is always possible to make a difference in this world. I was told it was impossible to propose a real alternative to Chardet / uChardet in terms of conception.

Using the cloc tool on the master branch of each project:

Chardet (Python)

-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          42            491           1458          36112
-------------------------------------------------------------------------------
SUM:                            42            491           1458          36112
-------------------------------------------------------------------------------

uChardet (C++)

-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C++                             51            740           2958           6927
C/C++ Header                    22            286           1039            876
CMake                            4             30              8            234
-------------------------------------------------------------------------------
SUM:                            77           1056           4005           8037
-------------------------------------------------------------------------------

Charset Normalizer (Python)

-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                           6            170            155            977
-------------------------------------------------------------------------------
SUM:                             6            170            155            977
-------------------------------------------------------------------------------

