
The Real First Universal Charset Detector. No Cpp Bindings, Using Voodoo and Magical Artifacts.

Project description

Charset Detection, for Everyone 👋

The Real First Universal Charset Detector

A library that helps you read text from an unknown charset encoding.
Motivated by chardet, I'm trying to solve the same problem by taking a new approach. All IANA character set names for which the Python core library provides codecs are supported.

>>>>> ❤️ Try Me Online Now, Then Adopt Me ❤️ <<<<<

This project offers you an alternative to Universal Charset Encoding Detector, also known as Chardet.

Compared features: Fast, Universal**, Reliable without distinguishable standards, Reliable with distinguishable standards, Free & Open, Native Python, and Detect spoken language (N/A for cChardet).

                     Chardet     Charset Normalizer   cChardet
License              LGPL-2.1    MIT                  MPL-1.1
Supported Encoding   30          🎉 90                40

Package              Accuracy   Mean per file (ns)   File per sec (est)
chardet              93.5 %     126 081 168          7.931
cchardet             97.0 %     1 668 145            599.468
charset-normalizer   97.25 %    209 503 253          4.773
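
The last column is simply the reciprocal of the mean time per file: for example, 1 000 000 000 ns / 126 081 168 ns ≈ 7.93 files per second for chardet.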


** : They clearly rely on code written for specific encodings, even if that covers most of the encodings in common use.

Your support

Please ⭐ this repository if this project helped you!

✨ Installation

Using pip (from PyPI)

pip install charset_normalizer

🚀 Basic Usage

CLI

This package comes with a CLI, invoked as normalizer:

usage: normalizer [-h] [--verbose] [--normalize] [--replace] [--force]
                  file [file ...]
normalizer ./data/sample.1.fr.srt
+----------------------+----------+----------+------------------------------------+-------+-----------+
|       Filename       | Encoding | Language |             Alphabets              | Chaos | Coherence |
+----------------------+----------+----------+------------------------------------+-------+-----------+
| data/sample.1.fr.srt |  cp1252  |  French  | Basic Latin and Latin-1 Supplement | 0.0 % |  84.924 % |
+----------------------+----------+----------+------------------------------------+-------+-----------+
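
Judging from the usage line above, the --normalize flag should be the CLI counterpart of the Python normalize() call shown below, writing a re-encoded copy of the file to disk (example output omitted):

normalizer --normalize ./data/sample.1.fr.srt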

Python

Just print out normalized text

from charset_normalizer import CharsetNormalizerMatches as CnM
print(CnM.from_path('./my_subtitle.srt').best().first())

Normalize any text file

from charset_normalizer import CharsetNormalizerMatches as CnM
try:
    CnM.normalize('./my_subtitle.srt') # should write to disk my_subtitle-***.srt
except IOError as e:
    print('Sadly, we are unable to perform charset normalization.', str(e))

Upgrade your code without effort

from charset_normalizer import detect

The detect function imported above behaves the same as chardet's, so it can serve as a drop-in replacement.
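
A minimal sketch of that swap, assuming a chardet-style result dictionary (the sample text and variable names are illustrative):

from charset_normalizer import detect  # previously: from chardet import detect

payload = 'Bonjour, où êtes-vous ? Ceci est un petit fichier texte encodé en cp1252.'.encode('cp1252')
result = detect(payload)  # chardet-style dict; at minimum an 'encoding' key is expected
print(result['encoding'])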

See the docs for advanced usage: readthedocs.io

😇 Why

When I started using Chardet, I noticed that it had become unreliable and was no longer maintained, and most likely never will be.

I don't care about the originating charset encoding, because two different tables can produce two identical files. What I want is readable text, the best I can get.

In a way, I'm brute-forcing text decoding. How cool is that? 😎

Don't confuse the ftfy package with charset-normalizer or chardet. ftfy's goal is to repair broken Unicode strings, whereas charset-normalizer converts a raw file in an unknown encoding to Unicode.

🍰 How

  • Discard all charset encoding tables that could not fit the binary content.
  • Measure chaos, or the mess, once the content is opened with a corresponding charset encoding.
  • Extract the matches with the lowest mess detected.
  • Finally, if there are too many matches left, measure coherence (a simplified sketch follows this list).
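
A very rough sketch of that brute-force idea, not the library's actual implementation; the candidate list and the toy mess score below are stand-ins for the real chaos measurement:

def naive_best_guess(payload, candidates=('utf_8', 'cp1252', 'latin_1', 'utf_16')):
    results = []
    for name in candidates:
        try:
            text = payload.decode(name)  # step 1: discard codecs that cannot fit the bytes
        except (UnicodeDecodeError, LookupError):
            continue
        # step 2: toy "chaos" score -- fraction of replacement/control characters
        noisy = sum(1 for ch in text if ch == '\ufffd' or (ord(ch) < 32 and ch not in '\r\n\t'))
        results.append((noisy / max(len(text), 1), name, text))
    # steps 3-4: keep the lowest-mess match (ties would then go to a coherence check)
    return min(results, default=None)

For example, naive_best_guess('héllo'.encode('cp1252')) discards utf_8 and utf_16 (they cannot decode those bytes) and returns the cp1252 match.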

Wait a minute, what are chaos/mess and coherence according to YOU?

Chaos: I opened hundreds of text files, written by humans, with the wrong encoding table. I observed, then established some ground rules about what obviously looks like a mess. I know my interpretation of what is chaotic is very subjective; feel free to contribute in order to improve or rewrite it.

Coherence: For each language on earth, we have computed ranked letter-appearance frequencies (as best we can). That intel is worth something here, so I use those records against the decoded text to check whether I can detect intelligent design.
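
As a toy illustration of that coherence idea (the abbreviated French letter ranking below is made up for the example and is not the project's actual frequency records):

from collections import Counter

# hypothetical, truncated ranking of the most frequent French letters (illustration only)
FRENCH_TOP_LETTERS = ['e', 'a', 's', 'i', 't', 'n', 'r', 'u', 'l', 'o']

def coherence_score(text, expected_top=FRENCH_TOP_LETTERS):
    # share of the text's most frequent letters that also appear in the expected ranking
    letters = [ch.lower() for ch in text if ch.isalpha()]
    if not letters:
        return 0.0
    observed_top = [letter for letter, _ in Counter(letters).most_common(len(expected_top))]
    return sum(1 for letter in observed_top if letter in expected_top) / len(observed_top)

A correctly decoded French text should score close to 1.0 against such a ranking, while a badly decoded one, whose most frequent letters are unusual, should score lower.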

⚡ Known limitations

  • Not intended to work on text that is not in a human-spoken language, e.g. encrypted text.
  • Language detection is unreliable when text contains two or more languages sharing identical letters.
  • Not well tested with tiny content.

👤 Contributing

Contributions, issues and feature requests are very much welcome.
Feel free to check the issues page if you want to contribute.

📝 License

Copyright © 2019 Ahmed TAHRI @Ousret.
This project is MIT licensed.

Letter-appearance frequencies used in this project © 2012 Denny Vrandečić

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

charset_normalizer-1.3.9.tar.gz (347.7 kB view details)

Uploaded Source

Built Distribution

charset_normalizer-1.3.9-py3-none-any.whl (34.3 kB view details)

Uploaded Python 3

File details

Details for the file charset_normalizer-1.3.9.tar.gz.

File metadata

  • Download URL: charset_normalizer-1.3.9.tar.gz
  • Upload date:
  • Size: 347.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.0.1 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.60.0 CPython/3.9.5

File hashes

Hashes for charset_normalizer-1.3.9.tar.gz
Algorithm Hash digest
SHA256 54425d9436c1cff46dfbb6b6598ac0a4c2d7b003d4787ab7daaf64528e458ed8
MD5 f4e79a1c16d9631091860180a02c9c79
BLAKE2b-256 d10a9f1d03ebd263a847cb71f177e2e497b46eb7f69b18542b5e414f7e202c02

See more details on using hashes here.

File details

Details for the file charset_normalizer-1.3.9-py3-none-any.whl.

File metadata

  • Download URL: charset_normalizer-1.3.9-py3-none-any.whl
  • Upload date:
  • Size: 34.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.0.1 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.60.0 CPython/3.9.5

File hashes

Hashes for charset_normalizer-1.3.9-py3-none-any.whl
Algorithm Hash digest
SHA256 52ab45fa063cc274e0be6ba2dab9d3e69ab5fd0542de262ace15918d48183838
MD5 d31ae17e8579198a8d3ce1207e56f021
BLAKE2b-256 494eb846068557e5b63bed6277105db374a0ab42b9b02b9dd8640e972ccb7fb4

See more details on using hashes here.
