Python bindings around Google Chromium's embedded compact language detection library (CLD2)

Project description

PYCLD2 - Python Bindings to CLD2

Python bindings for the Compact Language Detector 2 (CLD2).


This package contains forks of the upstream CLD2 C++ library and its original Python bindings.

The goal of this project is to consolidate the upstream library with its bindings, so the user can pip install one package instead of two.

The LICENSE is the same as Chromium's LICENSE and is included in the LICENSE file for reference.

Installing

$ python -m pip install -U pycld2
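
To check that the install worked, a quick smoke test along these lines should print True or False (the sample text is only illustrative):

$ python -c "import pycld2 as cld2; print(cld2.detect('¼ cup of flour')[0])"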

Example

import pycld2 as cld2

isReliable, textBytesFound, details = cld2.detect(
    "а неправильный формат идентификатора дн назад"
)

print(isReliable)
# True
print(details[0])
# ('RUSSIAN', 'ru', 98, 404.0)

fr_en_Latn = """\
France is the largest country in Western Europe and the third-largest in Europe as a whole.
A accès aux chiens et aux frontaux qui lui ont été il peut consulter et modifier ses collections
et exporter Cet article concerne le pays européen aujourd’hui appelé République française.
Pour d’autres usages du nom France, Pour une aide rapide et effective, veuiller trouver votre aide
dans le menu ci-dessus.
Motoring events began soon after the construction of the first successful gasoline-fueled automobiles.
The quick brown fox jumped over the lazy dog."""

isReliable, textBytesFound, details, vectors = cld2.detect(
    fr_en_Latn, returnVectors=True
)
print(vectors)
# ((0, 94, 'ENGLISH', 'en'), (94, 329, 'FRENCH', 'fr'), (423, 139, 'ENGLISH', 'en'))
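
Each vector gives a byte offset and byte length into the UTF-8 encoding of the input, so the matching chunks can be recovered by slicing the encoded text. A minimal sketch (not part of the upstream docs):

raw = fr_en_Latn.encode("utf-8")
for offset, length, lang_name, lang_code in vectors:
    # Offsets and lengths are in bytes; slice the UTF-8 bytes, then decode.
    chunk = raw[offset:offset + length].decode("utf-8", errors="replace")
    print(lang_code, chunk[:40].strip(), "...")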

API

This package exports one function, detect(). See help(detect) for the full docstring.

The first parameter (utf8Bytes) is the text for which you want to detect language.

utf8Bytes may be either:

  • str (example: "¼ cup of flour")
  • bytes that have been encoded using UTF-8 (example: "¼ cup of flour".encode("utf-8"))

Bytes that are not UTF-8 encoded will raise a pycld2.error. For example, passing b"\xbc cup of flour" (which is "¼ cup of flour".encode("latin-1")) will raise.
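
A minimal sketch of catching that error, reusing the latin-1 bytes from the example above:

import pycld2 as cld2

latin1_bytes = "¼ cup of flour".encode("latin-1")  # b"\xbc cup of flour", not valid UTF-8
try:
    cld2.detect(latin1_bytes)
except cld2.error as exc:
    print("input was not valid UTF-8:", exc)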

All other parameters are optional:

  • utf8Bytes (str or bytes*): The text to detect language for.
  • isPlainText (bool, default False): If False, the input is treated as HTML, and CLD will skip HTML tags, expand HTML entities, detect HTML <lang ...> tags, etc.
  • hintTopLevelDomain (str): E.g., 'id' boosts Indonesian.
  • hintLanguage (str): E.g., 'ITALIAN' or 'it' boosts Italian; see pycld2.LANGUAGES for all known languages.
  • hintLanguageHTTPHeaders (str): E.g., 'mi,en' boosts Maori and English.
  • hintEncoding (str): E.g., 'SJS' boosts Japanese; see pycld2.ENCODINGS for all known encodings.
  • returnVectors (bool, default False): If True, the vectors indicating which language was detected in which byte range are returned in addition to details. The vectors are a sequence of (bytesOffset, bytesLength, languageName, languageCode), in order; bytesOffset is the start of the vector and bytesLength is its length. Note that there is some added CPU cost if this is True (roughly a 2x performance hit).
  • debugScoreAsQuads (bool, default False): Normally, several languages are detected solely by their Unicode script. Combined with appropriate lookup tables, this flag forces them to be detected via quadgrams instead. This can be a useful refinement when looking for meaningful text in these languages, rather than just character sets. The default tables do not support this use.
  • debugHTML (bool, default False): For each detection call, write an HTML file to stderr showing the text chunks and their detected languages. See cld2/docs/InterpretingCLD2UnitTestOutput.pdf to interpret this output.
  • debugCR (bool, default False): In that HTML file, force a new line for each chunk.
  • debugVerbose (bool, default False): In that HTML file, show every lookup entry.
  • debugQuiet (bool, default False): In that HTML file, suppress most of the output detail.
  • debugEcho (bool, default False): Echo every input buffer to stderr.
  • bestEffort (bool, default False): If True, allow low-quality results for short text, rather than forcing the result to "UNKNOWN_LANGUAGE". This may be useful when approximate results on short input text are acceptable, but there is no claim that these results are very good.

*If bytes, must be UTF-8 encoded bytes.
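
As a rough sketch of how the optional hints combine (the text and hint values here are illustrative, and the scores you get back will differ):

import pycld2 as cld2

# Boost Italian and allow a best-effort guess on a short input.
isReliable, textBytesFound, details = cld2.detect(
    "Ciao, come stai?",
    hintLanguage="it",
    bestEffort=True,
)
print(isReliable, details[0])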

Constants

This package exports these global constants:

  • pycld2.ENCODINGS: the encoding names CLD recognizes (if you provide hintEncoding, it must be one of these names).
  • pycld2.LANGUAGES: the languages and their codes (if you provide hintLanguage, it must be one of these names or codes).
  • pycld2.EXTERNAL_LANGUAGES: external languages and their codes.
  • pycld2.DETECTED_LANGUAGES: all detectable languages.
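
These constants can be used to validate hint values before calling detect(). A minimal sketch, assuming LANGUAGES is a sequence of (name, code) pairs (an assumption here, not something stated above):

import pycld2 as cld2

# Illustrative: confirm a code is one CLD2 knows about before passing it as hintLanguage.
known_codes = {code for name, code in cld2.LANGUAGES}
print("it" in known_codes)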

What About CLD3?

Python bindings for CLD3 are available as a separate project, pycld3.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pycld2-0.40.tar.gz (41.4 MB)


File details

Details for the file pycld2-0.40.tar.gz.

File metadata

  • Download URL: pycld2-0.40.tar.gz
  • Upload date:
  • Size: 41.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/2.0.0 pkginfo/1.5.0.1 requests/2.21.0 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.6.8

File hashes

Hashes for pycld2-0.40.tar.gz
  • SHA256: 98da2bf94903a03ff5162dfe5aab71f3cd0c89f88197dc87b46c3478cabbf87f
  • MD5: 08c2862021aa3be1fa830a65046c6e5b
  • BLAKE2b-256: 198e6427a3dd5f2605fbc2a41327400b4a86fc626e12fc6e593bf3cf5fd1863b

