
Pre-process documents for Natural Language Processing using spaCy models

Project description

document_processing

Install

pip install document_processing

This package provides functions to pre-process text for various NLP tasks. It uses spaCy and its models to analyse the text.

Behaviour

The entry point of this package is process_documents, to which you pass the Series of documents to process and the name of the spaCy model that will be loaded to transform the texts.

From a processed document, you can extract tokens, lemmas and entities with the get_tokens_lemmas_entities_from_document function, giving it a document returned by the previous function and a preprocessing function, as described below.
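As a rough illustration of the flow described above, the two calls might be combined as follows. This is only a sketch: the function names mirror the README, but their signatures and return shapes are assumptions, and spaCy is replaced here by a tiny rule-based stand-in so the example stays self-contained.

```python
# Hypothetical sketch of the described pipeline. The real package wraps spaCy;
# a trivial whitespace tokeniser stands in for the model so this runs anywhere.

def process_documents(documents, model_name="en_core_web_sm"):
    """Stand-in for the package's entry point (assumed signature).

    A real implementation would do roughly:
        nlp = spacy.load(model_name)
        return documents.apply(nlp)
    Here each document becomes a list of (token, lemma, entity) triples.
    """
    return [[(tok, tok.lower(), None) for tok in doc.split()] for doc in documents]

def get_tokens_lemmas_entities_from_document(document, preprocess):
    """Stand-in: split a processed document into tokens, lemmas and entities,
    running the supplied preprocessing function over tokens and lemmas."""
    tokens = preprocess([tok for tok, _, _ in document])
    lemmas = preprocess([lemma for _, lemma, _ in document])
    entities = [ent for _, _, ent in document if ent is not None]
    return tokens, lemmas, entities

docs = process_documents(["SpaCy makes NLP easy"])
tokens, lemmas, entities = get_tokens_lemmas_entities_from_document(docs[0], lambda toks: toks)
print(tokens)   # ['SpaCy', 'makes', 'NLP', 'easy']
print(lemmas)   # ['spacy', 'makes', 'nlp', 'easy']
```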

Pre-processing functions

  • preprocess_list_of_texts: process tokens, removing stopwords, non-standard characters, etc.
  • preprocess_list_of_tweets: same as above, and also remove all tokens that look like HTTP links, which are often present in tweets.
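The two behaviours listed above can be sketched in plain Python. This mirrors the description only, not the package's actual code: the stopword set, the "standard characters" rule and the link check are simplified assumptions.

```python
import re

STOPWORDS = {"the", "a", "an", "is", "and", "at"}  # tiny illustrative set

def preprocess_list_of_texts(tokens):
    """Sketch: lowercase tokens, drop stopwords and tokens containing
    non-standard characters (anything outside letters, digits, ' and -)."""
    cleaned = []
    for tok in tokens:
        low = tok.lower()
        if low in STOPWORDS:
            continue
        if not re.fullmatch(r"[a-z0-9'-]+", low):
            continue
        cleaned.append(low)
    return cleaned

def preprocess_list_of_tweets(tokens):
    """Sketch: same as above, but first drop tokens that look like HTTP(S) links."""
    no_links = [t for t in tokens if not t.lower().startswith(("http://", "https://"))]
    return preprocess_list_of_texts(no_links)

print(preprocess_list_of_tweets(["Check", "this", "out!", "https://t.co/xyz", "great", "news"]))
# ['check', 'this', 'great', 'news']
```

Here "out!" is dropped for its non-standard character and the t.co token for looking like a link, which matches the behaviour the bullets describe.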

Project details


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution

File details

Details for the file document_processing-1.0.1.202208310820-py3-none-any.whl.

File metadata

File hashes

Hashes for document_processing-1.0.1.202208310820-py3-none-any.whl
Algorithm Hash digest
SHA256 dd38610f57e25a78e4dc722d4bf9777b43011b95861d9b03e818c0813c654146
MD5 a46e20ca0ea7ada2713e75e8964eab17
BLAKE2b-256 95116c71405e9dac3995db4902e8c999b169e16e39ab35c3e58b000d0de49ebf

