Pre-process documents for Natural Language Processing using spaCy models
Project description
document_processing
Install
pip install document_processing
This package provides functions to pre-process text for various NLP tasks. It uses spaCy
and its models to analyse the text.
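Because the processing functions load a spaCy model by name, that model has to be installed separately with spaCy's own CLI; for example, for the small English pipeline:

    python -m spacy download en_core_web_sm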
Behaviour
The entry point of this package is process_dcouments
in which you put the Series
of documents to process and the spaCy
model name that will be loaded to transform the texts.
From a document, you can extract tokens, lemmas and entities with the get_tokens_lemmas_entities_from_document
function, giving it the document returned by the previous function, and the preprocessing function, as described below.
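A minimal usage sketch, assuming the import path matches the package name, that process_documents returns a pandas Series of processed documents aligned with the input, and that get_tokens_lemmas_entities_from_document returns the three lists in that order (check the package's own documentation for the exact signatures):

    import pandas as pd

    from document_processing import (
        get_tokens_lemmas_entities_from_document,
        preprocess_list_of_texts,
        process_documents,
    )

    texts = pd.Series([
        "spaCy is an open-source library for Natural Language Processing.",
        "This package wraps it to pre-process documents.",
    ])

    # Load the named spaCy model and run it over the Series of texts.
    documents = process_documents(texts, "en_core_web_sm")

    # Extract tokens, lemmas and entities from the first processed document,
    # using one of the pre-processing functions described below.
    tokens, lemmas, entities = get_tokens_lemmas_entities_from_document(
        documents.iloc[0], preprocess_list_of_texts
    )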
Pre-processing functions
- preprocess_list_of_texts: processes tokens, removing stopwords, non-standard characters, etc.
- preprocess_list_of_tweets: same as above, and also removes all tokens that look like HTTP links, which are often present in tweets (a usage sketch follows this list).
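Continuing the sketch above (same assumed import path and signatures), handling tweet-like input only means passing preprocess_list_of_tweets instead, so that link-like tokens are dropped:

    from document_processing import preprocess_list_of_tweets

    tweets = pd.Series(["Great thread on spaCy pipelines https://example.com #nlp"])

    # Same pipeline as before, but with the tweet-specific pre-processing function.
    tweet_documents = process_documents(tweets, "en_core_web_sm")
    tokens, lemmas, entities = get_tokens_lemmas_entities_from_document(
        tweet_documents.iloc[0], preprocess_list_of_tweets
    )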
File details
Details for the file document_processing-1.0.1.202208310820-py3-none-any.whl.
File metadata
- Download URL: document_processing-1.0.1.202208310820-py3-none-any.whl
- Upload date:
- Size: 16.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.10.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | dd38610f57e25a78e4dc722d4bf9777b43011b95861d9b03e818c0813c654146
MD5 | a46e20ca0ea7ada2713e75e8964eab17
BLAKE2b-256 | 95116c71405e9dac3995db4902e8c999b169e16e39ab35c3e58b000d0de49ebf