
An NLP Python package for computing the Boilerplate score and many other text features.


MoreThanSentiments

Besides sentiment scores, this Python package offers several ways of quantifying a text corpus, each based on a published work in the literature. Currently, we support the calculation of the following measures:

  • Boilerplate (Lang and Stice-Lawrence, 2015)
  • Redundancy (Cazier and Pfeiffer, 2015)
  • Specificity (Hope et al., 2016)
  • Relative_prevalence (Blankespoor, 2016)

A Medium blog post is available here: MoreThanSentiments: A Python Library for Text Quantification

Citation

If this package was helpful in your work, feel free to cite it as:

Installation

The easiest way to install the toolbox is via pip (pip3 in some distributions):

pip install MoreThanSentiments

Usage

Import the Package

import MoreThanSentiments as mts

Read data from txt files

my_dir_path = "D:/YourDataFolder"
df = mts.read_txt_files(PATH = my_dir_path)
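If your corpus is not stored as txt files, you can construct the same input yourself; the rest of the walkthrough only assumes a DataFrame with a text column (the toy documents below are illustrative):

import pandas as pd

# equivalent input built by hand: one row per document, raw text in a 'text' column
df = pd.DataFrame({"text": ["First toy document. It has two sentences.",
                            "Second toy document with one sentence."]})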

Sentence Tokenization

df['sent_tok'] = df.text.apply(mts.sent_tok)
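As a quick sanity check, sent_tok can also be applied to a single document; the toy text below is illustrative, and each document comes back as a list of sentence strings:

# tokenize one toy document into sentences
sentences = mts.sent_tok("Revenue increased. Costs decreased.")
# expected: a list of sentence strings, e.g. ['Revenue increased.', 'Costs decreased.']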

Clean Data

If you want to clean at the sentence level:

df['cleaned_data'] = df['sent_tok'].apply(
    lambda sents: [mts.clean_data(x,
                                  lower = True,
                                  punctuations = True,
                                  number = False,
                                  unicode = True,
                                  stop_words = False) for x in sents]
)

If you want to clean at the document level:

df['cleaned_data'] = df.text.apply(mts.clean_data, lower = True, punctuations = True,
                                   number = False, unicode = True, stop_words = False)

For the data cleaning function, the following options are available (a usage example follows the list):

  • lower: convert all words to lowercase
  • punctuations: remove all punctuation from the corpus
  • number: remove all digits from the corpus
  • unicode: remove Unicode characters from the corpus
  • stop_words: remove stopwords from the corpus
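For instance, cleaning a single sentence with the settings used above (the sample text and the commented effects are illustrative):

# clean one toy sentence; flags mirror the sentence-level example above
cleaned = mts.clean_data("The Company reported revenue of $12!",
                         lower = True,        # lowercase the text
                         punctuations = True, # strip punctuation such as '$' and '!'
                         number = False,      # keep the digits ('12')
                         unicode = True,      # drop non-ASCII characters, if any
                         stop_words = False)  # keep stopwords such as 'the'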

Boilerplate

df['Boilerplate'] = mts.Boilerplate(df.sent_tok, n = 4, min_doc = 5, get_ngram = False)

Parameters:

  • input_data: this function requires tokenized documents.
  • n: the n-gram size to use. The default is 4.
  • min_doc: when building the n-gram list, ignore the n-grams that have a document frequency strictly lower than the given threshold. The default is 5 documents. A value of about 30% of the number of documents is recommended.
  • get_ngram: if this parameter is set to "True", the function returns a DataFrame with all the n-grams and their corresponding document frequencies, and the "min_doc" parameter has no effect (see the example after this list).
  • max_doc: when building the n-gram list, ignore the n-grams that have a document frequency strictly higher than the given threshold. The default is 75% of the documents. It can be a percentage (a value between 0 and 1) or an integer.
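For example, to inspect the n-gram table directly, or to pass the document-frequency cutoffs as shares of the corpus (illustrative calls, using the column built above):

# return the n-gram/document-frequency table instead of Boilerplate scores
ngram_df = mts.Boilerplate(df.sent_tok, n = 4, get_ngram = True)

# values between 0 and 1 are treated as percentages of the document count
df['Boilerplate'] = mts.Boilerplate(df.sent_tok, n = 4, min_doc = 0.3, max_doc = 0.75)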

Redundancy

df['Redundancy'] = mts.Redundancy(df.cleaned_data, n = 10)

Parameters:

  • input_data: this function requires tokenized documents.
  • n: the n-gram size to use. The default is 10.

Specificity

df['Specificity'] = mts.Specificity(df.text)

Parameters:

  • input_data: this function requires the raw documents, without tokenization

Relative_prevalence

df['Relative_prevalence'] = mts.Relative_prevalence(df.text)

Parameters:

  • input_data: this function requires the raw documents, without tokenization
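Putting the pieces together, here is a condensed end-to-end sketch of the steps above (the folder path is a placeholder; sentence-level cleaning is shown):

import MoreThanSentiments as mts

df = mts.read_txt_files(PATH = "D:/YourDataFolder")
df['sent_tok'] = df.text.apply(mts.sent_tok)
df['cleaned_data'] = df['sent_tok'].apply(
    lambda sents: [mts.clean_data(x, lower = True, punctuations = True,
                                  number = False, unicode = True,
                                  stop_words = False) for x in sents])

df['Boilerplate'] = mts.Boilerplate(df.sent_tok, n = 4, min_doc = 5)
df['Redundancy'] = mts.Redundancy(df.cleaned_data, n = 10)
df['Specificity'] = mts.Specificity(df.text)
df['Relative_prevalence'] = mts.Relative_prevalence(df.text)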

For the full example script, you may check here:

CHANGELOG

Version 0.3.3, 2025-01-31

  • Fixed the misplaced-parameter issue in Redundancy.
  • Fully upgraded the algorithm and refactored the code base, giving a 40-50% speed boost on large datasets.
  • Fixed the source distribution filenames to comply with PEP 625.
  • Added minor optimizations.

Version 0.2.1, 2022-12-22

  • Fixed the counting bug in Specificity
  • Added max_doc parameter to Boilerplate

Version 0.2.0, 2022-10-02

  • Added the "get_ngram" feature to the Boilerplate function
  • Added the percentage as a option for "min_doc" in Boilerpate, when the given value is between 0 and 1, it will automatically become a percentage for "min_doc"

Version 0.1.3, 2022-06-10

  • Updated the usage guide
  • Minor fix to the script

Version 0.1.2, 2022-05-08

  • Initial release.
