An NLP Python package for computing the Boilerplate score and many other text features.
MoreThanSentiments
Besides sentiment scores, this Python package offers several ways of quantifying a text corpus, based on measures proposed in prior academic work. Currently, we support the calculation of the following measures:
- Boilerplate (Lang and Stice-Lawrence, 2015)
- Redundancy (Cazier and Pfeiffer, 2015)
- Specificity (Hope et al., 2016)
- Relative_prevalence (Blankespoor, 2016)
A Medium blog post is available: MoreThanSentiments: A Python Library for Text Quantification
Citation
If this package was helpful in your work, feel free to cite it as
- Jiang, J., & Srinivasan, K. (2022). MoreThanSentiments: A text analysis package. Software Impacts, 100456. https://doi.org/10.1016/J.SIMPA.2022.100456
Installation
The easiest way to install the toolbox is via pip (pip3 in some distributions):
pip install MoreThanSentiments
Usage
Import the Package
import MoreThanSentiments as mts
Read data from txt files
my_dir_path = "D:/YourDataFolder"
df = mts.read_txt_files(PATH = my_dir_path)
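If your documents are already in memory, you can also build the input DataFrame yourself. This is a minimal sketch under one assumption: the later examples only rely on a column named text (accessed as df.text), which matches what read_txt_files produces.
import pandas as pd

# Hypothetical in-memory documents; the rest of the walkthrough only needs
# a DataFrame with a 'text' column (accessed as df.text).
docs = [
    "The company reported revenue growth. Management expects further gains.",
    "Risk factors include currency fluctuations and supply chain disruptions.",
]
df = pd.DataFrame({"text": docs})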
Sentence Tokenization
df['sent_tok'] = df.text.apply(mts.sent_tok)
Clean Data
If you want to clean on the sentence level:
import pandas as pd

df['cleaned_data'] = pd.Series(dtype=object)
for i in range(len(df['sent_tok'])):
    df.at[i, 'cleaned_data'] = [mts.clean_data(x,
                                               lower = True,
                                               punctuations = True,
                                               number = False,
                                               unicode = True,
                                               stop_words = False) for x in df['sent_tok'][i]]
If you want to clean on the document level:
df['cleaned_data'] = df.text.apply(mts.clean_data, args=(True, True, False, True, False))
For the data cleaning function, we offer the following options (a keyword-argument example follows this list):
- lower: convert all words to lowercase
- punctuations: remove all punctuation from the corpus
- number: remove all digits from the corpus
- unicode: remove Unicode characters from the corpus
- stop_words: remove stopwords from the corpus
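Continuing from the imports above, the document-level call can also be written with explicit keyword arguments, which makes the mapping to these options easier to read. This is a sketch that assumes clean_data accepts the keyword names shown in the sentence-level example (pandas Series.apply forwards extra keyword arguments to the function):
df['cleaned_data'] = df.text.apply(
    mts.clean_data,
    lower = True,         # lowercase all words
    punctuations = True,  # strip punctuation
    number = False,       # keep digits
    unicode = True,       # strip Unicode characters
    stop_words = False,   # keep stopwords
)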
Boilerplate
df['Boilerplate'] = mts.Boilerplate(df.sent_tok, n = 4, min_doc = 5, get_ngram = False)
Parameters (see the short example after this list):
- input_data: this function requires sentence-tokenized documents.
- n: the length of the ngrams to use. The default is 4.
- min_doc: when building the ngram list, ignore ngrams with a document frequency strictly lower than this threshold. The default is 5 documents; a value of about 30% of the number of documents is recommended. A value between 0 and 1 is interpreted as a percentage of documents.
- get_ngram: if set to True, the function returns a DataFrame with all the ngrams and their corresponding frequencies, and the min_doc parameter is ignored.
- max_doc: when building the ngram list, ignore ngrams with a document frequency strictly higher than this threshold. The default is 75% of the documents. It can be given as a percentage (a value between 0 and 1) or an integer count.
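A short sketch combining these options; the parameter behavior follows the descriptions above, and the exact columns of the returned ngram DataFrame may differ from what the comments suggest:
# Inspect the ngram list first: with get_ngram=True the function returns a
# DataFrame of ngrams and their frequencies instead of Boilerplate scores.
ngrams = mts.Boilerplate(df.sent_tok, n = 4, get_ngram = True)

# Compute the score with thresholds given as fractions of the corpus:
# values between 0 and 1 are interpreted as percentages of documents.
df['Boilerplate'] = mts.Boilerplate(df.sent_tok, n = 4, min_doc = 0.3, max_doc = 0.75)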
Redundancy
df['Redundancy'] = mts.Redundancy(df.cleaned_data, n = 10)
Parameters:
- input_data: this function requires tokenized documents.
- n: the length of the ngrams to use. The default is 10.
Specificity
df['Specificity'] = mts.Specificity(df.text)
Parameters:
- input_data: this function requires untokenized documents (raw text)
Relative_prevalence
df['Relative_prevalence'] = mts.Relative_prevalence(df.text)
Parameters:
- input_data: this function requires untokenized documents (raw text)
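Putting the steps above together, a minimal end-to-end sketch looks like this (the folder path and parameter values are illustrative):
import MoreThanSentiments as mts

# 1. Load documents from a folder of .txt files
df = mts.read_txt_files(PATH = "D:/YourDataFolder")

# 2. Tokenize each document into sentences
df['sent_tok'] = df.text.apply(mts.sent_tok)

# 3. Clean at the document level
#    (lower, punctuations, number, unicode, stop_words)
df['cleaned_data'] = df.text.apply(mts.clean_data, args=(True, True, False, True, False))

# 4. Compute the four measures
df['Boilerplate'] = mts.Boilerplate(df.sent_tok, n = 4, min_doc = 5)
df['Redundancy'] = mts.Redundancy(df.cleaned_data, n = 10)
df['Specificity'] = mts.Specificity(df.text)
df['Relative_prevalence'] = mts.Relative_prevalence(df.text)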
For the full code script, you may check here:
CHANGELOG
Version 0.2.1, 2022-12-22
- Fixed the counting bug in Specificity
- Added max_doc parameter to Boilerplate
Version 0.2.0, 2022-10-02
- Added the "get_ngram" feature to the Boilerplate function
- Added percentage support as an option for "min_doc" in Boilerplate: when the given value is between 0 and 1, it is interpreted as a percentage of documents
Version 0.1.3, 2022-06-10
- Updated the usage guide
- Minor fix to the script
Version 0.1.2, 2022-05-08
- Initial release.