Python package for text mining of time-series data

Project description

Arabica

Text data is often recorded as a time series with significant variability over time. Some examples of time-series text data include social media conversations, product reviews, research metadata, central bank communication, and newspaper headlines. Arabica makes exploratory analysis of these datasets simple by providing:

  • Descriptive n-gram analysis: n-gram frequencies
  • Time-series n-gram analysis: n-gram frequencies over a period
  • Text visualization: n-gram heatmap, line plot, word cloud
  • Sentiment analysis: VADER sentiment classifier
  • Financial sentiment analysis: with FinVADER
  • Structural break identification: Jenks Optimization Method

It automatically removes punctuation from the input text. It can also apply all or a selected combination of the following cleaning operations:

  • Remove digits from the text
  • Remove the standard list(s) of stop words
  • Remove an additional list of stop words

Arabica works with texts of languages based on the Latin alphabet, uses cleantext for punctuation cleaning, and enables stop words removal for languages in the NLTK corpus of stopwords.
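The effect of these cleaning steps can be sketched in plain Python. This is a toy illustration, not arabica's internal implementation: the `STOPWORDS` set here is a stand-in for the NLTK corpus, and `clean` is a hypothetical helper.

```python
import re

# Stand-in stop-word list; arabica uses the NLTK stopwords corpus instead
STOPWORDS = {"the", "is", "a", "an", "of", "in"}

def clean(text, skip=()):
    """Lowercase, strip punctuation and digits, then drop stop words."""
    text = re.sub(r"[^a-z\s]", " ", text.lower())  # punctuation and digits out
    removed = STOPWORDS | set(skip)
    return [t for t in text.split() if t not in removed]

print(clean("The price rose 5% in May!", skip=["may"]))
# ['price', 'rose']
```

The `skip` argument mirrors the idea of an additional, user-supplied stop-word list on top of the standard one.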

It reads dates in the following date and datetime formats:

  • US-style: MM/DD/YYYY (2013-12-31, Feb-09-2009, 2013-12-31 11:46:17, etc.)
  • European-style: DD/MM/YYYY (2013-31-12, 09-Feb-2009, 2013-31-12 11:46:17, etc.)
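The distinction matters because the same numeric string is a valid date under both conventions. A minimal sketch with the standard library (illustrative only, not how arabica parses internally):

```python
from datetime import datetime

# The same string parses to two different dates under each convention,
# which is why a 'us' vs 'eur' date-format choice is needed up front
us = datetime.strptime("02/09/2009", "%m/%d/%Y")   # US: February 9
eur = datetime.strptime("02/09/2009", "%d/%m/%Y")  # European: September 2

print(us.month, eur.month)  # 2 9
```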

Installation

Arabica requires Python 3.8 - 3.10 and the following dependencies: NLTK (stop words removal), cleantext (text cleaning), wordcloud (word cloud visualization), plotnine (heatmaps and line plots), matplotlib (word clouds and graphical operations), vaderSentiment (sentiment analysis), finvader (financial sentiment analysis), and jenkspy (breakpoint identification).

To install using pip, use:

pip install arabica

Usage

  • Import the library:

from arabica import arabica_freq
from arabica import cappuccino
from arabica import coffee_break

  • Choose a method:

arabica_freq enables a specific set of cleaning operations (lowercasing; removal of numbers, common stop words, and additional stop words) and returns a dataframe with aggregated unigram, bigram, and trigram frequencies over a period.

def arabica_freq(text: str,                # Text
                 time: str,                # Time
                 date_format: str,         # Date format: 'eur' - European, 'us' - American
                 time_freq: str,           # Aggregation period: 'Y'/'M'/'D', if no aggregation: 'ungroup'
                 max_words: int,           # Maximum of most frequent n-grams displayed for each period
                 stopwords: [],            # Languages for stop words
                 stopwords_ext: [],        # Languages for extended stop words list
                 skip: [],                 # Remove additional stop words
                 numbers: bool = False,    # Remove numbers
                 lower_case: bool = False  # Lowercase text
) 
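The kind of n-gram frequency table arabica_freq aggregates can be sketched in plain Python (a toy example; the `ngrams` helper is hypothetical, not part of the arabica API):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, joined by spaces."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "central bank raises interest rates again".split()
unigrams = Counter(ngrams(tokens, 1))
bigrams = Counter(ngrams(tokens, 2))

print(bigrams)
# first bigram is 'central bank', then 'bank raises', and so on
```

arabica_freq builds this kind of count per aggregation period (year, month, or day) and keeps the `max_words` most frequent n-grams in each one.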

cappuccino enables the same cleaning operations (lowercasing; removal of numbers, common stop words, and additional stop words) and provides plots for descriptive (word cloud) and time-series (heatmap, line plot) visualization.

def cappuccino(text: str,                # Text
               time: str,                # Time
               date_format: str,         # Date format: 'eur' - European, 'us' - American
               plot: str,                # Chart type: 'wordcloud'/'heatmap'/'line'
               ngram: int,               # N-gram size, 1 = unigram, 2 = bigram, 3 = trigram
               time_freq: str,           # Aggregation period: 'Y'/'M', if no aggregation: 'ungroup'
               max_words: int,           # Maximum of most frequent n-grams displayed for each period
               stopwords: [],            # Languages for stop words
               stopwords_ext: [],        # Languages for extended stop words list
               skip: [],                 # Remove additional stop words               
               numbers: bool = False,    # Remove numbers
               lower_case: bool = False  # Lowercase text
)
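Conceptually, the heatmap and line plots display a per-period term-frequency matrix. A minimal sketch of building such a matrix, using toy data (the column layout and bucketing are illustrative assumptions, not arabica internals):

```python
from collections import Counter, defaultdict

# Toy dated corpus; real input would be two dataframe columns (text, time)
docs = [
    ("2023-01-05", "rates rise"),
    ("2023-01-20", "rates steady"),
    ("2023-02-02", "inflation rises"),
]

freq_by_month = defaultdict(Counter)
for date, text in docs:
    freq_by_month[date[:7]].update(text.split())  # bucket by 'YYYY-MM'

print(freq_by_month["2023-01"]["rates"])  # 2
```

A heatmap is then periods on one axis, n-grams on the other, and these counts as cell intensities.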

coffee_break provides sentiment analysis and breakpoint identification in aggregated time series of sentiment. The implemented models are:

  • VADER is a lexicon- and rule-based sentiment classifier specifically attuned to general language expressed in social media

  • FinVADER improves VADER's classification accuracy on financial texts by incorporating two financial lexicons
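The lexicon-and-rule idea behind a VADER-style classifier can be sketched with a toy valence lexicon. The scores and the negation rule below are made up for illustration; VADER's actual lexicon and rule set are far richer.

```python
# Toy valence lexicon -- NOT VADER's actual scores
LEXICON = {"good": 1.9, "great": 3.1, "bad": -2.5, "loss": -1.3}
NEGATIONS = {"not", "never", "no"}

def toy_sentiment(text):
    """Sum word valences, flipping the sign of a word after a negation."""
    tokens = text.lower().split()
    score = 0.0
    for i, tok in enumerate(tokens):
        valence = LEXICON.get(tok, 0.0)
        if i > 0 and tokens[i - 1] in NEGATIONS:
            valence = -valence
        score += valence
    return score

print(toy_sentiment("not good"))  # -1.9
```

FinVADER follows the same scheme but scores domain words ("loss", "bullish", ...) with finance-specific valences.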

Breakpoints in the time series are identified with the Fisher-Jenks algorithm (Jenks, 1977, Optimal data classification for choropleth maps).

def coffee_break(text: str,                 # Text
                 time: str,                 # Time
                 date_format: str,          # Date format: 'eur' - European, 'us' - American
                 model: str,                # Sentiment classifier, 'vader' - general language, 'finvader' - financial text                
                 skip: [],                  # Remove additional stop words
                 preprocess: bool = False,  # Clean data from numbers and punctuation
                 time_freq: str,            # Aggregation period: 'Y'/'M'
                 n_breaks: int              # Number of breakpoints: min. 2
)
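For the simplest case of a single break (two classes), the Fisher-Jenks idea can be sketched by brute force: try every split of the sorted series and keep the one minimizing within-class squared deviation. This is a hypothetical minimal version; arabica delegates the real, multi-break computation to jenkspy.

```python
def jenks_one_break(values):
    """Place one break so within-class squared deviation is minimal."""
    vals = sorted(values)

    def ssd(xs):  # sum of squared deviations from the class mean
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs)

    best_i = min(range(1, len(vals)),
                 key=lambda i: ssd(vals[:i]) + ssd(vals[i:]))
    return vals[best_i - 1]  # upper bound of the lower class

# Monthly sentiment that jumps upward mid-series
print(jenks_one_break([0.10, 0.15, 0.20, 0.80, 0.90]))  # 0.2
```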

Documentation, examples and tutorials

For more coding examples, read these tutorials:

General use:

  • Sentiment Analysis and Structural Breaks in Time-Series Text Data here
  • Visualization Module in Arabica Speeds Up Text Data Exploration here
  • Text as Time Series: Arabica 1.0 Brings New Features for Exploratory Text Data Analysis here

Applications:

  • Business Intelligence: Customer Satisfaction Measurement with N-gram and Sentiment Analysis here
  • Research meta-data analysis: Research Article Meta-data Description Made Quick and Easy here
  • Media coverage text mining
  • Social media analysis

💬 Please visit here for any questions, issues, bugs, and suggestions.

Citation

Using arabica in a paper or thesis? Please cite this paper:

@article{Koráb:2024,
  author   = {{Koráb}, P. and {Poměnková}, J.},
  title    = {Arabica: A Python package for exploratory analysis of text data},
  journal  = {Journal of Open Source Software},
  volume   = {9},
  number   = {97},
  pages    = {6186},
  year     = {2024},
  doi      = {10.21105/joss.06186},
}


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

arabica-1.8.1.tar.gz (23.6 kB)

Uploaded Source

Built Distribution

arabica-1.8.1-py3-none-any.whl (22.2 kB)

Uploaded Python 3

File details

Details for the file arabica-1.8.1.tar.gz.

File metadata

  • Download URL: arabica-1.8.1.tar.gz
  • Upload date:
  • Size: 23.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.14

File hashes

Hashes for arabica-1.8.1.tar.gz:

  • SHA256: cd2c62aa907f6a91aa65bfb850c6a5e94cea6dfb881820c0a0ea29799dfb4e5c
  • MD5: d47fb3b5b9963e63c351c2f95e300c5c
  • BLAKE2b-256: a7dc77a2273c82c510033194e7af24410f184fe74c070db2a195450ad0a3cfb5

See more details on using hashes here.

File details

Details for the file arabica-1.8.1-py3-none-any.whl.

File metadata

  • Download URL: arabica-1.8.1-py3-none-any.whl
  • Upload date:
  • Size: 22.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.14

File hashes

Hashes for arabica-1.8.1-py3-none-any.whl:

  • SHA256: 790592bbf895326c2ac26c6ed56516a7047224367137178cf9c9928f63d5d07f
  • MD5: 6689d8c31e5c79e497a4e7425e488527
  • BLAKE2b-256: 76c232f0aa8c9477ef0f706f14d5b201f9c7a19eebcfeb1015a732faa60685ac

See more details on using hashes here.
