
A toolkit for quickly calculating sequence entropy and information flow, with specific applications to tweets.

Project description

ProcessEntropy

A toolkit for quickly calculating sequence entropy rates, especially cross-entropy rates and measures of information flow. The intended application is tweets, but it can be used on any text or sequence-like data.

This toolkit uses a non-parametric entropy estimation technique that computes the longest match length between sequences to estimate their entropy. This functionality is provided by the LCSFinder package, which calculates longest common substrings with a fixed starting location for one of the substrings. The algorithm exploits properties of a sorted suffix array so that each longest match length can be found in O(1) time after O(N) precomputation.
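
For intuition, here is a minimal sketch of the match-length entropy estimator that the package builds on. It is a quadratic-time illustration only; the function name and the exact matching convention here are assumptions for demonstration, not part of the ProcessEntropy API.

import numpy as np

def match_length_entropy(seq):
    """Estimate the entropy rate of a sequence (bits per token) from match lengths.

    For each position i, Lambda_i is one plus the length of the longest prefix
    of seq[i:] that also begins somewhere in seq[:i]. The entropy rate is then
    estimated as N * log2(N) / sum(Lambda_i). ProcessEntropy's suffix-array
    backend computes the same match lengths far more efficiently.
    """
    N = len(seq)
    lambdas = []
    for i in range(N):
        longest = 0
        for start in range(i):  # candidate copies beginning in the history seq[:i]
            k = 0
            while i + k < N and seq[start + k] == seq[i + k]:
                k += 1
            longest = max(longest, k)
        lambdas.append(longest + 1)
    return N * np.log2(N) / sum(lambdas)

# A highly repetitive sequence should yield a low entropy rate estimate.
print(match_length_entropy(list("abababababababababab")))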

Example Usage

# Load in example tweets dataframe
import pandas as pd
example_tweet_data = pd.read_csv('example_data/example_tweet_data.csv')

from ProcessEntropy.CrossEntropy import pairwise_information_flow

# Calculate information flow between users based on temporal text usage
pairwise_information_flow(example_tweet_data, text_col='tweet', label_col='username', time_col='created_at')
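
If the example CSV is not available, a small synthetic DataFrame with the same three columns illustrates the expected input. The column names below simply mirror the keyword arguments above; integer timestamps are used purely for brevity, and the exact return format depends on the package version.

import pandas as pd
from ProcessEntropy.CrossEntropy import pairwise_information_flow

# Two users posting alternately; any sortable values work for the time column.
toy_tweets = pd.DataFrame({
    'username':   ['alice', 'bob', 'alice', 'bob'],
    'created_at': [1, 2, 3, 4],
    'tweet':      ['the quick brown fox jumps',
                   'the quick brown fox jumps over the fence',
                   'jumps over the lazy dog',
                   'over the lazy dog it jumps again'],
})

flows = pairwise_information_flow(toy_tweets, text_col='tweet', label_col='username', time_col='created_at')
print(flows)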

Requirements

  • Python 3.x with packages:
    • numba
    • nltk (for tokenization)
    • numpy
    • LCSFinder

Dependency on LCSFinder

The LCSFinder package uses a C++ backend. If this causes installation issues on your machine, you can install ProcessEntropy without its dependencies:

pip install --no-deps ProcessEntropy

However, you will then need to ensure that the dependencies numba, nltk, and numpy are installed yourself.

The functions that do not depend on LCSFinder can be accessed through the *PythonOnly modules.

For example:

# Load in example tweets dataframe
import pandas as pd
example_tweet_data = pd.read_csv('example_data/example_tweet_data.csv')

from ProcessEntropy.CrossEntropyPythonOnly import pairwise_information_flow

# Calculate information flow between users based on temporal text usage
pairwise_information_flow(example_tweet_data, text_col='tweet', label_col='username', time_col='created_at')

Note: the PythonOnly variants do not behave identically and will not pass all of the test cases. The difference is that empty source/target arrays can contribute non-zero lambda (match length) values in the pure-Python code; this behaviour was removed in the LCSFinder-backed implementation.

Installation

pip install ProcessEntropy

Download files

Download the file for your platform.

Source Distribution

ProcessEntropy-1.1.2.dev0.tar.gz (11.3 kB)

Uploaded Source

Built Distribution

ProcessEntropy-1.1.2.dev0-py3-none-any.whl (17.5 kB)

Uploaded Python 3

File details

Details for the file ProcessEntropy-1.1.2.dev0.tar.gz.

File metadata

  • Download URL: ProcessEntropy-1.1.2.dev0.tar.gz
  • Upload date:
  • Size: 11.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.5

File hashes

Hashes for ProcessEntropy-1.1.2.dev0.tar.gz

  • SHA256: ffa4bfc1356f11148062e68405b96351450a706a4acecd3fc5dffcf5be7acba5
  • MD5: 0a5e5b790c5dab87a021d4c480980840
  • BLAKE2b-256: fb0710a7ba3d32e2d7126c5a7dea43f7d059608daeb5814d52dbdb39a3cd018e

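To verify a downloaded archive against the published SHA256 digest above, a short check along these lines can be used (the local file path is an assumption):

import hashlib

EXPECTED_SHA256 = "ffa4bfc1356f11148062e68405b96351450a706a4acecd3fc5dffcf5be7acba5"

# Hash the downloaded sdist in chunks and compare with the published digest.
sha256 = hashlib.sha256()
with open("ProcessEntropy-1.1.2.dev0.tar.gz", "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        sha256.update(chunk)

assert sha256.hexdigest() == EXPECTED_SHA256, "SHA256 mismatch; do not install this file."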

File details

Details for the file ProcessEntropy-1.1.2.dev0-py3-none-any.whl.

File hashes

Hashes for ProcessEntropy-1.1.2.dev0-py3-none-any.whl

  • SHA256: c8ee896a5947b7a44ec073f1a68d5e990473410841b666fa541587d31076534e
  • MD5: 50270fb74a5386152e34c6334940c0df
  • BLAKE2b-256: 2f323560cb17a32d9809ee7da2857de76e00d913c250964111a61703953c6385

