ttta: Tools for temporal text analysis

ttta (spoken: "triple t a") is a collection of algorithms to handle diachronic texts in an efficient and unbiased manner.

As code for temporal text analysis papers is mostly scattered across many different repositories and varies heavily in both code quality and usage interface, we thought of a solution: ttta is designed to provide a collection of methods with a consistent interface and good code quality.

This package is currently a work in progress and in its beta stage, so there may be bugs and inconsistencies. If you encounter any, please report them in the issue tracker.

The package is maintained and all modules were streamlined by Kai-Robin Lange.

Contributing

If you have implemented temporal text analysis methods in Python, we would be happy to include them in this package. Your contribution will, of course, be acknowledged on this repository and all further publications. If you are interested in sharing your code, feel free to contact me at kalange@statistik.tu-dortmund.de.

Features

  • Pipeline: A class that helps the user apply the package's methods in a consistent manner. The pipeline can be used to preprocess the data, split it into time chunks, train a model on each time chunk, and evaluate the results, for every method in the package. This feature was implemented by Kai-Robin Lange and is currently still work in progress and not yet usable.
  • Preprocessing: Tokenization, lemmatization, stopword removal, and more. This feature was implemented by Kai-Robin Lange.
  • LDAPrototype: A method for more consistent LDA results, obtained by training multiple LDAs and selecting the best one - the prototype. See the respective paper by Rieger et al. here. This feature was implemented by Kai-Robin Lange.
  • RollingLDA: A method to train an LDA model on a time series of texts. The model is updated with each new time chunk. See the respective paper by Rieger et al. here. This feature was implemented by Niklas Benner and Kai-Robin Lange.
  • TopicalChanges: A method to detect changes in word-topic distributions over time by combining RollingLDA and LDAPrototype with a time-varying bootstrap control chart. See the respective paper by Rieger et al. here and this paper by Lange et al. here. This feature was implemented by Kai-Robin Lange.
  • Poisson Reduced Rank Model: A method to train the Poisson Reduced Rank Model - a document scaling technique for temporal text data, based on a time series of term frequencies. See the respective paper by Jentsch et al. here. This feature was implemented by Lars Grönberg.
  • BERT-based sense disambiguation: A method to track the frequency of a word sense over time using BERT's contextualized embeddings. This method was inspired by the respective paper by Hu et al. here. This feature was implemented by Aymane Hachcham.
  • Word2Vec-based semantic change detection: A method that aligns Word2Vec vector spaces, trained on different time chunks, to detect changes in word meaning by comparing the embeddings. This method was inspired by this paper by Hamilton et al. here. This feature was implemented by Imene Kolli.
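To give an intuition for the last feature: aligning embedding spaces from different time chunks is typically done with an orthogonal Procrustes rotation, as in Hamilton et al. The sketch below is not ttta's API - it is a minimal NumPy illustration of that alignment idea, assuming both matrices contain vectors for a shared vocabulary in the same row order:

```python
import numpy as np

def align_embeddings(base: np.ndarray, other: np.ndarray) -> np.ndarray:
    """Rotate `other` into the space of `base` via orthogonal Procrustes.

    Rows are word vectors for a shared vocabulary, in identical order.
    The optimal orthogonal map minimizing ||other @ R - base||_F is
    R = U @ Vt, where U, S, Vt = svd(other.T @ base).
    """
    u, _, vt = np.linalg.svd(other.T @ base)
    r = u @ vt
    return other @ r

# Sanity check: a purely rotated copy of a space aligns back exactly,
# so any residual distance between aligned vectors reflects real change.
rng = np.random.default_rng(0)
base = rng.normal(size=(50, 8))            # 50 "words", 8 dimensions
q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal rotation
other = base @ q
aligned = align_embeddings(base, other)
print(np.allclose(aligned, base, atol=1e-8))
```

After alignment, per-word semantic change can be scored as the cosine distance between a word's vector in `base` and its aligned vector from the later chunk.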

Upcoming features

  • Analyzing topical changes with the Narrative Policy Framework using LLMs
  • Hierarchical Sense Modeling
  • Graphon-Network-based word sense modeling
  • Spatiotemporal topic modeling
  • Hopefully many more

Installation

You can install the latest stable release of the package from PyPI. If you want the latest, unstable version, you can clone the GitHub repository.

Using pip

pip install ttta

Certain parts of the package require additional dependencies. You can install them using the following commands:

pip install ttta[wordcloud]   # For wordcloud visualizations

and

pip install ttta[embeddings]  # For embedding-based methods

or simply

pip install ttta[all]         # For all optional dependencies

Cloning the repository

pip install git+https://github.com/K-RLange/ttta.git

or

git clone https://github.com/K-RLange/ttta.git
cd ttta
pip install .

Getting started

You can find a tutorial on how to use each feature of the package in the examples folder.

Citing ttta

If you use ttta in your research, please cite the package as follows:

@misc{lange.ttta,
      title={ttta: Tools for Temporal Text Analysis}, 
      author={Kai-Robin Lange and Niklas Benner and Lars Grönberg and Aymane Hachcham and Imene Kolli and Jonas Rieger and Carsten Jentsch},
      year={2025},
      eprint={2503.02625},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.02625}, 
}
