tmtoolkit: Text Mining and Topic Modeling Toolkit
tmtoolkit is a set of tools for text mining and topic modeling with Python, developed especially for use in the social sciences. It aims for easy installation, extensive documentation and a clear programming interface, while offering good performance on large datasets by means of vectorized operations (via NumPy) and parallel computation (using Python's multiprocessing module). It combines several well-known and well-tested packages such as NLTK and SciPy.
At the moment, tmtoolkit focuses on methods around the Bag-of-words model, but word embeddings may be integrated in the future.
tmtoolkit implements or provides convenient wrappers for several preprocessing methods, including:
- part-of-speech (POS) tagging
- lemmatization and stemming
- extensive token normalization / cleaning methods
- extensive pattern matching capabilities (exact matching, regular expressions or “glob” patterns) to be used in many methods of the package, e.g. for filtering on token, document or document label level, or for keywords-in-context (KWIC)
- generating n-grams
- generating sparse document-term matrices
- management of token metadata
- expanding compound words and “gluing” of specified subsequent tokens, e.g. ["White", "House"] becomes ["White_House"]
All text preprocessing methods can operate in parallel to speed up computations with large datasets.
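Two of the preprocessing operations listed above, n-gram generation and token "gluing", can be sketched in a few lines of plain Python. The function names below are illustrative only and are not part of tmtoolkit's API:

```python
def ngrams(tokens, n=2, join_str=" "):
    """Generate n-grams from a token sequence by sliding a window of size n."""
    return [join_str.join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def glue_tokens(tokens, subseq, glue="_"):
    """Replace each occurrence of the subsequence `subseq` with one glued token."""
    result, i, n = [], 0, len(subseq)
    while i < len(tokens):
        if tokens[i:i + n] == subseq:
            result.append(glue.join(subseq))
            i += n
        else:
            result.append(tokens[i])
            i += 1
    return result


tokens = ["the", "White", "House", "statement"]
print(ngrams(tokens, 2))                         # bigrams of the token sequence
print(glue_tokens(tokens, ["White", "House"]))   # ['the', 'White_House', 'statement']
```

tmtoolkit performs these operations on whole corpora in parallel; the sketch above only shows the per-document logic.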
For topic modeling, tmtoolkit provides:
- model computation in parallel for different corpora and/or parameter sets
- evaluation of topic models (e.g. in order to find an optimal number of topics for a given dataset) using several implemented metrics:
  - model coherence (Mimno et al. 2011) or with metrics implemented in Gensim
  - KL divergence method (Arun et al. 2010)
  - probability of held-out documents (Wallach et al. 2009)
  - pair-wise cosine distance method (Cao Juan et al. 2009)
  - harmonic mean method (Griffiths, Steyvers 2004)
  - the log-likelihood or perplexity methods natively implemented in lda, scikit-learn or gensim
- visualizing topic-word distributions and document-topic distributions as word clouds or heatmaps
- computing coherence for individual topics
- integration of pyLDAvis to visualize results
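To illustrate the idea behind one of these evaluation metrics, here is a minimal NumPy sketch of the pair-wise cosine distance method (Cao Juan et al. 2009): topics are considered more distinct, and the number of topics better chosen, when their topic-word distributions have low average pairwise cosine similarity. The function name and normalization details are illustrative assumptions, not tmtoolkit's actual implementation:

```python
import numpy as np


def avg_pairwise_cosine_sim(topic_word):
    """Mean pairwise cosine similarity between topic-word distributions.

    `topic_word` is a (K topics x V vocabulary words) matrix of word
    probabilities. Lower values indicate more distinct topics, so this
    quantity is minimized when searching for a good number of topics.
    """
    # normalize each topic's word distribution to unit length
    norm = topic_word / np.linalg.norm(topic_word, axis=1, keepdims=True)
    sim = norm @ norm.T                   # K x K cosine similarity matrix
    iu = np.triu_indices(topic_word.shape[0], k=1)  # upper triangle, no diagonal
    return sim[iu].mean()


rng = np.random.default_rng(0)
# a random stand-in for a fitted model's topic-word matrix: 5 topics, 100 words
tw = rng.random((5, 100))
tw /= tw.sum(axis=1, keepdims=True)
print(avg_pairwise_cosine_sim(tw))
```

In practice you would compute this for models fitted with different numbers of topics K and pick the K that minimizes the value.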
Current limitations:
- only German and English language texts are supported for language-dependent text preprocessing methods such as POS tagging or lemmatization
- all data must reside in memory, i.e. there is no streaming of large data from the hard disk (which, for example, Gensim supports)
- no direct support of word embeddings
The package tmtoolkit is available on PyPI and can be installed via the Python package manager pip. It is highly recommended to install tmtoolkit and its dependencies in a Python virtual environment ("venv") and to upgrade to the latest pip version (you may also choose to install virtualenvwrapper, which makes managing venvs a lot easier).
Creating and activating a venv without virtualenvwrapper:
python3 -m venv myenv

# activating the environment (on Windows type "myenv\Scripts\activate.bat")
source myenv/bin/activate
Alternatively, creating and activating a venv with virtualenvwrapper:
mkvirtualenv myenv

# activating the environment
workon myenv
Upgrading pip (only do this when you’ve activated your venv):
pip install -U pip
Now in order to install tmtoolkit, you can choose if you want a minimal installation or install a recommended set of packages that enable most features. For the recommended installation, you can type one of the following, depending on the preferred package for topic modeling:
# recommended installation without topic modeling
pip install -U tmtoolkit[recommended]

# recommended installation with "lda" for topic modeling
pip install -U tmtoolkit[recommended,lda]

# recommended installation with "scikit-learn" for topic modeling
pip install -U tmtoolkit[recommended,sklearn]

# recommended installation with "gensim" for topic modeling
pip install -U tmtoolkit[recommended,gensim]

# you may also select several topic modeling packages
pip install -U tmtoolkit[recommended,lda,sklearn,gensim]
For the minimal installation, you can just do:
pip install -U tmtoolkit
Note: For Linux and macOS users, it's also recommended to install the datatable package (see "Optional packages"), which makes many operations faster and more memory-efficient.
The tmtoolkit package is about 19 MB in size, because it contains some example corpora and additional German language model data for POS tagging.
After that, you should initially run tmtoolkit’s setup routine. This makes sure that all required data files are present and downloads them if necessary:
python -m tmtoolkit setup
tmtoolkit works with Python 3.6, 3.7 or 3.8.
Requirements are automatically installed via pip. Additional packages can also be installed via pip for certain use cases (see “Optional packages”).
A special note for Windows users: tmtoolkit has been tested on Windows and works well (I recommend using the Anaconda distribution for Python when using Windows). However, you will need to wrap all code that uses multiprocessing (i.e. all calls to tmtoolkit.preprocess.TMPreproc and the parallel topic modeling functions) in an if __name__ == '__main__' block like this:
def main():
    # code with multiprocessing comes here
    # ...

if __name__ == '__main__':
    main()
For additional features, you can install further packages from PyPI via pip:
- for faster tabular data creation and access (replaces usage of the pandas package in most functions): datatable. Note that datatable is currently only available for Linux and macOS on Python 3.6 and 3.7.
- for the word cloud functions: wordcloud and Pillow.
- for Excel export: openpyxl.
- for topic modeling, one of the LDA implementations: lda, scikit-learn or gensim.
- for additional topic model coherence metrics: gensim.
For the LDA evaluation metrics griffiths_2004 and held_out_documents_wallach09, it is necessary to install gmpy2 for multiple-precision arithmetic. This in turn requires installing some C header libraries for GMP, MPFR and MPC. On Debian/Ubuntu systems this is done with:
sudo apt install libgmp-dev libmpfr-dev libmpc-dev
After that, gmpy2 can be installed via pip.