A Latent Semantic Analysis package based on standard Latent Semantic Indexing theory.
Latent Semantic Analysis (LSA) Python package
In brief
This Python package, LatentSemanticAnalyzer, provides functions for computing Latent Semantic Analysis (LSA) workflows using sparse-matrix linear algebra. The package mirrors the Mathematica implementation [AAp1]. (There is also a corresponding implementation in R; see [AAp2].)
The package provides:
- The class LatentSemanticAnalyzer
- Functions for applying Latent Semantic Indexing (LSI) weighting functions to matrix entries
- A "data loader" function that returns a pandas data frame with ~580 abstracts of conference presentations
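The LSI weighting mentioned in the second bullet combines, for each matrix entry, a local weight, a global weight, and a row normalizer. Here is a minimal NumPy sketch of the combination used in the usage example below (global weight "IDF", local weight "None", normalizer "Cosine"); the toy count matrix is invented for illustration, and the package itself operates on SSparseMatrix objects rather than dense arrays:

```python
import numpy as np

# Toy document-term count matrix: 3 documents x 4 terms (hypothetical data)
mat = np.array([[2, 0, 1, 0],
                [0, 1, 1, 1],
                [1, 1, 0, 0]], dtype=float)

n_docs = mat.shape[0]
n_docs_per_term = np.count_nonzero(mat, axis=0)

# Global weight "IDF": log(N / n_t) per term column
idf = np.log(n_docs / n_docs_per_term)

# Local weight "None": identity, i.e. keep the raw term frequencies
weighted = mat * idf

# Normalizer "Cosine": scale each document row to unit Euclidean norm
weighted = weighted / np.linalg.norm(weighted, axis=1, keepdims=True)
```

After this transformation every document row has unit length, so dot products between rows are cosine similarities.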
Installation
To install from GitHub, use the shell command:
python -m pip install git+https://github.com/antononcube/Python-packages.git#egg=LatentSemanticAnalyzer\&subdirectory=LatentSemanticAnalyzer
To install from PyPI:
python -m pip install LatentSemanticAnalyzer
LSA workflows
The scope of the package is to facilitate the creation and execution of the LSA workflows summarized in the flow chart of [AA1].
For more details see the article "A monad for Latent Semantic Analysis workflows", [AA1].
Usage example
Here is an example of an LSA pipeline that:
- Ingests a collection of texts
- Makes the corresponding document-term matrix using stemming and removing stop words
- Extracts 40 topics
- Shows a table with the extracted topics
- Shows a table with statistical thesaurus entries for selected words
import random
from LatentSemanticAnalyzer.LatentSemanticAnalyzer import *
from LatentSemanticAnalyzer.DataLoaders import *
import snowballstemmer

# Collection of texts
dfAbstracts = load_abstracts_data_frame()
docs = dict(zip(dfAbstracts.ID, dfAbstracts.Abstract))

# Stemmer object (to preprocess words in the pipeline below)
stemmerObj = snowballstemmer.stemmer("english")

# Words to show statistical thesaurus entries for
words = ["notebook", "computational", "function", "neural", "talk", "programming"]

# Reproducible results
random.seed(12)

# LSA pipeline
lsaObj = (LatentSemanticAnalyzer()
          .make_document_term_matrix(docs=docs,
                                     stop_words=True,
                                     stemming_rules=True,
                                     min_length=3)
          .apply_term_weight_functions(global_weight_func="IDF",
                                       local_weight_func="None",
                                       normalizer_func="Cosine")
          .extract_topics(number_of_topics=40,
                          min_number_of_documents_per_term=10,
                          method="NNMF")
          .echo_topics_interpretation(number_of_terms=12, wide_form=True)
          .echo_statistical_thesaurus(terms=stemmerObj.stemWords(words),
                                      wide_form=True,
                                      number_of_nearest_neighbors=12,
                                      method="cosine",
                                      echo_function=lambda x: print(x.to_string())))
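The statistical thesaurus step at the end of the pipeline is, in essence, a cosine nearest-neighbor lookup over term vectors in the extracted topic space. Here is a minimal NumPy sketch of that idea; the terms-by-topics matrix below is random stand-in data, not the factor matrix the package actually computes:

```python
import numpy as np

# Hypothetical terms-by-topics matrix (random stand-in for the
# factorization result; 4 terms, 5 topics)
rng = np.random.default_rng(12)
terms = ["notebook", "function", "neural", "talk"]
H = rng.random((len(terms), 5))

# Normalize rows so dot products become cosine similarities
normed = H / np.linalg.norm(H, axis=1, keepdims=True)
sim = normed @ normed.T

# Thesaurus entry for "function": other terms sorted by cosine similarity
idx = terms.index("function")
order = np.argsort(-sim[idx])
neighbors = [terms[i] for i in order if i != idx]
print(neighbors)
```

The package's echo_statistical_thesaurus performs this kind of lookup for each requested (stemmed) word and formats the neighbors as a table.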
Related Python packages
This package is based on the Python package SSparseMatrix, [AAp3].
TBF...
Related Mathematica and R packages
Mathematica
The Python pipeline above corresponds to the following pipeline for the Mathematica package [AAp1]:
lsaObj =
LSAMonUnit[aAbstracts]⟹
LSAMonMakeDocumentTermMatrix["StemmingRules" -> Automatic, "StopWords" -> Automatic]⟹
LSAMonEchoDocumentTermMatrixStatistics["LogBase" -> 10]⟹
LSAMonApplyTermWeightFunctions["IDF", "None", "Cosine"]⟹
LSAMonExtractTopics["NumberOfTopics" -> 20, Method -> "NNMF", "MaxSteps" -> 16, "MinNumberOfDocumentsPerTerm" -> 20]⟹
LSAMonEchoTopicsTable["NumberOfTerms" -> 10]⟹
LSAMonEchoStatisticalThesaurus["Words" -> Map[WordData[#, "PorterStem"]&, {"notebook", "computational", "function", "neural", "talk", "programming"}]];
R
The package LSAMon-R, [AAp2], implements a software monad for LSA workflows.
References
Articles
[AA1] Anton Antonov, "A monad for Latent Semantic Analysis workflows", (2019), MathematicaForPrediction at WordPress.
Mathematica and R Packages
[AAp1] Anton Antonov, Monadic Latent Semantic Analysis Mathematica package, (2017), MathematicaForPrediction at GitHub.
[AAp2] Anton Antonov, Latent Semantic Analysis Monad in R (2019), R-packages at GitHub/antononcube.
Python packages
[AAp3] Anton Antonov, SSparseMatrix Python package, (2021), PyPI.
[AAp4] Anton Antonov, SparseMatrixRecommender Python package, (2021), PyPI.
[AAp5] Anton Antonov, RandomDataGenerators Python package, (2021), PyPI.
[AAp6] Anton Antonov, RandomMandala Python package, (2021), PyPI.
[MZp1] Marinka Zitnik and Blaz Zupan, Nimfa: A Python Library for Nonnegative Matrix Factorization, (2013-2019), PyPI.
[SDp1] Snowball Developers, SnowballStemmer Python package, (2013-2021), PyPI.