
Tomoto, Topic Modeling Tool for Python


What is tomotopy?

tomotopy is a Python extension of tomoto (Topic Modeling Tool), a Gibbs-sampling based topic model library written in C++. It utilizes the vectorization capabilities of modern CPUs to maximize speed. The current version of tomoto supports several major topic models, including:

  • Latent Dirichlet Allocation (tomotopy.LDAModel)

  • Labeled LDA (tomotopy.LLDAModel)

  • Partially Labeled LDA (tomotopy.PLDAModel)

  • Supervised LDA (tomotopy.SLDAModel)

  • Dirichlet Multinomial Regression (tomotopy.DMRModel)

  • Generalized Dirichlet Multinomial Regression (tomotopy.GDMRModel)

  • Hierarchical Dirichlet Process (tomotopy.HDPModel)

  • Hierarchical LDA (tomotopy.HLDAModel)

  • Multi Grain LDA (tomotopy.MGLDAModel)

  • Pachinko Allocation (tomotopy.PAModel)

  • Hierarchical PA (tomotopy.HPAModel)

  • Correlated Topic Model (tomotopy.CTModel)

  • Dynamic Topic Model (tomotopy.DTModel)

  • Pseudo-document based Topic Model (tomotopy.PTModel).

https://badge.fury.io/py/tomotopy.svg

Getting Started

You can install tomotopy easily using pip (https://pypi.org/project/tomotopy/):

$ pip install --upgrade pip
$ pip install tomotopy

The supported OS and Python versions are:

  • Linux (x86-64) with Python >= 3.6

  • macOS >= 10.13 with Python >= 3.6

  • Windows 7 or later (x86, x86-64) with Python >= 3.6

  • Other OS with Python >= 3.6: compilation from source code is required (with a C++14-compatible compiler)

After installing, you can start using tomotopy simply by importing it.

import tomotopy as tp
print(tp.isa) # prints 'avx2', 'avx', 'sse2' or 'none'

Currently, tomotopy can exploit the AVX2, AVX, or SSE2 SIMD instruction sets to maximize performance. When the package is imported, it checks the available instruction sets and selects the best option. If tp.isa reports none, training iterations may take a long time. However, since most modern Intel and AMD CPUs provide SIMD instruction sets, SIMD acceleration usually brings a large improvement.

Here is sample code for simple LDA training on texts from a ‘sample.txt’ file.

import tomotopy as tp
mdl = tp.LDAModel(k=20)
for line in open('sample.txt'):
    mdl.add_doc(line.strip().split())

for i in range(0, 100, 10):
    mdl.train(10)
    print('Iteration: {}\tLog-likelihood: {}'.format(i, mdl.ll_per_word))

for k in range(mdl.k):
    print('Top 10 words of topic #{}'.format(k))
    print(mdl.get_topic_words(k, top_n=10))

mdl.summary()

Performance of tomotopy

tomotopy uses Collapsed Gibbs Sampling (CGS) to infer the distribution of topics and the distribution of words. Generally, CGS converges more slowly than the Variational Bayes (VB) used by [gensim’s LdaModel], but each of its iterations can be computed much faster. In addition, tomotopy can take advantage of multicore CPUs with SIMD instruction sets, which results in faster iterations.

[gensim’s LdaModel]: https://radimrehurek.com/gensim/models/ldamodel.html

The following chart compares the running time of the LDA model between tomotopy and gensim. The input data consists of 1,000 random documents from English Wikipedia with 1,506,966 words (about 10.1 MB). tomotopy runs 200 training iterations and gensim runs 10 iterations.

https://bab2min.github.io/tomotopy/images/tmt_i5.png

↑ Performance in Intel i5-6600, x86-64 (4 cores)

https://bab2min.github.io/tomotopy/images/tmt_xeon.png

↑ Performance in Intel Xeon E5-2620 v4, x86-64 (8 cores, 16 threads)

https://bab2min.github.io/tomotopy/images/tmt_r7_3700x.png

↑ Performance in AMD Ryzen7 3700X, x86-64 (8 cores, 16 threads)

Although tomotopy ran 20 times as many iterations, its overall running time was 5 to 10 times shorter than gensim's, and it yields a stable result.

It is difficult to compare CGS and VB directly because they are totally different techniques, but from a practical point of view we can compare their speed and their results. The following chart shows the log-likelihood per word for the results of the two models.

https://bab2min.github.io/tomotopy/images/LLComp.png

The SIMD instruction set has a great effect on performance. The following is a comparison between SIMD instruction sets.

https://bab2min.github.io/tomotopy/images/SIMDComp.png

Fortunately, most recent x86-64 CPUs provide the AVX2 instruction set, so you can enjoy the performance of AVX2.

Vocabulary controlling using CF and DF

CF (collection frequency) and DF (document frequency) are concepts from information retrieval: CF is the total number of times a word appears in the corpus, and DF is the number of documents in the corpus in which the word appears. tomotopy exposes these two measures through the parameters min_cf and min_df, which trim low-frequency words when building the corpus.

For example, let’s say we have 5 documents #0 ~ #4 which are composed of the following words:

#0 : a, b, c, d, e, c
#1 : a, b, e, f
#2 : c, d, c
#3 : a, e, f, g
#4 : a, b, g

Both the CF of a and the CF of c are 4, because each appears 4 times in the entire corpus. But the DF of a is 4 while the DF of c is 2, because a appears in #0, #1, #3 and #4 whereas c appears only in #0 and #2. So if we trim low-frequency words using min_cf=3, the result is as follows:

(d, f and g are removed.)
#0 : a, b, c, e, c
#1 : a, b, e
#2 : c, c
#3 : a, e
#4 : a, b

However, when min_df=3 the result is:

(c, d, f and g are removed.)
#0 : a, b, e
#1 : a, b, e
#2 : (empty doc)
#3 : a, e
#4 : a, b

As we can see, min_df is a stronger criterion than min_cf. In topic modeling, words that appear repeatedly in only one document do not contribute to estimating the topic-word distribution, so removing words with a low DF is a good way to reduce model size while preserving the results of the final model. In short, prefer min_df over min_cf; both can be passed to a model's constructor, as in the sketch below.
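The following is a minimal sketch of trimming by document frequency, assuming a whitespace-tokenized 'sample.txt' file (a hypothetical input); min_cf can be passed in the same way:

import tomotopy as tp

# min_df=3 drops words that appear in fewer than 3 documents;
# min_cf would instead filter by total occurrences in the corpus.
mdl = tp.LDAModel(k=20, min_df=3)
for line in open('sample.txt'):
    mdl.add_doc(line.strip().split())

mdl.train(100)
print(mdl.used_vocabs)  # vocabulary that remains after trimming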

Model Save and Load

tomotopy provides save and load methods for each topic model class, so you can save a model to a file whenever you want and re-load it from that file later.

import tomotopy as tp

mdl = tp.HDPModel()
for line in open('sample.txt'):
    mdl.add_doc(line.strip().split())

for i in range(0, 100, 10):
    mdl.train(10)
    print('Iteration: {}\tLog-likelihood: {}'.format(i, mdl.ll_per_word))

# save into file
mdl.save('sample_hdp_model.bin')

# load from file
mdl = tp.HDPModel.load('sample_hdp_model.bin')
for k in range(mdl.k):
    if not mdl.is_live_topic(k): continue
    print('Top 10 words of topic #{}'.format(k))
    print(mdl.get_topic_words(k, top_n=10))

# the saved model is an HDP model,
# so loading it as an LDA model will raise an exception
mdl = tp.LDAModel.load('sample_hdp_model.bin')

When you load a model from a file, the model type stored in the file must match the class whose load method you call.

See the tomotopy.LDAModel.save and tomotopy.LDAModel.load methods for more details.

Documents in the Model and out of the Model

Topic models can be used for two major purposes. The basic one is to discover topics from a set of documents as a result of training a model, and the more advanced one is to infer topic distributions for unseen documents using a trained model.

We call a document used for the former purpose (model training) a document in the model, and a document used for the latter purpose (unseen during training) a document out of the model.

In tomotopy, these two kinds of document are created differently. A document in the model can be created by the tomotopy.LDAModel.add_doc method. add_doc can be called only before tomotopy.LDAModel.train starts. In other words, once train has been called, add_doc can no longer add a document to the model, because the set of documents used for training has become fixed.

To acquire an instance of the created document, use tomotopy.LDAModel.docs like this:

mdl = tp.LDAModel(k=20)
idx = mdl.add_doc(words)
if idx < 0: raise RuntimeError("Failed to add doc")
doc_inst = mdl.docs[idx]
# doc_inst is an instance of the added document

A document out of the model is created by the tomotopy.LDAModel.make_doc method. make_doc can be called only after train starts. If you use make_doc before the set of documents used for training has become fixed, you may get wrong results. Since make_doc returns the instance directly, you can use its return value for further manipulation.

mdl = tp.LDAModel(k=20)
# add_doc ...
mdl.train(100)
doc_inst = mdl.make_doc(unseen_doc) # doc_inst is an instance of the unseen document

Inference for Unseen Documents

If a new document is created by tomotopy.LDAModel.make_doc, its topic distribution can be inferred by the model. Inference for unseen documents should be performed using the tomotopy.LDAModel.infer method.

mdl = tp.LDAModel(k=20)
# add_doc ...
mdl.train(100)
doc_inst = mdl.make_doc(unseen_doc)
topic_dist, ll = mdl.infer(doc_inst)
print("Topic Distribution for Unseen Docs: ", topic_dist)
print("Log-likelihood of inference: ", ll)

The infer method accepts either a single instance of tomotopy.Document or a list of such instances, as sketched below. See tomotopy.LDAModel.infer for more details.
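A minimal sketch of batch inference, assuming mdl is an already-trained model and unseen_texts is a hypothetical list of token lists:

# infer several unseen documents in one call
doc_insts = [mdl.make_doc(words) for words in unseen_texts]
topic_dists, ll = mdl.infer(doc_insts)
for dist in topic_dists:
    print(dist)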

Corpus and transform

Every topic model in tomotopy has its own internal document type, and documents must be created and added in the form suitable for each model through that model's add_doc method. This becomes quite inconvenient when you want to add the same list of documents to several different models, because add_doc must be called separately for each model. To solve this, tomotopy provides the tomotopy.utils.Corpus class, which holds a list of documents. A tomotopy.utils.Corpus can be inserted into any model by passing it as the corpus argument of __init__ or to the add_corpus method of each model. Inserting a tomotopy.utils.Corpus has the same effect as inserting the documents the corpus holds.

Some topic models require different data for their documents. For example, tomotopy.DMRModel requires a metadata argument of type str, while tomotopy.PLDAModel requires a labels argument of type List[str]. Since tomotopy.utils.Corpus holds an independent set of documents rather than being tied to a specific topic model, its data may not match what a particular model expects when the corpus is added to it. In this case, the extra data can be transformed to fit the target topic model using the transform argument. See the following code for details:

from tomotopy import DMRModel
from tomotopy.utils import Corpus

corpus = Corpus()
corpus.add_doc("a b c d e".split(), a_data=1)
corpus.add_doc("e f g h i".split(), a_data=2)
corpus.add_doc("i j k l m".split(), a_data=3)

model = DMRModel(k=10)
model.add_corpus(corpus)
# The `a_data` field in `corpus` is lost here,
# and the `metadata` field that `DMRModel` requires is filled with its default value, an empty str.

assert model.docs[0].metadata == ''
assert model.docs[1].metadata == ''
assert model.docs[2].metadata == ''

def transform_a_data_to_metadata(misc: dict):
    return {'metadata': str(misc['a_data'])}
# this function transforms `a_data` to `metadata`

model = DMRModel(k=10)
model.add_corpus(corpus, transform=transform_a_data_to_metadata)
# Now the docs in `model` have non-default `metadata`, generated from the `a_data` field.

assert model.docs[0].metadata == '1'
assert model.docs[1].metadata == '2'
assert model.docs[2].metadata == '3'

Parallel Sampling Algorithms

Since version 0.5.0, tomotopy allows you to choose a parallelism algorithm. The algorithm used in versions up to 0.4.2 is COPY_MERGE, which is available for all topic models. The new algorithm PARTITION, available since 0.5.0, makes training generally faster and more memory-efficient, but it is not available for all topic models. You can select the algorithm via the parallel parameter of train and infer, as in the sketch below.
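A minimal sketch of selecting the algorithm explicitly (assuming a model mdl to which documents have already been added):

import tomotopy as tp

mdl = tp.LDAModel(k=20)
# ... add documents with mdl.add_doc() ...

# Request the PARTITION scheme; tp.ParallelScheme.COPY_MERGE is the other option.
mdl.train(100, workers=8, parallel=tp.ParallelScheme.PARTITION)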

The following chart shows the speed difference between the two algorithms based on the number of topics and the number of workers.

https://bab2min.github.io/tomotopy/images/algo_comp.png https://bab2min.github.io/tomotopy/images/algo_comp2.png

Performance by Version

Performance changes by version are shown in the following graphs. The time taken to train an LDA model for 1,000 iterations was measured. (Docs: 11,314, Vocab: 60,382, Words: 2,364,724, Intel Xeon Gold 5120 @ 2.2 GHz)

https://bab2min.github.io/tomotopy/images/lda-perf-t1.png https://bab2min.github.io/tomotopy/images/lda-perf-t4.png https://bab2min.github.io/tomotopy/images/lda-perf-t8.png

Pinning Topics using Word Priors

Since version 0.6.0, a new method tomotopy.LDAModel.set_word_prior has been added. It allows you to control the word prior for each topic. For example, you can set the weight of the word ‘church’ to 1.0 in topic 0 and to 0.1 in the rest of the topics with the following code. This means that the probability that the word ‘church’ is assigned to topic 0 is 10 times higher than the probability of its being assigned to any other topic. As a result, most occurrences of ‘church’ are assigned to topic 0, and topic 0 comes to contain many words related to ‘church’. This allows you to pin certain subjects to specific topic numbers.

import tomotopy as tp
mdl = tp.LDAModel(k=20)

# add documents into `mdl`

# setting word prior
mdl.set_word_prior('church', [1.0 if k == 0 else 0.1 for k in range(20)])

See word_prior_example in example.py for more details.

Examples

You can find example Python code for tomotopy at https://github.com/bab2min/tomotopy/blob/main/examples/ .

You can also get the data file used in the example code at https://drive.google.com/file/d/18OpNijd4iwPyYZ2O7pQoPyeTAKEXa71J/view .

License

tomotopy is licensed under the terms of the MIT License, meaning you can use it for any reasonable purpose and retain complete ownership of all the documentation you produce.

History

  • 0.12.7 (2023-12-19)
    • New features
      • Added Topic Model Viewer tomotopy.viewer.open_viewer()

      • Optimized the performance of tomotopy.utils.Corpus.process()

    • Bug fixes
      • Document.span now returns ranges in character units, not in byte units.

  • 0.12.6 (2023-12-11)
    • New features
      • Added some convenience features to tomotopy.LDAModel.train and tomotopy.LDAModel.set_word_prior.

      • LDAModel.train now has new arguments callback, callback_interval and show_progress to monitor the training progress.

      • LDAModel.set_word_prior now can accept Dict[int, float] type as its argument prior.

  • 0.12.5 (2023-08-03)
    • New features
      • Added support for Linux ARM64 architecture.

  • 0.12.4 (2023-01-22)
    • New features
      • Added support for macOS ARM64 architecture.

    • Bug fixes
      • Fixed an issue where tomotopy.Document.get_sub_topic_dist() raises a bad argument exception.

      • Fixed an issue where exception raising sometimes causes crashes.

  • 0.12.3 (2022-07-19)
    • New features
      • Now, inserting an empty document using tomotopy.LDAModel.add_doc() just ignores it instead of raising an exception. If the newly added argument ignore_empty_words is set to False, an exception is raised as before.

      • tomotopy.HDPModel.purge_dead_topics() method is added to remove non-live topics from the model.

    • Bug fixes
      • Fixed an issue that prevents setting user defined values for nuSq in tomotopy.SLDAModel (by @jucendrero).

      • Fixed an issue where tomotopy.utils.Coherence did not work for tomotopy.DTModel.

      • Fixed an issue that often crashed when calling make_dic() before calling train().

      • Resolved the problem that the results of tomotopy.DMRModel and tomotopy.GDMRModel are different even when the seed is fixed.

      • The parameter optimization process of tomotopy.DMRModel and tomotopy.GDMRModel has been improved.

      • Fixed an issue that sometimes crashed when calling tomotopy.PTModel.copy().

  • 0.12.2 (2021-09-06)
    • An issue where calling convert_to_lda of tomotopy.HDPModel with min_cf > 0, min_df > 0 or rm_top > 0 causes a crash has been fixed.

    • A new argument from_pseudo_doc is added to tomotopy.Document.get_topics and tomotopy.Document.get_topic_dist. This argument is only valid for documents of PTModel, and it enables you to control the source used for computing the topic distribution.

    • A default value for argument p of tomotopy.PTModel has been changed. The new default value is k * 10.

    • Using documents generated by make_doc without calling infer doesn’t cause a crash anymore, but just prints warning messages.

    • An issue where the internal C++ code failed to compile in a clang C++17 environment has been fixed.

  • 0.12.1 (2021-06-20)
    • An issue where tomotopy.LDAModel.set_word_prior() causes a crash has been fixed.

    • Now tomotopy.LDAModel.perplexity and tomotopy.LDAModel.ll_per_word return the accurate value when TermWeight is not ONE.

    • tomotopy.LDAModel.used_vocab_weighted_freq was added, which returns term-weighted frequencies of words.

    • Now tomotopy.LDAModel.summary() shows not only the entropy of words, but also the entropy of term-weighted words.

  • 0.12.0 (2021-04-26)
    • Now tomotopy.DMRModel and tomotopy.GDMRModel support multiple values of metadata (see https://github.com/bab2min/tomotopy/blob/main/examples/dmr_multi_label.py )

    • The performance of tomotopy.GDMRModel was improved.

    • A copy() method has been added for all topic models to do a deep copy.

    • An issue was fixed where words excluded from training (by min_cf, min_df) had an incorrect topic id. Now all excluded words have -1 as their topic id.

    • Now all exceptions and warnings generated by tomotopy follow standard Python types.

    • Compiler requirements have been raised to C++14.

  • 0.11.1 (2021-03-28)
    • A critical bug of asymmetric alphas was fixed. Due to this bug, version 0.11.0 has been removed from releases.

  • 0.11.0 (2021-03-26) (removed)
    • A new topic model tomotopy.PTModel for short texts was added into the package.

    • An issue was fixed where tomotopy.HDPModel.infer causes a segmentation fault sometimes.

    • A mismatch of numpy API version was fixed.

    • Now asymmetric document-topic priors are supported.

    • Serializing topic models to bytes in memory is supported.

    • An argument normalize was added to get_topic_dist(), get_topic_word_dist() and get_sub_topic_dist() for controlling normalization of results.

    • Now tomotopy.DMRModel.lambdas and tomotopy.DMRModel.alpha give correct values.

    • Categorical metadata support for tomotopy.GDMRModel was added (see https://github.com/bab2min/tomotopy/blob/main/examples/gdmr_both_categorical_and_numerical.py ).

    • Python3.5 support was dropped.

  • 0.10.2 (2021-02-16)
    • An issue was fixed where tomotopy.CTModel.train fails with large K.

    • An issue was fixed where tomotopy.utils.Corpus lost its uid values.

  • 0.10.1 (2021-02-14)
    • An issue was fixed where tomotopy.utils.Corpus.extract_ngrams crashed with empty input.

    • An issue was fixed where tomotopy.LDAModel.infer raised an exception with valid input.

    • An issue was fixed where tomotopy.HLDAModel.infer generated a wrong tomotopy.Document.path.

    • A new parameter freeze_topics was added to tomotopy.HLDAModel.train, so you can control whether new topics are created during training.

  • 0.10.0 (2020-12-19)
    • The interfaces of tomotopy.utils.Corpus and of tomotopy.LDAModel.docs were unified. Now you can access documents in a corpus in the same manner.

    • __getitem__ of tomotopy.utils.Corpus was improved. In addition to indexing by int, indexing by Iterable[int], slicing, and indexing by uid are now supported.

    • New methods tomotopy.utils.Corpus.extract_ngrams and tomotopy.utils.Corpus.concat_ngrams were added. They extract n-gram collocations using PMI and concatenate them into single words.

    • A new method tomotopy.LDAModel.add_corpus was added, and tomotopy.LDAModel.infer can receive corpus as input.

    • A new module tomotopy.coherence was added. It provides the way to calculate coherence of the model.

    • A parameter window_size was added to tomotopy.label.FoRelevance.

    • An issue was fixed where NaN often occurred when training tomotopy.HDPModel.

    • Now Python3.9 is supported.

    • The dependency on py-cpuinfo was removed and module initialization was improved.

  • 0.9.1 (2020-08-08)
    • Memory leaks in version 0.9.0 were fixed.

    • tomotopy.CTModel.summary() was fixed.

  • 0.9.0 (2020-08-04)
    • The tomotopy.LDAModel.summary() method, which prints human-readable summary of the model, has been added.

    • The random number generator of the package has been replaced with [EigenRand]. It speeds up random number generation and resolves result differences between platforms.

    • Because of this, even with the same seed, training results may differ from those of versions before 0.9.0.

    • Fixed a training error in tomotopy.HDPModel.

    • tomotopy.DMRModel.alpha now shows Dirichlet prior of per-document topic distribution by metadata.

    • tomotopy.DTModel.get_count_by_topics() has been modified to return a 2-dimensional ndarray.

    • tomotopy.DTModel.alpha has been modified to return the same value as tomotopy.DTModel.get_alpha().

    • Fixed an issue where the metadata value could not be obtained for the document of tomotopy.GDMRModel.

    • tomotopy.HLDAModel.alpha now shows Dirichlet prior of per-document depth distribution.

    • tomotopy.LDAModel.global_step has been added.

    • tomotopy.MGLDAModel.get_count_by_topics() now returns the word count for both global and local topics.

    • tomotopy.PAModel.alpha, tomotopy.PAModel.subalpha, and tomotopy.PAModel.get_count_by_super_topic() have been added.

[EigenRand]: https://github.com/bab2min/EigenRand

  • 0.8.2 (2020-07-14)
    • New properties tomotopy.DTModel.num_timepoints and tomotopy.DTModel.num_docs_by_timepoint have been added.

    • A bug which caused different results on different platforms even when seeds were the same was partially fixed. As a result of this fix, 32-bit tomotopy now yields different training results from earlier versions.

  • 0.8.1 (2020-06-08)
    • A bug where tomotopy.LDAModel.used_vocabs returned an incorrect value was fixed.

    • Now tomotopy.CTModel.prior_cov returns a covariance matrix with shape [k, k].

    • Now tomotopy.CTModel.get_correlations with empty arguments returns a correlation matrix with shape [k, k].

  • 0.8.0 (2020-06-06)
    • Since NumPy was introduced in tomotopy, many methods and properties of tomotopy now return numpy.ndarray instead of just list.

    • Tomotopy has a new dependency NumPy >= 1.10.0.

    • A wrong estimation of tomotopy.HDPModel.infer was fixed.

    • A new method for converting an HDPModel to an LDAModel was added.

    • New properties including tomotopy.LDAModel.used_vocabs, tomotopy.LDAModel.used_vocab_freq and tomotopy.LDAModel.used_vocab_df were added into topic models.

    • A new g-DMR topic model (tomotopy.GDMRModel) was added.

    • An error when initializing tomotopy.label.FoRelevance on macOS was fixed.

    • An error that occurred when using a tomotopy.utils.Corpus created without raw parameters was fixed.

  • 0.7.1 (2020-05-08)
    • tomotopy.Document.path was added for tomotopy.HLDAModel.

    • A memory corruption bug in tomotopy.label.PMIExtractor was fixed.

    • A compile error in gcc 7 was fixed.

  • 0.7.0 (2020-04-18)
    • tomotopy.DTModel was added into the package.

    • A bug in tomotopy.utils.Corpus.save was fixed.

    • A new method tomotopy.Document.get_count_vector was added into Document class.

    • Now linux distributions use manylinux2010 and an additional optimization is applied.

  • 0.6.2 (2020-03-28)
    • A critical bug related to save and load was fixed. Version 0.6.0 and 0.6.1 have been removed from releases.

  • 0.6.1 (2020-03-22) (removed)
    • A bug related to module loading was fixed.

  • 0.6.0 (2020-03-22) (removed)
    • tomotopy.utils.Corpus class that manages multiple documents easily was added.

    • tomotopy.LDAModel.set_word_prior method that controls word-topic priors of topic models was added.

    • A new argument min_df that filters words based on document frequency was added into every topic model’s __init__.

    • tomotopy.label, the submodule about topic labeling was added. Currently, only tomotopy.label.FoRelevance is provided.

  • 0.5.2 (2020-03-01)
    • A segmentation fault problem was fixed in tomotopy.LLDAModel.add_doc.

    • A bug where infer of tomotopy.HDPModel sometimes crashed the program was fixed.

    • A crash issue in tomotopy.LDAModel.infer with ps=tomotopy.ParallelScheme.PARTITION, together=True was fixed.

  • 0.5.1 (2020-01-11)
    • A bug where tomotopy.SLDAModel.make_doc didn’t support missing values for y was fixed.

    • Now tomotopy.SLDAModel fully supports missing values for response variables y. Documents with missing values (NaN) are included in topic modeling, but excluded from the regression of response variables.

  • 0.5.0 (2019-12-30)
    • Now tomotopy.PAModel.infer returns both the topic distribution and the sub-topic distribution.

    • New methods get_sub_topics and get_sub_topic_dist were added into tomotopy.Document. (for PAModel)

    • New parameter parallel was added for tomotopy.LDAModel.train and tomotopy.LDAModel.infer method. You can select parallelism algorithm by changing this parameter.

    • tomotopy.ParallelScheme.PARTITION, a new algorithm, was added. It works efficiently when the number of workers is large and the number of topics or the size of the vocabulary is big.

    • A bug where rm_top didn’t work at min_cf < 2 was fixed.

  • 0.4.2 (2019-11-30)
    • Wrong topic assignments of tomotopy.LLDAModel and tomotopy.PLDAModel were fixed.

    • Readable __repr__ of tomotopy.Document and tomotopy.Dictionary was implemented.

  • 0.4.1 (2019-11-27)
    • A bug at init function of tomotopy.PLDAModel was fixed.

  • 0.4.0 (2019-11-18)
    • New models including tomotopy.PLDAModel and tomotopy.HLDAModel were added into the package.

  • 0.3.1 (2019-11-05)
    • An issue where get_topic_dist() returns incorrect value when min_cf or rm_top is set was fixed.

    • The return value of get_topic_dist() of tomotopy.MGLDAModel document was fixed to include local topics.

    • The estimation speed with tw=ONE was improved.

  • 0.3.0 (2019-10-06)
    • A new model, tomotopy.LLDAModel was added into the package.

    • A crashing issue of HDPModel was fixed.

    • Since hyperparameter estimation for HDPModel was implemented, the result of HDPModel may differ from previous versions.

      If you want to turn off hyperparameter estimation of HDPModel, set optim_interval to zero.

  • 0.2.0 (2019-08-18)
    • New models including tomotopy.CTModel and tomotopy.SLDAModel were added into the package.

    • A new parameter option rm_top was added for all topic models.

    • The problems in save and load method for PAModel and HPAModel were fixed.

    • An occasional crash in loading HDPModel was fixed.

    • The problem that ll_per_word was calculated incorrectly when min_cf > 0 was fixed.

  • 0.1.6 (2019-08-09)
    • Compile errors with clang in the macOS environment were fixed.

  • 0.1.4 (2019-08-05)
    • An issue where add_doc received an empty list as input was fixed.

    • An issue where tomotopy.PAModel.get_topic_words didn’t extract the word distribution of a subtopic was fixed.

  • 0.1.3 (2019-05-19)
    • The parameter min_cf and its stopword-removing function were added for all topic models.

  • 0.1.0 (2019-05-12)
    • First version of tomotopy
