
Tomoto, Topic Modeling Tool for Python


What is tomotopy?

tomotopy is a Python extension of tomoto (Topic Modeling Tool), a Gibbs-sampling based topic model library written in C++. It utilizes vectorization on modern CPUs to maximize speed. The current version of tomoto supports several major topic models, including

  • Latent Dirichlet Allocation (tomotopy.LDAModel)

  • Labeled LDA (tomotopy.LLDAModel)

  • Partially Labeled LDA (tomotopy.PLDAModel)

  • Supervised LDA (tomotopy.SLDAModel)

  • Dirichlet Multinomial Regression (tomotopy.DMRModel)

  • Generalized Dirichlet Multinomial Regression (tomotopy.GDMRModel)

  • Hierarchical Dirichlet Process (tomotopy.HDPModel)

  • Hierarchical LDA (tomotopy.HLDAModel)

  • Multi Grain LDA (tomotopy.MGLDAModel)

  • Pachinko Allocation (tomotopy.PAModel)

  • Hierarchical PA (tomotopy.HPAModel)

  • Correlated Topic Model (tomotopy.CTModel)

  • Dynamic Topic Model (tomotopy.DTModel)

  • Pseudo-document based Topic Model (tomotopy.PTModel)


Getting Started

You can install tomotopy easily using pip (https://pypi.org/project/tomotopy/):

$ pip install --upgrade pip
$ pip install tomotopy

The supported OS and Python versions are:

  • Linux (x86-64) with Python >= 3.6

  • macOS >= 10.13 with Python >= 3.6

  • Windows 7 or later (x86, x86-64) with Python >= 3.6

  • Other OS with Python >= 3.6: compilation from source code required (with a C++14 compatible compiler)

After installation, you can start using tomotopy simply by importing it.

import tomotopy as tp
print(tp.isa) # prints 'avx2', 'avx', 'sse2' or 'none'

Currently, tomotopy can exploit the AVX2, AVX or SSE2 SIMD instruction sets to maximize performance. When the package is imported, it checks the available instruction sets and selects the best option. If tp.isa reports none, training iterations may take a long time. However, since most modern Intel and AMD CPUs provide SIMD instruction sets, SIMD acceleration yields a large improvement in most cases.

Here is sample code for simple LDA training on texts from a ‘sample.txt’ file.

import tomotopy as tp
mdl = tp.LDAModel(k=20)
for line in open('sample.txt'):
    mdl.add_doc(line.strip().split())

for i in range(0, 100, 10):
    mdl.train(10)
    print('Iteration: {}\tLog-likelihood: {}'.format(i, mdl.ll_per_word))

for k in range(mdl.k):
    print('Top 10 words of topic #{}'.format(k))
    print(mdl.get_topic_words(k, top_n=10))

mdl.summary()

Performance of tomotopy

tomotopy uses Collapsed Gibbs Sampling (CGS) to infer the distribution of topics and the distribution of words. Generally, CGS converges more slowly than the Variational Bayes (VB) used by [gensim’s LdaModel], but each of its iterations can be computed much faster. In addition, tomotopy can take advantage of multicore CPUs and SIMD instruction sets, which makes iterations even faster.

[gensim’s LdaModel]: https://radimrehurek.com/gensim/models/ldamodel.html

The following chart compares the running time of an LDA model between tomotopy and gensim. The input data consists of 1,000 random documents from English Wikipedia containing 1,506,966 words (about 10.1 MB). tomotopy ran 200 iterations and gensim ran 10 iterations.

https://bab2min.github.io/tomotopy/images/tmt_i5.png

↑ Performance in Intel i5-6600, x86-64 (4 cores)

https://bab2min.github.io/tomotopy/images/tmt_xeon.png

↑ Performance in Intel Xeon E5-2620 v4, x86-64 (8 cores, 16 threads)

https://bab2min.github.io/tomotopy/images/tmt_r7_3700x.png

↑ Performance in AMD Ryzen7 3700X, x86-64 (8 cores, 16 threads)

Although tomotopy performed 20 times more iterations, its overall running time was 5 to 10 times shorter than gensim's, and it yields stable results.

It is difficult to compare CGS and VB directly because they are totally different techniques, but from a practical point of view we can compare their speed and results. The following chart shows the log-likelihood per word of the two models' results.

https://bab2min.github.io/tomotopy/images/LLComp.png

The SIMD instruction set has a great effect on performance. The following is a comparison between SIMD instruction sets.

https://bab2min.github.io/tomotopy/images/SIMDComp.png

Fortunately, most recent x86-64 CPUs provide the AVX2 instruction set, so the AVX2 performance is available in most environments.

Vocabulary controlling using CF and DF

CF (collection frequency) and DF (document frequency) are concepts from information retrieval: CF is the total number of times a word appears in the corpus, and DF is the number of documents in the corpus in which the word appears. tomotopy exposes these two measures through the parameters min_cf and min_df to trim low-frequency words when building the corpus.

For example, let’s say we have 5 documents #0 ~ #4 which are composed of the following words:

#0 : a, b, c, d, e, c
#1 : a, b, e, f
#2 : c, d, c
#3 : a, e, f, g
#4 : a, b, g

The CF of both a and c is 4 because each appears 4 times in the entire corpus. But the DF of a is 4 while the DF of c is 2, because a appears in #0, #1, #3 and #4, whereas c appears only in #0 and #2. So if we trim low-frequency words using min_cf=3, the result is as follows:

(d, f and g are removed.)
#0 : a, b, c, e, c
#1 : a, b, e
#2 : c, c
#3 : a, e
#4 : a, b

However, when min_df=3, the result is:

(c, d, f and g are removed.)
#0 : a, b, e
#1 : a, b, e
#2 : (empty doc)
#3 : a, e
#4 : a, b

As we can see, min_df is a stronger criterion than min_cf. In topic modeling, words that appear repeatedly in only one document do not contribute to estimating the topic-word distribution, so removing words with a low DF is a good way to reduce the model size while preserving the results of the final model. In short, prefer min_df over min_cf.
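
Both thresholds are passed to the model constructor. A minimal sketch, reusing the ‘sample.txt’ format from the earlier examples (the threshold values here are only illustrative):

import tomotopy as tp

# words with document frequency < 3 or collection frequency < 5
# are removed from the vocabulary before training
mdl = tp.LDAModel(k=20, min_df=3, min_cf=5)
for line in open('sample.txt'):
    mdl.add_doc(line.strip().split())

mdl.train(100)
mdl.summary()  # the summary includes the size of the vocabulary that survived the filters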

Model Save and Load

tomotopy provides save and load methods for each topic model class, so you can save a model to a file whenever you want and re-load it from that file later.

import tomotopy as tp

mdl = tp.HDPModel()
for line in open('sample.txt'):
    mdl.add_doc(line.strip().split())

for i in range(0, 100, 10):
    mdl.train(10)
    print('Iteration: {}\tLog-likelihood: {}'.format(i, mdl.ll_per_word))

# save into file
mdl.save('sample_hdp_model.bin')

# load from file
mdl = tp.HDPModel.load('sample_hdp_model.bin')
for k in range(mdl.k):
    if not mdl.is_live_topic(k): continue
    print('Top 10 words of topic #{}'.format(k))
    print(mdl.get_topic_words(k, top_n=10))

# the saved model is an HDP model,
# so loading it with the LDA model class will raise an exception
mdl = tp.LDAModel.load('sample_hdp_model.bin')

When you load a model from a file, the model type stored in the file must match the class whose load method you call.

See more at tomotopy.LDAModel.save and tomotopy.LDAModel.load methods.

Documents in the Model and out of the Model

We can use a topic model for two major purposes. The basic one is to discover topics from a set of documents as a result of training the model, and the more advanced one is to infer topic distributions for unseen documents using the trained model.

We call a document used for the former purpose (model training) a document in the model, and a document used for the latter purpose (unseen during training) a document out of the model.

In tomotopy, these two kinds of document are created differently. A document in the model is created by the tomotopy.LDAModel.add_doc method. add_doc can only be called before tomotopy.LDAModel.train starts; in other words, once train has been called, add_doc can no longer add a document to the model because the set of documents used for training has become fixed.

To acquire an instance of the created document, use tomotopy.LDAModel.docs like this:

mdl = tp.LDAModel(k=20)
idx = mdl.add_doc(words)
if idx < 0: raise RuntimeError("Failed to add doc")
doc_inst = mdl.docs[idx]
# doc_inst is an instance of the added document

A document out of the model is created by the tomotopy.LDAModel.make_doc method. make_doc can only be called after train starts; if you use make_doc before the set of documents used for training has become fixed, you may get wrong results. Since make_doc returns the instance directly, you can use its return value for further manipulation.

mdl = tp.LDAModel(k=20)
# add_doc ...
mdl.train(100)
doc_inst = mdl.make_doc(unseen_doc) # doc_inst is an instance of the unseen document

Inference for Unseen Documents

If a new document is created by tomotopy.LDAModel.make_doc, its topic distribution can be inferred by the model. Inference for unseen documents should be performed using the tomotopy.LDAModel.infer method.

mdl = tp.LDAModel(k=20)
# add_doc ...
mdl.train(100)
doc_inst = mdl.make_doc(unseen_doc)
topic_dist, ll = mdl.infer(doc_inst)
print("Topic Distribution for Unseen Docs: ", topic_dist)
print("Log-likelihood of inference: ", ll)

The infer method accepts either a single instance of tomotopy.Document or a list of such instances. See tomotopy.LDAModel.infer for more details.
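
As a sketch, several unseen documents can be inferred in a single call by passing a list (unseen_docs below stands for a list of tokenized documents and is assumed here):

mdl = tp.LDAModel(k=20)
# add_doc ...
mdl.train(100)

doc_insts = [mdl.make_doc(words) for words in unseen_docs]
topic_dists, ll = mdl.infer(doc_insts)
for dist in topic_dists:
    print(dist)  # inferred topic distribution of each unseen document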

Corpus and transform

Every topic model in tomotopy has its own internal document type, and a document can be created and added in a form suitable for each model through that model's add_doc method. However, adding the same list of documents to several different models is quite inconvenient, because add_doc has to be called on each model separately for the same documents. Thus, tomotopy provides the tomotopy.utils.Corpus class, which holds a list of documents. A tomotopy.utils.Corpus can be inserted into any model by passing it as the corpus argument to __init__ or to the add_corpus method of each model. Inserting a tomotopy.utils.Corpus has the same effect as inserting the documents the corpus holds.
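
A minimal sketch (with tiny toy documents for brevity) of reusing the same corpus across different models:

import tomotopy as tp
from tomotopy.utils import Corpus

corpus = Corpus()
corpus.add_doc("a b c d e".split())
corpus.add_doc("e f g h i".split())

# pass the corpus to the constructor ...
lda = tp.LDAModel(k=5, corpus=corpus)
# ... or add it to an already constructed model
hdp = tp.HDPModel()
hdp.add_corpus(corpus)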

Some topic models require different data for their documents. For example, tomotopy.DMRModel requires a metadata argument of type str, while tomotopy.PLDAModel requires a labels argument of type List[str]. Since tomotopy.utils.Corpus holds an independent set of documents rather than being tied to a specific topic model, the data attached to the corpus may not match the types required by the model it is added to. In this case, the miscellaneous data can be transformed to fit the target topic model using the transform argument. See the following code for details:

from tomotopy import DMRModel
from tomotopy.utils import Corpus

corpus = Corpus()
corpus.add_doc("a b c d e".split(), a_data=1)
corpus.add_doc("e f g h i".split(), a_data=2)
corpus.add_doc("i j k l m".split(), a_data=3)

model = DMRModel(k=10)
model.add_corpus(corpus)
# The `a_data` field in `corpus` is lost here,
# and the `metadata` field that `DMRModel` requires is filled with the default value, an empty str.

assert model.docs[0].metadata == ''
assert model.docs[1].metadata == ''
assert model.docs[2].metadata == ''

def transform_a_data_to_metadata(misc: dict):
    return {'metadata': str(misc['a_data'])}
# this function transforms `a_data` to `metadata`

model = DMRModel(k=10)
model.add_corpus(corpus, transform=transform_a_data_to_metadata)
# Now the docs in `model` have non-default `metadata`, generated from the `a_data` field.

assert model.docs[0].metadata == '1'
assert model.docs[1].metadata == '2'
assert model.docs[2].metadata == '3'

Parallel Sampling Algorithms

Since version 0.5.0, tomotopy allows you to choose a parallelism algorithm. The algorithm provided in versions prior to 0.4.2 is COPY_MERGE, which is available for all topic models. The new algorithm PARTITION, available since 0.5.0, makes training generally faster and more memory-efficient, but it is not available for all topic models.
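
The algorithm is selected through the parallel parameter of train (and infer); a minimal sketch:

import tomotopy as tp

mdl = tp.LDAModel(k=20)
# add documents into `mdl` ...

# request the PARTITION parallelism algorithm explicitly
mdl.train(100, workers=8, parallel=tp.ParallelScheme.PARTITION)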

The following chart shows the speed difference between the two algorithms based on the number of topics and the number of workers.

https://bab2min.github.io/tomotopy/images/algo_comp.png https://bab2min.github.io/tomotopy/images/algo_comp2.png

Performance by Version

Performance changes by version are shown in the following graphs. The time taken to train an LDA model for 1,000 iterations was measured. (Docs: 11314, Vocab: 60382, Words: 2364724, Intel Xeon Gold 5120 @ 2.2GHz)

https://bab2min.github.io/tomotopy/images/lda-perf-t1.png https://bab2min.github.io/tomotopy/images/lda-perf-t4.png https://bab2min.github.io/tomotopy/images/lda-perf-t8.png

Pinning Topics using Word Priors

Since version 0.6.0, a new method, tomotopy.LDAModel.set_word_prior, has been added. It allows you to control the word prior for each topic. For example, with the following code we can set the weight of the word ‘church’ to 1.0 in topic 0 and to 0.1 in the rest of the topics. This means that the probability of the word ‘church’ being assigned to topic 0 is 10 times higher than the probability of it being assigned to another topic. Therefore, most occurrences of ‘church’ are assigned to topic 0, so topic 0 comes to contain many words related to ‘church’. This allows you to pin certain topics to specific topic numbers.

import tomotopy as tp
mdl = tp.LDAModel(k=20)

# add documents into `mdl`

# setting word prior
mdl.set_word_prior('church', [1.0 if k == 0 else 0.1 for k in range(20)])
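
As a small follow-up sketch, after training you can check that the pinned topic has actually absorbed the related vocabulary:

mdl.train(100)
print(mdl.get_topic_words(0, top_n=10))  # topic 0 should now be dominated by 'church'-related words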

See word_prior_example in example.py for more details.

Examples

You can find example Python code for tomotopy at https://github.com/bab2min/tomotopy/blob/main/examples/ .

You can also get the data file used in the example code at https://drive.google.com/file/d/18OpNijd4iwPyYZ2O7pQoPyeTAKEXa71J/view .

License

tomotopy is licensed under the terms of the MIT License, meaning you can use it for any reasonable purpose and remain in complete ownership of all the documentation you produce.

History

  • 0.12.6 (2023-12-11)
    • New features
      • Added some convenience features to tomotopy.LDAModel.train and tomotopy.LDAModel.set_word_prior.

      • LDAModel.train now has new arguments callback, callback_interval and show_progress to monitor the training progress.

      • LDAModel.set_word_prior now can accept Dict[int, float] type as its argument prior.

  • 0.12.5 (2023-08-03)
    • New features
      • Added support for Linux ARM64 architecture.

  • 0.12.4 (2023-01-22)
    • New features
      • Added support for macOS ARM64 architecture.

    • Bug fixes
      • Fixed an issue where tomotopy.Document.get_sub_topic_dist() raises a bad argument exception.

      • Fixed an issue where exception raising sometimes causes crashes.

  • 0.12.3 (2022-07-19)
    • New features
      • Now, inserting an empty document using tomotopy.LDAModel.add_doc() just ignores it instead of raising an exception. If the newly added argument ignore_empty_words is set to False, an exception is raised as before.

      • tomotopy.HDPModel.purge_dead_topics() method is added to remove non-live topics from the model.

    • Bug fixes
      • Fixed an issue that prevents setting user defined values for nuSq in tomotopy.SLDAModel (by @jucendrero).

      • Fixed an issue where tomotopy.utils.Coherence did not work for tomotopy.DTModel.

      • Fixed an issue that often crashed when calling make_dic() before calling train().

      • Resolved the problem that the results of tomotopy.DMRModel and tomotopy.GDMRModel are different even when the seed is fixed.

      • The parameter optimization process of tomotopy.DMRModel and tomotopy.GDMRModel has been improved.

      • Fixed an issue that sometimes crashed when calling tomotopy.PTModel.copy().

  • 0.12.2 (2021-09-06)
    • An issue where calling convert_to_lda of tomotopy.HDPModel with min_cf > 0, min_df > 0 or rm_top > 0 causes a crash has been fixed.

    • A new argument from_pseudo_doc is added to tomotopy.Document.get_topics and tomotopy.Document.get_topic_dist. This argument is only valid for documents of PTModel; it enables controlling the source used for computing the topic distribution.

    • A default value for argument p of tomotopy.PTModel has been changed. The new default value is k * 10.

    • Using documents generated by make_doc without calling infer no longer causes a crash, but just prints warning messages.

    • An issue where the internal C++ code isn’t compiled at clang c++17 environment has been fixed.

  • 0.12.1 (2021-06-20)
    • An issue where tomotopy.LDAModel.set_word_prior() causes a crash has been fixed.

    • Now tomotopy.LDAModel.perplexity and tomotopy.LDAModel.ll_per_word return the accurate value when TermWeight is not ONE.

    • tomotopy.LDAModel.used_vocab_weighted_freq was added, which returns term-weighted frequencies of words.

    • Now tomotopy.LDAModel.summary() shows not only the entropy of words, but also the entropy of term-weighted words.

  • 0.12.0 (2021-04-26)
    • Now tomotopy.DMRModel and tomotopy.GDMRModel support multiple values of metadata (see https://github.com/bab2min/tomotopy/blob/main/examples/dmr_multi_label.py )

    • The performance of tomotopy.GDMRModel was improved.

    • A copy() method has been added for all topic models to do a deep copy.

    • An issue was fixed where words that are excluded from training (by min_cf, min_df) have incorrect topic id. Now all excluded words have -1 as topic id.

    • Now all exceptions and warnings generated by tomotopy follow standard Python types.

    • Compiler requirements have been raised to C++14.

  • 0.11.1 (2021-03-28)
    • A critical bug of asymmetric alphas was fixed. Due to this bug, version 0.11.0 has been removed from releases.

  • 0.11.0 (2021-03-26) (removed)
    • A new topic model tomotopy.PTModel for short texts was added into the package.

    • An issue was fixed where tomotopy.HDPModel.infer causes a segmentation fault sometimes.

    • A mismatch of numpy API version was fixed.

    • Now asymmetric document-topic priors are supported.

    • Serializing topic models to bytes in memory is supported.

    • An argument normalize was added to get_topic_dist(), get_topic_word_dist() and get_sub_topic_dist() for controlling normalization of results.

    • Now tomotopy.DMRModel.lambdas and tomotopy.DMRModel.alpha give correct values.

    • Categorical metadata support for tomotopy.GDMRModel was added (see https://github.com/bab2min/tomotopy/blob/main/examples/gdmr_both_categorical_and_numerical.py ).

    • Python3.5 support was dropped.

  • 0.10.2 (2021-02-16)
    • An issue was fixed where tomotopy.CTModel.train fails with large K.

    • An issue was fixed where tomotopy.utils.Corpus loses their uid values.

  • 0.10.1 (2021-02-14)
    • An issue was fixed where tomotopy.utils.Corpus.extract_ngrams crashes with empty input.

    • An issue was fixed where tomotopy.LDAModel.infer raises exception with valid input.

    • An issue was fixed where tomotopy.HLDAModel.infer generates wrong tomotopy.Document.path.

    • A new parameter freeze_topics was added to tomotopy.HLDAModel.train, so you can control whether new topics are created during training.

  • 0.10.0 (2020-12-19)
    • The interfaces of tomotopy.utils.Corpus and tomotopy.LDAModel.docs were unified. Now you can access documents in a corpus in the same manner.

    • __getitem__ of tomotopy.utils.Corpus was improved. Indexing by int, by Iterable[int], by slice, and by uid are all supported now.

    • New methods tomotopy.utils.Corpus.extract_ngrams and tomotopy.utils.Corpus.concat_ngrams were added. They extract n-gram collocations using PMI and concatenate them into single words.

    • A new method tomotopy.LDAModel.add_corpus was added, and tomotopy.LDAModel.infer can receive corpus as input.

    • A new module tomotopy.coherence was added. It provides the way to calculate coherence of the model.

    • A parameter window_size was added to tomotopy.label.FoRelevance.

    • An issue was fixed where NaN often occurs when training tomotopy.HDPModel.

    • Now Python3.9 is supported.

    • The dependency on py-cpuinfo was removed and module initialization was improved.

  • 0.9.1 (2020-08-08)
    • Memory leaks in version 0.9.0 were fixed.

    • tomotopy.CTModel.summary() was fixed.

  • 0.9.0 (2020-08-04)
    • The tomotopy.LDAModel.summary() method, which prints human-readable summary of the model, has been added.

    • The random number generator of the package has been replaced with [EigenRand]. It speeds up random number generation and eliminates result differences between platforms.

    • Due to the above, even if the seed is the same, training results may differ from versions before 0.9.0.

    • Fixed a training error in tomotopy.HDPModel.

    • tomotopy.DMRModel.alpha now shows Dirichlet prior of per-document topic distribution by metadata.

    • tomotopy.DTModel.get_count_by_topics() has been modified to return a 2-dimensional ndarray.

    • tomotopy.DTModel.alpha has been modified to return the same value as tomotopy.DTModel.get_alpha().

    • Fixed an issue where the metadata value could not be obtained for the document of tomotopy.GDMRModel.

    • tomotopy.HLDAModel.alpha now shows Dirichlet prior of per-document depth distribution.

    • tomotopy.LDAModel.global_step has been added.

    • tomotopy.MGLDAModel.get_count_by_topics() now returns the word count for both global and local topics.

    • tomotopy.PAModel.alpha, tomotopy.PAModel.subalpha, and tomotopy.PAModel.get_count_by_super_topic() have been added.

[EigenRand]: https://github.com/bab2min/EigenRand

  • 0.8.2 (2020-07-14)
    • New properties tomotopy.DTModel.num_timepoints and tomotopy.DTModel.num_docs_by_timepoint have been added.

    • A bug which caused different results on different platforms even if seeds were the same was partially fixed. As a result of this fix, 32-bit builds of tomotopy now yield different training results from earlier versions.

  • 0.8.1 (2020-06-08)
    • A bug where tomotopy.LDAModel.used_vocabs returned an incorrect value was fixed.

    • Now tomotopy.CTModel.prior_cov returns a covariance matrix with shape [k, k].

    • Now tomotopy.CTModel.get_correlations with empty arguments returns a correlation matrix with shape [k, k].

  • 0.8.0 (2020-06-06)
    • Since NumPy was introduced into tomotopy, many methods and properties now return numpy.ndarray instead of list.

    • Tomotopy has a new dependency NumPy >= 1.10.0.

    • A wrong estimation of tomotopy.HDPModel.infer was fixed.

    • A new method for converting an HDPModel to an LDAModel was added.

    • New properties including tomotopy.LDAModel.used_vocabs, tomotopy.LDAModel.used_vocab_freq and tomotopy.LDAModel.used_vocab_df were added into topic models.

    • A new g-DMR topic model (tomotopy.GDMRModel) was added.

    • An error at initializing tomotopy.label.FoRelevance in macOS was fixed.

    • An error that occurred when using a tomotopy.utils.Corpus created without raw parameters was fixed.

  • 0.7.1 (2020-05-08)
    • tomotopy.Document.path was added for tomotopy.HLDAModel.

    • A memory corruption bug in tomotopy.label.PMIExtractor was fixed.

    • A compile error in gcc 7 was fixed.

  • 0.7.0 (2020-04-18)
    • tomotopy.DTModel was added into the package.

    • A bug in tomotopy.utils.Corpus.save was fixed.

    • A new method tomotopy.Document.get_count_vector was added into Document class.

    • Now Linux distributions use manylinux2010 and an additional optimization is applied.

  • 0.6.2 (2020-03-28)
    • A critical bug related to save and load was fixed. Version 0.6.0 and 0.6.1 have been removed from releases.

  • 0.6.1 (2020-03-22) (removed)
    • A bug related to module loading was fixed.

  • 0.6.0 (2020-03-22) (removed)
    • tomotopy.utils.Corpus class that manages multiple documents easily was added.

    • tomotopy.LDAModel.set_word_prior method that controls word-topic priors of topic models was added.

    • A new argument min_df that filters words based on document frequency was added into every topic model’s __init__.

    • tomotopy.label, the submodule about topic labeling was added. Currently, only tomotopy.label.FoRelevance is provided.

  • 0.5.2 (2020-03-01)
    • A segmentation fault problem was fixed in tomotopy.LLDAModel.add_doc.

    • A bug where infer of tomotopy.HDPModel sometimes crashed the program was fixed.

    • A crash issue of tomotopy.LDAModel.infer with ps=tomotopy.ParallelScheme.PARTITION, together=True was fixed.

  • 0.5.1 (2020-01-11)
    • A bug where tomotopy.SLDAModel.make_doc didn't support missing values for y was fixed.

    • Now tomotopy.SLDAModel fully supports missing values for response variables y. Documents with missing values (NaN) are included in topic modeling, but excluded from the regression of response variables.

  • 0.5.0 (2019-12-30)
    • Now tomotopy.PAModel.infer returns both the topic distribution and the sub-topic distribution.

    • New methods get_sub_topics and get_sub_topic_dist were added into tomotopy.Document. (for PAModel)

    • New parameter parallel was added for tomotopy.LDAModel.train and tomotopy.LDAModel.infer method. You can select parallelism algorithm by changing this parameter.

    • tomotopy.ParallelScheme.PARTITION, a new algorithm, was added. It works efficiently when the number of workers is large, or when the number of topics or the size of the vocabulary is big.

    • A bug where rm_top didn’t work at min_cf < 2 was fixed.

  • 0.4.2 (2019-11-30)
    • Wrong topic assignments of tomotopy.LLDAModel and tomotopy.PLDAModel were fixed.

    • Readable __repr__ of tomotopy.Document and tomotopy.Dictionary was implemented.

  • 0.4.1 (2019-11-27)
    • A bug in the init function of tomotopy.PLDAModel was fixed.

  • 0.4.0 (2019-11-18)
    • New models including tomotopy.PLDAModel and tomotopy.HLDAModel were added into the package.

  • 0.3.1 (2019-11-05)
    • An issue where get_topic_dist() returned incorrect values when min_cf or rm_top was set was fixed.

    • The return value of get_topic_dist() of tomotopy.MGLDAModel document was fixed to include local topics.

    • The estimation speed with tw=ONE was improved.

  • 0.3.0 (2019-10-06)
    • A new model, tomotopy.LLDAModel was added into the package.

    • A crashing issue of HDPModel was fixed.

    • Since hyperparameter estimation for HDPModel was implemented, the result of HDPModel may differ from previous versions.

      If you want to turn off hyperparameter estimation of HDPModel, set optim_interval to zero.

  • 0.2.0 (2019-08-18)
    • New models including tomotopy.CTModel and tomotopy.SLDAModel were added into the package.

    • A new parameter option rm_top was added for all topic models.

    • The problems in save and load method for PAModel and HPAModel were fixed.

    • An occasional crash in loading HDPModel was fixed.

    • The problem that ll_per_word was calculated incorrectly when min_cf > 0 was fixed.

  • 0.1.6 (2019-08-09)
    • Compiling errors at clang with macOS environment were fixed.

  • 0.1.4 (2019-08-05)
    • The issue when add_doc receives an empty list as input was fixed.

    • An issue where tomotopy.PAModel.get_topic_words didn't extract the word distribution of subtopics was fixed.

  • 0.1.3 (2019-05-19)
    • The parameter min_cf and its stopword-removing function were added for all topic models.

  • 0.1.0 (2019-05-12)
    • First version of tomotopy
