Tomoto, Topic Modeling Tool for Python

Project description

What is tomotopy?

tomotopy is a Python extension of tomoto (Topic Modeling Tool), a Gibbs-sampling based topic model library written in C++. It utilizes vectorization on modern CPUs to maximize speed. The current version of tomoto supports several major topic models, including:

  • Latent Dirichlet Allocation (tomotopy.LDAModel)

  • Labeled LDA (tomotopy.LLDAModel)

  • Partially Labeled LDA (tomotopy.PLDAModel)

  • Supervised LDA (tomotopy.SLDAModel)

  • Dirichlet Multinomial Regression (tomotopy.DMRModel)

  • Generalized Dirichlet Multinomial Regression (tomotopy.GDMRModel)

  • Hierarchical Dirichlet Process (tomotopy.HDPModel)

  • Hierarchical LDA (tomotopy.HLDAModel)

  • Multi Grain LDA (tomotopy.MGLDAModel)

  • Pachinko Allocation (tomotopy.PAModel)

  • Hierarchical PA (tomotopy.HPAModel)

  • Correlated Topic Model (tomotopy.CTModel)

  • Dynamic Topic Model (tomotopy.DTModel)

  • Pseudo-document based Topic Model (tomotopy.PTModel).

Getting Started

You can install tomotopy easily using pip. (https://pypi.org/project/tomotopy/)

$ pip install --upgrade pip
$ pip install tomotopy

The supported OS and Python versions are:

  • Linux (x86-64) with Python >= 3.6

  • macOS >= 10.13 with Python >= 3.6

  • Windows 7 or later (x86, x86-64) with Python >= 3.6

  • Other OS with Python >= 3.6: Compilation from source code required (with a C++14-compatible compiler)

After installing, you can start using tomotopy by simply importing it.

import tomotopy as tp
print(tp.isa) # prints 'avx2', 'avx', 'sse2' or 'none'

Currently, tomotopy can exploit the AVX2, AVX, or SSE2 SIMD instruction sets to maximize performance. When the package is imported, it checks the available instruction sets and selects the best option. If tp.isa reports none, training iterations may take a long time. But since most modern Intel and AMD CPUs provide a SIMD instruction set, SIMD acceleration usually brings a big improvement.

Here is sample code for simple LDA training on texts from the ‘sample.txt’ file.

import tomotopy as tp
mdl = tp.LDAModel(k=20)
for line in open('sample.txt'):
    mdl.add_doc(line.strip().split())

for i in range(0, 100, 10):
    mdl.train(10)
    print('Iteration: {}\tLog-likelihood: {}'.format(i, mdl.ll_per_word))

for k in range(mdl.k):
    print('Top 10 words of topic #{}'.format(k))
    print(mdl.get_topic_words(k, top_n=10))

mdl.summary()

Performance of tomotopy

tomotopy uses Collapsed Gibbs Sampling (CGS) to infer the distribution of topics and the distribution of words. Generally, CGS converges more slowly than the Variational Bayes (VB) used by [gensim’s LdaModel], but each of its iterations can be computed much faster. In addition, tomotopy can take advantage of multicore CPUs and SIMD instruction sets, which results in faster iterations.

[gensim’s LdaModel]: https://radimrehurek.com/gensim/models/ldamodel.html

The following charts compare the running time of an LDA model in tomotopy and gensim. The input data consists of 1,000 random documents from English Wikipedia with 1,506,966 words (about 10.1 MB). tomotopy runs 200 training iterations and gensim runs 10 iterations.

https://bab2min.github.io/tomotopy/images/tmt_i5.png

↑ Performance in Intel i5-6600, x86-64 (4 cores)

https://bab2min.github.io/tomotopy/images/tmt_xeon.png

↑ Performance in Intel Xeon E5-2620 v4, x86-64 (8 cores, 16 threads)

https://bab2min.github.io/tomotopy/images/tmt_r7_3700x.png

↑ Performance in AMD Ryzen7 3700X, x86-64 (8 cores, 16 threads)

Although tomotopy ran 20 times as many iterations, its overall running time was 5 to 10 times shorter than gensim’s, and it yields stable results.

It is difficult to compare CGS and VB directly because they are totally different techniques. But from a practical point of view, we can compare their speed and results. The following chart shows the log-likelihood per word of the two models’ results.

https://bab2min.github.io/tomotopy/images/LLComp.png

The SIMD instruction set has a great effect on performance. The following is a comparison between SIMD instruction sets.

https://bab2min.github.io/tomotopy/images/SIMDComp.png

Fortunately, most recent x86-64 CPUs provide the AVX2 instruction set, so we can enjoy the performance benefits of AVX2.

Vocabulary controlling using CF and DF

CF (collection frequency) and DF (document frequency) are concepts from information retrieval: CF is the total number of times a word appears in the corpus, and DF is the number of documents in the corpus in which the word appears. tomotopy exposes these two measures through the parameters min_cf and min_df to trim low-frequency words when building the corpus.

For example, let’s say we have 5 documents #0 ~ #4 which are composed of the following words:

#0 : a, b, c, d, e, c
#1 : a, b, e, f
#2 : c, d, c
#3 : a, e, f, g
#4 : a, b, g

The CF of a and the CF of c are both 4, because each appears 4 times in the entire corpus. But the DF of a is 4 and the DF of c is 2, because a appears in #0, #1, #3 and #4, while c appears only in #0 and #2. So if we trim low-frequency words using min_cf=3, the result is as follows:

(d, f and g are removed.)
#0 : a, b, c, e, c
#1 : a, b, e
#2 : c, c
#3 : a, e
#4 : a, b

However, when min_df=3, the result is:

(c, d, f and g are removed.)
#0 : a, b, e
#1 : a, b, e
#2 : (empty doc)
#3 : a, e
#4 : a, b

As we can see, min_df is a stronger criterion than min_cf. In topic modeling, words that appear repeatedly in only one document do not contribute to estimating the topic-word distribution, so removing words with a low DF is a good way to reduce model size while preserving the results of the final model. In short, prefer min_df over min_cf.
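
Below is a minimal sketch (assuming a whitespace-tokenized ‘sample.txt’ as in the earlier examples) of trimming the vocabulary by passing min_df when constructing the model; min_cf can be passed in the same way.

import tomotopy as tp

# keep only words that appear in at least 3 distinct documents;
# min_cf=3 would instead require at least 3 occurrences in total
mdl = tp.LDAModel(k=20, min_df=3)
for line in open('sample.txt'):
    mdl.add_doc(line.strip().split())

mdl.train(100)
# vocabulary that survived the trimming
print('Used vocabulary size:', len(mdl.used_vocabs))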

Model Save and Load

tomotopy provides save and load methods for each topic model class, so you can save the model to a file whenever you want and re-load it from that file.

import tomotopy as tp

mdl = tp.HDPModel()
for line in open('sample.txt'):
    mdl.add_doc(line.strip().split())

for i in range(0, 100, 10):
    mdl.train(10)
    print('Iteration: {}\tLog-likelihood: {}'.format(i, mdl.ll_per_word))

# save into file
mdl.save('sample_hdp_model.bin')

# load from file
mdl = tp.HDPModel.load('sample_hdp_model.bin')
for k in range(mdl.k):
    if not mdl.is_live_topic(k): continue
    print('Top 10 words of topic #{}'.format(k))
    print(mdl.get_topic_words(k, top_n=10))

# the saved model is an HDP model,
# so loading it as an LDA model will raise an exception
mdl = tp.LDAModel.load('sample_hdp_model.bin')

When you load a model from a file, the model type stored in the file must match the class whose load method you call.

See more at tomotopy.LDAModel.save and tomotopy.LDAModel.load methods.

Documents in the Model and out of the Model

We can use topic models for two major purposes. The basic one is to discover topics from a set of documents as the result of training a model, and the more advanced one is to infer topic distributions for unseen documents using a trained model.

We call a document used for the former purpose (model training) a document in the model, and a document used for the latter purpose (unseen during training) a document out of the model.

In tomotopy, these two kinds of documents are generated differently. A document in the model is created by the tomotopy.LDAModel.add_doc method. add_doc can only be called before tomotopy.LDAModel.train starts. In other words, once train has been called, add_doc cannot add a document to the model, because the set of documents used for training has become fixed.

To acquire an instance of a created document, use tomotopy.LDAModel.docs like this:

mdl = tp.LDAModel(k=20)
idx = mdl.add_doc(words)
if idx < 0: raise RuntimeError("Failed to add doc")
doc_inst = mdl.docs[idx]
# doc_inst is an instance of the added document

A document out of the model is generated by the tomotopy.LDAModel.make_doc method. make_doc can be called only after train starts. If you use make_doc before the set of documents used for training has become fixed, you may get wrong results. Since make_doc returns the instance directly, you can use its return value for further manipulation.

mdl = tp.LDAModel(k=20)
# add_doc ...
mdl.train(100)
doc_inst = mdl.make_doc(unseen_doc) # doc_inst is an instance of the unseen document

Inference for Unseen Documents

If a new document is created by tomotopy.LDAModel.make_doc, its topic distribution can be inferred by the model. Inference for unseen documents is performed with the tomotopy.LDAModel.infer method.

mdl = tp.LDAModel(k=20)
# add_doc ...
mdl.train(100)
doc_inst = mdl.make_doc(unseen_doc)
topic_dist, ll = mdl.infer(doc_inst)
print("Topic Distribution for Unseen Docs: ", topic_dist)
print("Log-likelihood of inference: ", ll)

The infer method can take either a single instance of tomotopy.Document or a list of such instances, as in the sketch below. See more at tomotopy.LDAModel.infer.
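
For instance, a rough sketch of batch inference over several unseen documents (the token lists below are made up for illustration):

mdl = tp.LDAModel(k=20)
# add_doc ... and train as above
mdl.train(100)

# hypothetical unseen documents, already tokenized
unseen_docs = [['some', 'new', 'words'], ['another', 'unseen', 'document']]
doc_insts = [mdl.make_doc(words) for words in unseen_docs]

# passing a list of Document instances infers them all at once
result, ll = mdl.infer(doc_insts)
for topic_dist in result:
    print(topic_dist)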

Corpus and transform

Every topic model in tomotopy has its own internal document type. A document in the form suitable for each model can be created and added through that model’s add_doc method. However, adding the same list of documents to several different models becomes quite inconvenient, because add_doc has to be called once per model for the same list of documents. Thus, tomotopy provides the tomotopy.utils.Corpus class, which holds a list of documents. A tomotopy.utils.Corpus can be inserted into any model by passing it as the corpus argument to __init__ or to each model’s add_corpus method, as in the sketch below. Inserting a tomotopy.utils.Corpus has the same effect as inserting the documents the corpus holds.
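
As a minimal sketch (again assuming a whitespace-tokenized ‘sample.txt’), a corpus can be built once and reused for several models:

import tomotopy as tp

corpus = tp.utils.Corpus()
for line in open('sample.txt'):
    corpus.add_doc(line.strip().split())

# the same corpus can be passed at construction time ...
lda = tp.LDAModel(k=20, corpus=corpus)

# ... or added to an already constructed model
hdp = tp.HDPModel()
hdp.add_corpus(corpus)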

Some topic models require different data for their documents. For example, tomotopy.DMRModel requires a metadata argument of type str, while tomotopy.PLDAModel requires a labels argument of type List[str]. Since tomotopy.utils.Corpus holds an independent set of documents rather than being tied to a specific topic model, the data it carries may not match the types required by the topic model a corpus is added to. In this case, the miscellaneous data can be transformed to fit the target topic model using the transform argument. See the following code for more details:

from tomotopy import DMRModel
from tomotopy.utils import Corpus

corpus = Corpus()
corpus.add_doc("a b c d e".split(), a_data=1)
corpus.add_doc("e f g h i".split(), a_data=2)
corpus.add_doc("i j k l m".split(), a_data=3)

model = DMRModel(k=10)
model.add_corpus(corpus)
# The `a_data` field in `corpus` is lost here,
# and the `metadata` that `DMRModel` requires is filled with its default value, the empty str.

assert model.docs[0].metadata == ''
assert model.docs[1].metadata == ''
assert model.docs[2].metadata == ''

def transform_a_data_to_metadata(misc: dict):
    return {'metadata': str(misc['a_data'])}
# this function transforms `a_data` to `metadata`

model = DMRModel(k=10)
model.add_corpus(corpus, transform=transform_a_data_to_metadata)
# Now the docs in `model` have non-default `metadata`, generated from the `a_data` field.

assert model.docs[0].metadata == '1'
assert model.docs[1].metadata == '2'
assert model.docs[2].metadata == '3'

Parallel Sampling Algorithms

Since version 0.5.0, tomotopy allows you to choose a parallelism algorithm. The algorithm provided up to version 0.4.2 is COPY_MERGE, which is available for all topic models. The new algorithm PARTITION, available since 0.5.0, makes training generally faster and more memory-efficient, but it is not available for all topic models.
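
The algorithm is selected through the parallel parameter of tomotopy.LDAModel.train (and infer), introduced in 0.5.0 (see the history below). A minimal sketch, assuming documents have already been added:

import tomotopy as tp

mdl = tp.LDAModel(k=20)
# add documents into `mdl` ...

# explicitly request the PARTITION algorithm
mdl.train(100, workers=4, parallel=tp.ParallelScheme.PARTITION)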

The following chart shows the speed difference between the two algorithms based on the number of topics and the number of workers.

https://bab2min.github.io/tomotopy/images/algo_comp.png https://bab2min.github.io/tomotopy/images/algo_comp2.png

Performance by Version

Performance changes by version are shown in the following graphs. The time taken to train an LDA model for 1,000 iterations was measured. (Docs: 11,314, Vocab: 60,382, Words: 2,364,724, Intel Xeon Gold 5120 @ 2.2 GHz)

https://bab2min.github.io/tomotopy/images/lda-perf-t1.png https://bab2min.github.io/tomotopy/images/lda-perf-t4.png https://bab2min.github.io/tomotopy/images/lda-perf-t8.png

Pinning Topics using Word Priors

Since version 0.6.0, a new method tomotopy.LDAModel.set_word_prior has been added. It allows you to control the word prior for each topic. For example, with the following code we can set the weight of the word ‘church’ to 1.0 in topic 0 and to 0.1 in the rest of the topics. This means that the probability that the word ‘church’ is assigned to topic 0 is 10 times higher than the probability of it being assigned to another topic. Therefore, most occurrences of ‘church’ are assigned to topic 0, so topic 0 comes to contain many words related to ‘church’. This allows you to pin certain topics to specific topic numbers.

import tomotopy as tp
mdl = tp.LDAModel(k=20)

# add documents into `mdl`

# setting word prior
mdl.set_word_prior('church', [1.0 if k == 0 else 0.1 for k in range(20)])

See word_prior_example in example.py for more details.

Examples

You can find example Python code for tomotopy at https://github.com/bab2min/tomotopy/blob/main/examples/ .

You can also get the data file used in the example code at https://drive.google.com/file/d/18OpNijd4iwPyYZ2O7pQoPyeTAKEXa71J/view .

License

tomotopy is licensed under the terms of the MIT License, meaning you can use it for any reasonable purpose and remain in complete ownership of all the documentation you produce.

History

  • 0.13.0 (2024-08-05)
    • New features
      • Major features of the Topic Model Viewer tomotopy.viewer.open_viewer() are now ready.

      • tomotopy.LDAModel.get_hash() is added. You can get a 128-bit hash value of the model.

      • Add an argument ngram_list to tomotopy.utils.SimpleTokenizer.

    • Bug fixes
      • Fixed a bug causing inconsistent spans after Corpus.concat_ngrams is called.

      • Optimized the bottleneck of tomotopy.LDAModel.load() and tomotopy.LDAModel.save(), improving their speed by more than 10 times.

  • 0.12.7 (2023-12-19)
    • New features
      • Added Topic Model Viewer tomotopy.viewer.open_viewer()

      • Optimized the performance of tomotopy.utils.Corpus.process()

    • Bug fixes
      • Document.span now returns the ranges in character unit, not in byte unit.

  • 0.12.6 (2023-12-11)
    • New features
      • Added some convenience features to tomotopy.LDAModel.train and tomotopy.LDAModel.set_word_prior.

      • LDAModel.train now has new arguments callback, callback_interval and show_progress to monitor the training progress.

      • LDAModel.set_word_prior now can accept Dict[int, float] type as its argument prior.

  • 0.12.5 (2023-08-03)
    • New features
      • Added support for Linux ARM64 architecture.

  • 0.12.4 (2023-01-22)
    • New features
      • Added support for macOS ARM64 architecture.

    • Bug fixes
      • Fixed an issue where tomotopy.Document.get_sub_topic_dist() raises a bad argument exception.

      • Fixed an issue where exception raising sometimes causes crashes.

  • 0.12.3 (2022-07-19)
    • New features
      • Now, inserting an empty document using tomotopy.LDAModel.add_doc() is simply ignored instead of raising an exception. If the newly added argument ignore_empty_words is set to False, an exception is raised as before.

      • tomotopy.HDPModel.purge_dead_topics() method is added to remove non-live topics from the model.

    • Bug fixes
      • Fixed an issue that prevents setting user defined values for nuSq in tomotopy.SLDAModel (by @jucendrero).

      • Fixed an issue where tomotopy.utils.Coherence did not work for tomotopy.DTModel.

      • Fixed an issue that often caused a crash when calling make_doc() before calling train().

      • Resolved a problem where the results of tomotopy.DMRModel and tomotopy.GDMRModel differed even when the seed was fixed.

      • The parameter optimization process of tomotopy.DMRModel and tomotopy.GDMRModel has been improved.

      • Fixed an issue that sometimes crashed when calling tomotopy.PTModel.copy().

  • 0.12.2 (2021-09-06)
    • An issue where calling convert_to_lda of tomotopy.HDPModel with min_cf > 0, min_df > 0 or rm_top > 0 causes a crash has been fixed.

    • A new argument from_pseudo_doc is added to tomotopy.Document.get_topics and tomotopy.Document.get_topic_dist. This argument is only valid for documents of PTModel; it controls the source used for computing the topic distribution.

    • A default value for argument p of tomotopy.PTModel has been changed. The new default value is k * 10.

    • Using documents generated by make_doc without calling infer doesn’t cause a crash anymore, but just prints warning messages.

    • An issue where the internal C++ code failed to compile in a clang C++17 environment has been fixed.

  • 0.12.1 (2021-06-20)
    • An issue where tomotopy.LDAModel.set_word_prior() causes a crash has been fixed.

    • Now tomotopy.LDAModel.perplexity and tomotopy.LDAModel.ll_per_word return the accurate value when TermWeight is not ONE.

    • tomotopy.LDAModel.used_vocab_weighted_freq was added, which returns term-weighted frequencies of words.

    • Now tomotopy.LDAModel.summary() shows not only the entropy of words, but also the entropy of term-weighted words.

  • 0.12.0 (2021-04-26)
    • Now tomotopy.DMRModel and tomotopy.GDMRModel support multiple values of metadata (see https://github.com/bab2min/tomotopy/blob/main/examples/dmr_multi_label.py )

    • The performance of tomotopy.GDMRModel was improved.

    • A copy() method has been added for all topic models to do a deep copy.

    • An issue was fixed where words excluded from training (by min_cf, min_df) had incorrect topic ids. Now all excluded words have -1 as their topic id.

    • Now all exceptions and warnings generated by tomotopy follow standard Python types.

    • Compiler requirements have been raised to C++14.

  • 0.11.1 (2021-03-28)
    • A critical bug of asymmetric alphas was fixed. Due to this bug, version 0.11.0 has been removed from releases.

  • 0.11.0 (2021-03-26) (removed)
    • A new topic model tomotopy.PTModel for short texts was added into the package.

    • An issue was fixed where tomotopy.HDPModel.infer sometimes caused a segmentation fault.

    • A mismatch of numpy API version was fixed.

    • Now asymmetric document-topic priors are supported.

    • Serializing topic models to bytes in memory is supported.

    • An argument normalize was added to get_topic_dist(), get_topic_word_dist() and get_sub_topic_dist() for controlling normalization of results.

    • Now tomotopy.DMRModel.lambdas and tomotopy.DMRModel.alpha give correct values.

    • Categorical metadata supports for tomotopy.GDMRModel were added (see https://github.com/bab2min/tomotopy/blob/main/examples/gdmr_both_categorical_and_numerical.py ).

    • Python3.5 support was dropped.

  • 0.10.2 (2021-02-16)
    • An issue was fixed where tomotopy.CTModel.train fails with large K.

    • An issue was fixed where tomotopy.utils.Corpus lost its uid values.

  • 0.10.1 (2021-02-14)
    • An issue was fixed where tomotopy.utils.Corpus.extract_ngrams crashed with empty input.

    • An issue was fixed where tomotopy.LDAModel.infer raised an exception on valid input.

    • An issue was fixed where tomotopy.HLDAModel.infer generated a wrong tomotopy.Document.path.

    • A new parameter freeze_topics was added to tomotopy.HLDAModel.train, so you can control whether new topics are created during training.

  • 0.10.0 (2020-12-19)
    • The interfaces of tomotopy.utils.Corpus and tomotopy.LDAModel.docs were unified. Now you can access documents in a corpus in the same manner.

    • __getitem__ of tomotopy.utils.Corpus was improved. Indexing by int, by Iterable[int] and by slice are all supported, as is indexing by uid.

    • New methods tomotopy.utils.Corpus.extract_ngrams and tomotopy.utils.Corpus.concat_ngrams were added. They extract n-gram collocations using PMI and concatenate them into single words.

    • A new method tomotopy.LDAModel.add_corpus was added, and tomotopy.LDAModel.infer can receive corpus as input.

    • A new module tomotopy.coherence was added. It provides a way to calculate the coherence of the model.

    • A parameter window_size was added to tomotopy.label.FoRelevance.

    • An issue was fixed where NaN often occurs when training tomotopy.HDPModel.

    • Now Python3.9 is supported.

    • The dependency on py-cpuinfo was removed and the initialization of the module was improved.

  • 0.9.1 (2020-08-08)
    • Memory leaks in version 0.9.0 were fixed.

    • tomotopy.CTModel.summary() was fixed.

  • 0.9.0 (2020-08-04)
    • The tomotopy.LDAModel.summary() method, which prints a human-readable summary of the model, has been added.

    • The random number generator of the package has been replaced with [EigenRand]. It speeds up random number generation and resolves the differences in results between platforms.

    • Due to this, even with the same seed, model training results may differ from versions before 0.9.0.

    • Fixed a training error in tomotopy.HDPModel.

    • tomotopy.DMRModel.alpha now shows Dirichlet prior of per-document topic distribution by metadata.

    • tomotopy.DTModel.get_count_by_topics() has been modified to return a 2-dimensional ndarray.

    • tomotopy.DTModel.alpha has been modified to return the same value as tomotopy.DTModel.get_alpha().

    • Fixed an issue where the metadata value could not be obtained for the document of tomotopy.GDMRModel.

    • tomotopy.HLDAModel.alpha now shows Dirichlet prior of per-document depth distribution.

    • tomotopy.LDAModel.global_step has been added.

    • tomotopy.MGLDAModel.get_count_by_topics() now returns the word count for both global and local topics.

    • tomotopy.PAModel.alpha, tomotopy.PAModel.subalpha, and tomotopy.PAModel.get_count_by_super_topic() have been added.

[EigenRand]: https://github.com/bab2min/EigenRand

  • 0.8.2 (2020-07-14)
    • New properties tomotopy.DTModel.num_timepoints and tomotopy.DTModel.num_docs_by_timepoint have been added.

    • A bug which caused different results on different platforms even with the same seed was partially fixed. As a result of this fix, tomotopy on 32-bit now yields different training results from earlier versions.

  • 0.8.1 (2020-06-08)
    • A bug where tomotopy.LDAModel.used_vocabs returned an incorrect value was fixed.

    • Now tomotopy.CTModel.prior_cov returns a covariance matrix with shape [k, k].

    • Now tomotopy.CTModel.get_correlations with empty arguments returns a correlation matrix with shape [k, k].

  • 0.8.0 (2020-06-06)
    • Since NumPy has been introduced into tomotopy, many methods and properties now return numpy.ndarray instead of list.

    • tomotopy has a new dependency, NumPy >= 1.10.0.

    • A wrong estimation of tomotopy.HDPModel.infer was fixed.

    • A new method for converting an HDPModel to an LDAModel was added.

    • New properties including tomotopy.LDAModel.used_vocabs, tomotopy.LDAModel.used_vocab_freq and tomotopy.LDAModel.used_vocab_df were added into topic models.

    • A new g-DMR topic model (tomotopy.GDMRModel) was added.

    • An error when initializing tomotopy.label.FoRelevance on macOS was fixed.

    • An error that occurred when using a tomotopy.utils.Corpus created without raw parameters was fixed.

  • 0.7.1 (2020-05-08)
    • tomotopy.Document.path was added for tomotopy.HLDAModel.

    • A memory corruption bug in tomotopy.label.PMIExtractor was fixed.

    • A compile error in gcc 7 was fixed.

  • 0.7.0 (2020-04-18)
    • tomotopy.DTModel was added into the package.

    • A bug in tomotopy.utils.Corpus.save was fixed.

    • A new method tomotopy.Document.get_count_vector was added into Document class.

    • Now Linux distributions use manylinux2010 and an additional optimization is applied.

  • 0.6.2 (2020-03-28)
    • A critical bug related to save and load was fixed. Version 0.6.0 and 0.6.1 have been removed from releases.

  • 0.6.1 (2020-03-22) (removed)
    • A bug related to module loading was fixed.

  • 0.6.0 (2020-03-22) (removed)
    • The tomotopy.utils.Corpus class, which makes managing multiple documents easy, was added.

    • The tomotopy.LDAModel.set_word_prior method, which controls the word-topic priors of topic models, was added.

    • A new argument min_df that filters words based on document frequency was added into every topic model’s __init__.

    • tomotopy.label, the submodule about topic labeling was added. Currently, only tomotopy.label.FoRelevance is provided.

  • 0.5.2 (2020-03-01)
    • A segmentation fault problem was fixed in tomotopy.LLDAModel.add_doc.

    • A bug was fixed where infer of tomotopy.HDPModel sometimes crashed the program.

    • A crash issue in tomotopy.LDAModel.infer with ps=tomotopy.ParallelScheme.PARTITION, together=True was fixed.

  • 0.5.1 (2020-01-11)
    • A bug was fixed where tomotopy.SLDAModel.make_doc didn’t support missing values for y.

    • Now tomotopy.SLDAModel fully supports missing values for response variables y. Documents with missing values (NaN) are included in topic modeling, but excluded from the regression of response variables.

  • 0.5.0 (2019-12-30)
    • Now tomotopy.PAModel.infer returns both the topic distribution and the sub-topic distribution.

    • New methods get_sub_topics and get_sub_topic_dist were added into tomotopy.Document. (for PAModel)

    • A new parameter parallel was added to the tomotopy.LDAModel.train and tomotopy.LDAModel.infer methods. You can select the parallelism algorithm by changing this parameter.

    • tomotopy.ParallelScheme.PARTITION, a new algorithm, was added. It works efficiently when the number of workers is large and the number of topics or the size of the vocabulary is big.

    • A bug where rm_top didn’t work at min_cf < 2 was fixed.

  • 0.4.2 (2019-11-30)
    • Wrong topic assignments of tomotopy.LLDAModel and tomotopy.PLDAModel were fixed.

    • Readable __repr__ of tomotopy.Document and tomotopy.Dictionary was implemented.

  • 0.4.1 (2019-11-27)
    • A bug in the init function of tomotopy.PLDAModel was fixed.

  • 0.4.0 (2019-11-18)
    • New models including tomotopy.PLDAModel and tomotopy.HLDAModel were added into the package.

  • 0.3.1 (2019-11-05)
    • An issue where get_topic_dist() returned an incorrect value when min_cf or rm_top was set was fixed.

    • The return value of get_topic_dist() of tomotopy.MGLDAModel document was fixed to include local topics.

    • The estimation speed with tw=ONE was improved.

  • 0.3.0 (2019-10-06)
    • A new model, tomotopy.LLDAModel was added into the package.

    • A crashing issue of HDPModel was fixed.

    • Since hyperparameter estimation for HDPModel was implemented, the result of HDPModel may differ from previous versions.

      If you want to turn off hyperparameter estimation of HDPModel, set optim_interval to zero.

  • 0.2.0 (2019-08-18)
    • New models including tomotopy.CTModel and tomotopy.SLDAModel were added into the package.

    • A new parameter option rm_top was added for all topic models.

    • The problems in the save and load methods for PAModel and HPAModel were fixed.

    • An occasional crash in loading HDPModel was fixed.

    • The problem that ll_per_word was calculated incorrectly when min_cf > 0 was fixed.

  • 0.1.6 (2019-08-09)
    • Compilation errors with clang in the macOS environment were fixed.

  • 0.1.4 (2019-08-05)
    • The issue where add_doc received an empty list as input was fixed.

    • The issue where tomotopy.PAModel.get_topic_words didn’t extract the word distribution of subtopics was fixed.

  • 0.1.3 (2019-05-19)
    • The parameter min_cf and its stopword-removing function were added for all topic models.

  • 0.1.0 (2019-05-12)
    • First version of tomotopy
