
Tomoto, Topic Modeling Tool for Python


What is tomotopy?

tomotopy is a Python extension of tomoto (Topic Modeling Tool), a Gibbs-sampling based topic model library written in C++. It utilizes the vectorization capabilities of modern CPUs to maximize speed. The current version of tomoto supports several major topic models, including:

  • Latent Dirichlet Allocation (tomotopy.LDAModel)

  • Labeled LDA (tomotopy.LLDAModel)

  • Partially Labeled LDA (tomotopy.PLDAModel)

  • Supervised LDA (tomotopy.SLDAModel)

  • Dirichlet Multinomial Regression (tomotopy.DMRModel)

  • Generalized Dirichlet Multinomial Regression (tomotopy.GDMRModel)

  • Hierarchical Dirichlet Process (tomotopy.HDPModel)

  • Hierarchical LDA (tomotopy.HLDAModel)

  • Multi Grain LDA (tomotopy.MGLDAModel)

  • Pachinko Allocation (tomotopy.PAModel)

  • Hierarchical PA (tomotopy.HPAModel)

  • Correlated Topic Model (tomotopy.CTModel)

  • Dynamic Topic Model (tomotopy.DTModel)

  • Pseudo-document based Topic Model (tomotopy.PTModel).


Getting Started

You can install tomotopy easily using pip (https://pypi.org/project/tomotopy/):

$ pip install --upgrade pip
$ pip install tomotopy

The supported OS and Python versions are:

  • Linux (x86-64) with Python >= 3.5

  • macOS >= 10.13 with Python >= 3.5

  • Windows 7 or later (x86, x86-64) with Python >= 3.5

  • Other OS with Python >= 3.5: compilation from source code required (with a C++11-compatible compiler)

After installing, you can start using tomotopy simply by importing it:

import tomotopy as tp
print(tp.isa) # prints 'avx2', 'avx', 'sse2' or 'none'

Currently, tomotopy can exploit the AVX2, AVX or SSE2 SIMD instruction sets to maximize performance. When the package is imported, it checks the available instruction sets and selects the best option. If tp.isa reports none, training iterations may take a long time. However, since most modern Intel and AMD CPUs provide SIMD instruction sets, SIMD acceleration usually yields a big improvement.

Here is sample code for simple LDA training on texts from the 'sample.txt' file:

import tomotopy as tp
mdl = tp.LDAModel(k=20)
for line in open('sample.txt'):
    mdl.add_doc(line.strip().split())

for i in range(0, 100, 10):
    mdl.train(10)
    print('Iteration: {}\tLog-likelihood: {}'.format(i, mdl.ll_per_word))

for k in range(mdl.k):
    print('Top 10 words of topic #{}'.format(k))
    print(mdl.get_topic_words(k, top_n=10))

mdl.summary()

Performance of tomotopy

tomotopy uses Collapsed Gibbs Sampling (CGS) to infer the distribution of topics and the distribution of words. Generally, CGS converges more slowly than the Variational Bayes (VB) used by [gensim's LdaModel], but each of its iterations can be computed much faster. In addition, tomotopy can take advantage of multicore CPUs and SIMD instructions, which can result in faster iterations.

[gensim’s LdaModel]: https://radimrehurek.com/gensim/models/ldamodel.html

The following charts compare the running time of an LDA model between tomotopy and gensim. The input data consists of 1,000 random documents from English Wikipedia with 1,506,966 words (about 10.1 MB). tomotopy runs 200 training iterations and gensim runs 10 iterations.

https://bab2min.github.io/tomotopy/images/tmt_i5.png

↑ Performance in Intel i5-6600, x86-64 (4 cores)

https://bab2min.github.io/tomotopy/images/tmt_xeon.png

↑ Performance in Intel Xeon E5-2620 v4, x86-64 (8 cores, 16 threads)

https://bab2min.github.io/tomotopy/images/tmt_r7_3700x.png

↑ Performance in AMD Ryzen7 3700X, x86-64 (8 cores, 16 threads)

Although tomotopy ran 20 times more iterations, its overall running time was 5-10 times shorter than gensim's, and it yields stable results.

It is difficult to compare CGS and VB directly because they are totally different techniques, but from a practical point of view we can compare their speed and results. The following chart shows the log-likelihood per word of the two models' results.

https://bab2min.github.io/tomotopy/images/LLComp.png

The SIMD instruction set has a great effect on performance. The following is a comparison between SIMD instruction sets.

https://bab2min.github.io/tomotopy/images/SIMDComp.png

Fortunately, most recent x86-64 CPUs provide the AVX2 instruction set, so we can enjoy the performance of AVX2.

Vocabulary controlling using CF and DF

CF (collection frequency) and DF (document frequency) are concepts used in information retrieval: CF is the total number of times a word appears in the corpus, and DF is the number of documents in which the word appears. tomotopy exposes these two measures through the min_cf and min_df parameters to trim low-frequency words when building the corpus.

For example, let’s say we have 5 documents #0 ~ #4 which are composed of the following words:

#0 : a, b, c, d, e, c
#1 : a, b, e, f
#2 : c, d, c
#3 : a, e, f, g
#4 : a, b, g

The CF of a and the CF of c are both 4 because each appears 4 times in the entire corpus. But the DF of a is 4 and the DF of c is 2, because a appears in #0, #1, #3 and #4 while c appears only in #0 and #2. So if we trim low-frequency words using min_cf=3, the result is as follows:

(d, f and g are removed.)
#0 : a, b, c, e, c
#1 : a, b, e
#2 : c, c
#3 : a, e
#4 : a, b

However, when min_df=3 the result is:

(c, d, f and g are removed.)
#0 : a, b, e
#1 : a, b, e
#2 : (empty doc)
#3 : a, e
#4 : a, b

As we can see, min_df is a stronger criterion than min_cf. In topic modeling, words that appear repeatedly in only one document do not contribute to estimating the topic-word distribution, so removing words with a low DF is a good way to reduce model size while preserving the results of the final model. In short, prefer min_df over min_cf.
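
For example, a minimal sketch (assuming a whitespace-tokenized 'sample.txt' as in the earlier examples) of trimming the vocabulary by document frequency when building a model might look like:

import tomotopy as tp

# a sketch only: drop words that appear in fewer than 3 documents
mdl = tp.LDAModel(k=20, min_df=3)
for line in open('sample.txt'):
    mdl.add_doc(line.strip().split())

mdl.train(0)  # a zero-iteration call initializes the model and builds the trimmed vocabulary
print('Vocab size after trimming:', len(mdl.used_vocabs))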

Model Save and Load

tomotopy provides save and load methods for each topic model class, so you can save a model to a file whenever you want and reload it from the file later.

import tomotopy as tp

mdl = tp.HDPModel()
for line in open('sample.txt'):
    mdl.add_doc(line.strip().split())

for i in range(0, 100, 10):
    mdl.train(10)
    print('Iteration: {}\tLog-likelihood: {}'.format(i, mdl.ll_per_word))

# save into file
mdl.save('sample_hdp_model.bin')

# load from file
mdl = tp.HDPModel.load('sample_hdp_model.bin')
for k in range(mdl.k):
    if not mdl.is_live_topic(k): continue
    print('Top 10 words of topic #{}'.format(k))
    print(mdl.get_topic_words(k, top_n=10))

# the saved model is an HDP model,
# so loading it with the LDA model class will raise an exception
mdl = tp.LDAModel.load('sample_hdp_model.bin')

When you load a model from a file, the model type stored in the file must match the class whose load method you call.

See the tomotopy.LDAModel.save and tomotopy.LDAModel.load methods for more details.

Documents in the Model and out of the Model

We can use topic models for two major purposes. The basic one is to discover topics from a set of documents as a result of training a model, and the more advanced one is to infer topic distributions for unseen documents using the trained model.

We call a document used for the former purpose (i.e. used for model training) a document in the model, and a document used for the latter purpose (i.e. unseen during training) a document out of the model.

In tomotopy, these two kinds of document are created differently. A document in the model is created by the tomotopy.LDAModel.add_doc method, which can only be called before tomotopy.LDAModel.train starts. In other words, once train has been called, add_doc cannot add a document to the model because the set of documents used for training has become fixed.

To acquire an instance of the created document, use tomotopy.LDAModel.docs like:

mdl = tp.LDAModel(k=20)
idx = mdl.add_doc(words)
if idx < 0: raise RuntimeError("Failed to add doc")
doc_inst = mdl.docs[idx]
# doc_inst is an instance of the added document

A document out of the model is created by the tomotopy.LDAModel.make_doc method, which can only be called after train starts. If you use make_doc before the set of documents used for training has become fixed, you may get wrong results. Since make_doc returns the instance directly, you can use its return value for further manipulation.

mdl = tp.LDAModel(k=20)
# add_doc ...
mdl.train(100)
doc_inst = mdl.make_doc(unseen_doc) # doc_inst is an instance of the unseen document

Inference for Unseen Documents

If a new document is created by tomotopy.LDAModel.make_doc, its topic distribution can be inferred by the model. Inference for unseen documents is performed using the tomotopy.LDAModel.infer method.

mdl = tp.LDAModel(k=20)
# add_doc ...
mdl.train(100)
doc_inst = mdl.make_doc(unseen_doc)
topic_dist, ll = mdl.infer(doc_inst)
print("Topic Distribution for Unseen Docs: ", topic_dist)
print("Log-likelihood of inference: ", ll)

The infer method accepts either a single instance of tomotopy.Document or a list of instances. See tomotopy.LDAModel.infer for more details.
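
For example, a minimal sketch of inferring several unseen documents at once (assuming mdl is an already-trained model and unseen_docs is a list of tokenized documents, i.e. lists of words) might look like:

# a sketch only: infer a list of documents together
doc_insts = [mdl.make_doc(words) for words in unseen_docs]
topic_dists, ll = mdl.infer(doc_insts)
for words, dist in zip(unseen_docs, topic_dists):
    print(words[:5], '->', dist)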

Parallel Sampling Algorithms

Since version 0.5.0, tomotopy allows you to choose a parallelism algorithm. The algorithm provided in versions prior to 0.4.2 is COPY_MERGE, which is available for all topic models. The new algorithm PARTITION, available since 0.5.0, makes training generally faster and more memory-efficient, but it is not available for all topic models.
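
As a minimal sketch (assuming documents have already been added to mdl), the algorithm can be selected explicitly through the parallel argument of train:

import tomotopy as tp

mdl = tp.LDAModel(k=20)
# add documents with mdl.add_doc(...) here
# choose the PARTITION algorithm explicitly; the number of workers is just an example
mdl.train(200, workers=4, parallel=tp.ParallelScheme.PARTITION)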

The following charts show the speed difference between the two algorithms based on the number of topics and the number of workers.

https://bab2min.github.io/tomotopy/images/algo_comp.png https://bab2min.github.io/tomotopy/images/algo_comp2.png

Performance by Version

Performance changes by version are shown in the following graphs. The time taken to train the LDA model for 1,000 iterations was measured. (Docs: 11,314, Vocab: 60,382, Words: 2,364,724, Intel Xeon Gold 5120 @ 2.2GHz)

https://bab2min.github.io/tomotopy/images/lda-perf-t1.png https://bab2min.github.io/tomotopy/images/lda-perf-t4.png https://bab2min.github.io/tomotopy/images/lda-perf-t8.png

Pinning Topics using Word Priors

Since version 0.6.0, a new method tomotopy.LDAModel.set_word_prior has been added. It allows you to control the word prior for each topic. For example, with the following code we can set the weight of the word 'church' to 1.0 in topic 0 and to 0.1 in the rest of the topics. This means that the probability of the word 'church' being assigned to topic 0 is 10 times higher than the probability of it being assigned to any other topic. Therefore, most occurrences of 'church' are assigned to topic 0, and topic 0 comes to contain many words related to 'church'. This lets you pin certain topics to specific topic numbers.

import tomotopy as tp
mdl = tp.LDAModel(k=20)

# add documents into `mdl`

# setting word prior
mdl.set_word_prior('church', [1.0 if k == 0 else 0.1 for k in range(20)])

See word_prior_example in example.py for more details.

Examples

You can find example Python code for tomotopy at https://github.com/bab2min/tomotopy/blob/main/examples/ .

You can also get the data file used in the example code at https://drive.google.com/file/d/18OpNijd4iwPyYZ2O7pQoPyeTAKEXa71J/view .

License

tomotopy is licensed under the terms of the MIT License, meaning you can use it for any reasonable purpose and remain in complete ownership of all the documentation you produce.

History

  • 0.11.1 (2021-03-28)
    • A critical bug of asymmetric alphas was fixed. Due to this bug, version 0.11.0 has been removed from releases.

  • 0.11.0 (2021-03-26) (removed)
    • A new topic model tomotopy.PTModel for short texts was added into the package.

    • An issue was fixed where tomotopy.HDPModel.infer causes a segmentation fault sometimes.

    • A mismatch of numpy API version was fixed.

    • Now asymmetric document-topic priors are supported.

    • Serializing topic models to bytes in memory is supported.

    • An argument normalize was added to get_topic_dist(), get_topic_word_dist() and get_sub_topic_dist() for controlling normalization of results.

    • Now tomotopy.DMRModel.lambdas and tomotopy.DMRModel.alpha give correct values.

    • Categorical metadata support for tomotopy.GDMRModel was added (see https://github.com/bab2min/tomotopy/blob/main/examples/gdmr_both_categorical_and_numerical.py ).

    • Python3.5 support was dropped.

  • 0.10.2 (2021-02-16)
    • An issue was fixed where tomotopy.CTModel.train fails with large K.

    • An issue was fixed where tomotopy.utils.Corpus lost its uid values.

  • 0.10.1 (2021-02-14)
    • An issue was fixed where tomotopy.utils.Corpus.extract_ngrams crashed with empty input.

    • An issue was fixed where tomotopy.LDAModel.infer raised an exception for valid input.

    • An issue was fixed where tomotopy.HLDAModel.infer generated a wrong tomotopy.Document.path.

    • A new parameter freeze_topics was added to tomotopy.HLDAModel.train, so you can control whether new topics are created during training.

  • 0.10.0 (2020-12-19)
    • The interfaces of tomotopy.utils.Corpus and tomotopy.LDAModel.docs were unified. Now you can access documents in a corpus in the same manner.

    • __getitem__ of tomotopy.utils.Corpus was improved. In addition to indexing by int, indexing by Iterable[int], slicing, and indexing by uid are now supported.

    • New methods tomotopy.utils.Corpus.extract_ngrams and tomotopy.utils.Corpus.concat_ngrams were added. They extract n-gram collocations using PMI and concatenate them into single words.

    • A new method tomotopy.LDAModel.add_corpus was added, and tomotopy.LDAModel.infer can receive corpus as input.

    • A new module tomotopy.coherence was added. It provides a way to calculate the coherence of a model.

    • A parameter window_size was added to tomotopy.label.FoRelevance.

    • An issue was fixed where NaN often occurs when training tomotopy.HDPModel.

    • Now Python3.9 is supported.

    • The dependency on py-cpuinfo was removed and module initialization was improved.

  • 0.9.1 (2020-08-08)
    • Memory leaks in version 0.9.0 were fixed.

    • tomotopy.CTModel.summary() was fixed.

  • 0.9.0 (2020-08-04)
    • The tomotopy.LDAModel.summary() method, which prints a human-readable summary of the model, has been added.

    • The package's random number generator has been replaced with [EigenRand]. This speeds up random number generation and resolves the differences in results between platforms.

    • Due to the above, even if the seed is the same, model training results may differ from those of versions before 0.9.0.

    • Fixed a training error in tomotopy.HDPModel.

    • tomotopy.DMRModel.alpha now shows Dirichlet prior of per-document topic distribution by metadata.

    • tomotopy.DTModel.get_count_by_topics() has been modified to return a 2-dimensional ndarray.

    • tomotopy.DTModel.alpha has been modified to return the same value as tomotopy.DTModel.get_alpha().

    • Fixed an issue where the metadata value could not be obtained for the document of tomotopy.GDMRModel.

    • tomotopy.HLDAModel.alpha now shows Dirichlet prior of per-document depth distribution.

    • tomotopy.LDAModel.global_step has been added.

    • tomotopy.MGLDAModel.get_count_by_topics() now returns the word count for both global and local topics.

    • tomotopy.PAModel.alpha, tomotopy.PAModel.subalpha, and tomotopy.PAModel.get_count_by_super_topic() have been added.

[EigenRand]: https://github.com/bab2min/EigenRand

  • 0.8.2 (2020-07-14)
    • New properties tomotopy.DTModel.num_timepoints and tomotopy.DTModel.num_docs_by_timepoint have been added.

    • A bug that caused different results on different platforms even with the same seed was partially fixed. As a result of this fix, 32-bit builds of tomotopy now yield different training results than earlier versions.

  • 0.8.1 (2020-06-08)
    • A bug where tomotopy.LDAModel.used_vocabs returned an incorrect value was fixed.

    • Now tomotopy.CTModel.prior_cov returns a covariance matrix with shape [k, k].

    • Now tomotopy.CTModel.get_correlations with empty arguments returns a correlation matrix with shape [k, k].

  • 0.8.0 (2020-06-06)
    • Since NumPy was introduced into tomotopy, many methods and properties now return numpy.ndarray instead of just list.

    • tomotopy has a new dependency, NumPy >= 1.10.0.

    • A wrong estimation of tomotopy.HDPModel.infer was fixed.

    • A new method for converting an HDPModel to an LDAModel was added.

    • New properties including tomotopy.LDAModel.used_vocabs, tomotopy.LDAModel.used_vocab_freq and tomotopy.LDAModel.used_vocab_df were added into topic models.

    • A new g-DMR topic model (tomotopy.GDMRModel) was added.

    • An error when initializing tomotopy.label.FoRelevance on macOS was fixed.

    • An error that occurred when using a tomotopy.utils.Corpus created without raw parameters was fixed.

  • 0.7.1 (2020-05-08)
    • tomotopy.Document.path was added for tomotopy.HLDAModel.

    • A memory corruption bug in tomotopy.label.PMIExtractor was fixed.

    • A compile error in gcc 7 was fixed.

  • 0.7.0 (2020-04-18)
    • tomotopy.DTModel was added into the package.

    • A bug in tomotopy.utils.Corpus.save was fixed.

    • A new method tomotopy.Document.get_count_vector was added into Document class.

    • Now Linux builds use manylinux2010 and an additional optimization is applied.

  • 0.6.2 (2020-03-28)
    • A critical bug related to save and load was fixed. Version 0.6.0 and 0.6.1 have been removed from releases.

  • 0.6.1 (2020-03-22) (removed)
    • A bug related to module loading was fixed.

  • 0.6.0 (2020-03-22) (removed)
    • The tomotopy.utils.Corpus class, which makes it easy to manage multiple documents, was added.

    • The tomotopy.LDAModel.set_word_prior method, which controls the word-topic priors of topic models, was added.

    • A new argument min_df that filters words based on document frequency was added into every topic model’s __init__.

    • tomotopy.label, a submodule for topic labeling, was added. Currently, only tomotopy.label.FoRelevance is provided.

  • 0.5.2 (2020-03-01)
    • A segmentation fault problem was fixed in tomotopy.LLDAModel.add_doc.

    • A bug where infer of tomotopy.HDPModel sometimes crashed the program was fixed.

    • A crash issue of tomotopy.LDAModel.infer with ps=tomotopy.ParallelScheme.PARTITION, together=True was fixed.

  • 0.5.1 (2020-01-11)
    • A bug where tomotopy.SLDAModel.make_doc did not support missing values for y was fixed.

    • Now tomotopy.SLDAModel fully supports missing values for response variables y. Documents with missing values (NaN) are included in topic modeling, but excluded from the regression of response variables.

  • 0.5.0 (2019-12-30)
    • Now tomotopy.PAModel.infer returns both the topic distribution and the sub-topic distribution.

    • New methods get_sub_topics and get_sub_topic_dist were added into tomotopy.Document. (for PAModel)

    • A new parameter parallel was added to the tomotopy.LDAModel.train and tomotopy.LDAModel.infer methods. You can select the parallelism algorithm by changing this parameter.

    • tomotopy.ParallelScheme.PARTITION, a new algorithm, was added. It works efficiently when the number of workers is large and the number of topics or the vocabulary size is big.

    • A bug where rm_top didn’t work at min_cf < 2 was fixed.

  • 0.4.2 (2019-11-30)
    • Wrong topic assignments of tomotopy.LLDAModel and tomotopy.PLDAModel were fixed.

    • Readable __repr__ of tomotopy.Document and tomotopy.Dictionary was implemented.

  • 0.4.1 (2019-11-27)
    • A bug at init function of tomotopy.PLDAModel was fixed.

  • 0.4.0 (2019-11-18)
    • New models including tomotopy.PLDAModel and tomotopy.HLDAModel were added into the package.

  • 0.3.1 (2019-11-05)
    • An issue where get_topic_dist() returns incorrect value when min_cf or rm_top is set was fixed.

    • The return value of get_topic_dist() of tomotopy.MGLDAModel document was fixed to include local topics.

    • The estimation speed with tw=ONE was improved.

  • 0.3.0 (2019-10-06)
    • A new model, tomotopy.LLDAModel was added into the package.

    • A crashing issue of HDPModel was fixed.

    • Since hyperparameter estimation for HDPModel was implemented, the result of HDPModel may differ from previous versions.

      If you want to turn off hyperparameter estimation of HDPModel, set optim_interval to zero.

  • 0.2.0 (2019-08-18)
    • New models including tomotopy.CTModel and tomotopy.SLDAModel were added into the package.

    • A new parameter option rm_top was added for all topic models.

    • The problems in save and load method for PAModel and HPAModel were fixed.

    • An occasional crash when loading HDPModel was fixed.

    • The problem that ll_per_word was calculated incorrectly when min_cf > 0 was fixed.

  • 0.1.6 (2019-08-09)
    • Compilation errors with clang in the macOS environment were fixed.

  • 0.1.4 (2019-08-05)
    • The issue where add_doc received an empty list as input was fixed.

    • The issue that tomotopy.PAModel.get_topic_words did not extract the word distribution of subtopics was fixed.

  • 0.1.3 (2019-05-19)
    • The parameter min_cf and its stopword-removing function were added for all topic models.

  • 0.1.0 (2019-05-12)
    • First version of tomotopy

