Thinc: Practical Machine Learning for NLP in Python

Thinc is the machine learning library powering spaCy. It features a battle-tested linear model designed for large sparse learning problems, and a flexible neural network model under development for spaCy v2.0.

Thinc is a practical toolkit for implementing models that follow the "Embed, encode, attend, predict" architecture. It's designed to be easy to install, efficient for CPU usage and optimised for NLP and deep learning with text – in particular, hierarchically structured input and variable-length sequences.

🔮 Version 6.12 out now! Read the release notes here.

What's where (as of v6.9.0)

Module                 Description
thinc.v2v.Model        Base class.
thinc.v2v              Layers transforming vectors to vectors.
thinc.i2v              Layers embedding IDs to vectors.
thinc.t2v              Layers pooling tensors to vectors.
thinc.t2t              Layers transforming tensors to tensors (e.g. CNN, LSTM).
thinc.api              Higher-order functions, for building networks. Will be renamed.
thinc.extra            Datasets and utilities.
thinc.neural.ops       Container classes for mathematical operations. Will be reorganized.
thinc.linear.avgtron   Legacy efficient Averaged Perceptron implementation.

Development status

Thinc's deep learning functionality is still under active development: APIs are unstable, and we're not yet ready to provide usage support. However, if you're already quite familiar with neural networks, there's a lot here you might find interesting. Thinc's conceptual model is quite different from TensorFlow's. Thinc also implements some novel features, such as a small DSL for concisely wiring up models, embedding tables that support pre-computation and the hashing trick, dynamic batch sizes, a concatenation-based approach to variable-length sequences, and support for model averaging for the Adam solver (which performs very well).

No computational graph – just higher order functions

The central problem for a neural network implementation is this: during the forward pass, you compute results that will later be useful during the backward pass. How do you keep track of this arbitrary state, while making sure that layers can be cleanly composed?

Most libraries solve this problem by having you declare the forward computations, which are then compiled into a graph somewhere behind the scenes. Thinc doesn't have a "computational graph". Instead, we just use the stack, because we put the state from the forward pass into callbacks.

All nodes in the network have a simple signature:

f(inputs) -> {outputs, f(d_outputs)->d_inputs}

To make this less abstract, here's a ReLu activation, following this signature:

def relu(inputs):
    mask = inputs > 0   # remember which inputs were positive
    def backprop_relu(d_outputs, optimizer):
        # Gradient flows only where the forward pass was positive.
        return d_outputs * mask
    return inputs * mask, backprop_relu

When you call the relu function, you get back an output variable, and a callback. This lets you calculate a gradient using the output, and then pass it into the callback to perform the backward pass.
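
To make the call pattern concrete, here's a quick sketch (assuming numpy array inputs; since relu has no weights, we can pass None for the optimizer):

import numpy

x = numpy.array([[-1.0, 2.0], [3.0, -4.0]])
y, backprop = relu(x)                      # forward pass: output plus callback
d_x = backprop(numpy.ones_like(y), None)   # backward pass reuses the saved mask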

This signature makes it easy to build a complex network out of smaller pieces, using arbitrary higher-order functions you can write yourself. To make this clearer, we need a function for a weights layer. Usually this will be implemented as a class — but let's continue using closures, to keep things concise, and to keep the simplicity of the interface explicit.

The main complication for the weights layer is that we now have a side-effect to manage: we would like to update the weights. There are a few ways to handle this. In Thinc we currently pass a callable into the backward pass. (I'm not convinced this is best.)

import numpy

def create_linear_layer(n_out, n_in):
    W = numpy.zeros((n_out, n_in))
    b = numpy.zeros((n_out, 1))

    def forward(X):
        Y = W @ X + b
        def backward(dY, optimizer):
            dX = W.T @ dY
            dW = numpy.einsum('ik,jk->ij', dY, X)   # i.e. dY @ X.T
            db = dY.sum(axis=1, keepdims=True)      # sum over the batch

            # The optimizer callable is expected to update the weights in place.
            optimizer(W, dW)
            optimizer(b, db)

            return dX
        return Y, backward
    return forward

If we call Wb = create_linear_layer(5, 4), the variable Wb will be the forward() function, implemented inside the body of create_linear_layer(). The Wb instance will have access to the W and b variables defined in its outer scope. If we invoke create_linear_layer() again, we get a new instance with its own internal state.

The Wb instance and the relu function have exactly the same signature. This makes it easy to write higher order functions to compose them. The most obvious thing to do is chain them together:

def chain(*layers):
    def forward(X):
        backprops = []
        Y = X
        for layer in layers:
            Y, backprop = layer(Y)
            backprops.append(backprop)
        def backward(dY, optimizer):
            for backprop in reversed(backprops):
                dY = backprop(dY, optimizer)
            return dY
        return Y, backward
    return forward

We could now chain our linear layer together with the relu activation, to create a simple feed-forward network:

Wb1 = create_linear_layer(10, 5)
Wb2 = create_linear_layer(3, 10)

model = chain(Wb1, relu, Wb2)

X = numpy.random.uniform(size=(5, 4))   # 5 input features, batch of 4
truth = numpy.zeros((3, 4))             # placeholder targets, same shape as y

def optimizer(weights, gradient):       # minimal in-place SGD
    weights -= 0.001 * gradient

y, bp_y = model(X)

dY = y - truth
dX = bp_y(dY, optimizer)

This conceptual model makes Thinc very flexible. The trade-off is that Thinc is less convenient and less efficient for workloads that fit exactly into what TensorFlow and similar libraries are designed for. If your graph really is static and your inputs are homogeneous in size and shape, Keras will likely be faster and simpler. But if you want to pass normal Python objects through your network, or handle sequences and recursions of arbitrary length or complexity, you might find Thinc's design a better fit for your problem.

Quickstart

Thinc should install cleanly with both pip and conda, for Python 2.7 and 3.5+, on Linux, macOS / OSX and Windows. Its only system dependencies are a compiler tool-chain (e.g. build-essential) and the Python development headers (e.g. python-dev).

pip install thinc

For GPU support, we're grateful to use the work of Chainer's cupy module, which provides a numpy-compatible interface for GPU arrays. However, installing Chainer when no GPU is available currently causes an error. We therefore do not list Chainer as an explicit dependency — so building Thinc for GPU requires some extra steps:

export CUDA_HOME=/usr/local/cuda-8.0 # Or wherever your CUDA is
export PATH=$PATH:$CUDA_HOME/bin
pip install chainer
python -c "import cupy; assert cupy" # Check it installed
pip install thinc_gpu_ops thinc # Or `thinc[cuda]`
python -c "import thinc_gpu_ops" # Check the GPU ops were built

The rest of this section describes how to build Thinc from source. If you have Fabric installed, you can use the shortcut:

git clone https://github.com/explosion/thinc
cd thinc
fab clean env make test

You can then run the examples as follows:

fab eg.mnist
fab eg.basic_tagger
fab eg.cnn_tagger

Otherwise, you can build and test explicitly with:

git clone https://github.com/explosion/thinc
cd thinc

virtualenv .env
source .env/bin/activate

pip install -r requirements.txt
python setup.py build_ext --inplace
py.test thinc/

And then run the examples as follows:

python examples/mnist.py
python examples/basic_tagger.py
python examples/cnn_tagger.py

Usage

The Neural Network API is still subject to change, even within minor versions. You can get a feel for the current API by checking out the examples. Here are a few quick highlights.

1. Shape inference

Models can be created with some dimensions unspecified. Missing dimensions are inferred when pre-trained weights are loaded or when training begins. This eliminates a common source of programmer error:

# Invalid network: the dimensions don't line up (note the 748/784 typo)
model = chain(ReLu(512, 748), ReLu(512, 784), Softmax(10))

# Leave the dimensions unspecified, and you can't be wrong.
model = chain(ReLu(512), ReLu(512), Softmax())
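
To illustrate the mechanism, here's a sketch of deferred initialization in the closure style from earlier. Thinc's actual layers infer missing dimensions when weights are loaded or training begins, as described above; this sketch simply infers n_in from the first batch it sees:

import numpy

def create_lazy_linear(n_out):
    params = {}
    def forward(X):
        if 'W' not in params:
            # Infer the input dimension from the first batch we see.
            params['W'] = numpy.zeros((n_out, X.shape[0]))
            params['b'] = numpy.zeros((n_out, 1))
        W, b = params['W'], params['b']
        def backward(dY, optimizer):
            optimizer(W, dY @ X.T)
            optimizer(b, dY.sum(axis=1, keepdims=True))
            return W.T @ dY
        return W @ X + b, backward
    return forward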

2. Operator overloading

The Model.define_operators() classmethod allows you to bind arbitrary binary functions to Python operators, for use in any Model instance. The method can (and should) be used as a context-manager, so that the overloading is limited to the immediate block. This allows concise and expressive model definition:

with Model.define_operators({'>>': chain}):
    model = ReLu(512) >> ReLu(512) >> Softmax()

The overloading is cleaned up at the end of the block. A fairly arbitrary zoo of functions is currently implemented. Some of the most useful:

  • chain(model1, model2): Compose two models f(x) and g(x) into a single model computing g(f(x)).
  • clone(model1, int): Create n copies of a model, each with distinct weights, and chain them together.
  • concatenate(model1, model2): Given two models with output dimensions (n,) and (m,), construct a model with output dimensions (m+n,).
  • add(model1, model2): add(f(x), g(x)) = f(x)+g(x)
  • make_tuple(model1, model2): Construct tuples of the outputs of two models, at the batch level. The backward pass expects to receive a tuple of gradients, which are routed through the appropriate model, and summed.
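
As a taste of how simple such combinators can be, here's concatenate sketched in the closure style from earlier, assuming (as in the examples above) that outputs are arrays with one row per output unit. Thinc's actual combinators are built on its Model class:

import numpy

def concatenate(layer1, layer2):
    def forward(X):
        Y1, bp1 = layer1(X)
        Y2, bp2 = layer2(X)
        split = Y1.shape[0]
        def backward(dY, optimizer):
            dX1 = bp1(dY[:split], optimizer)
            dX2 = bp2(dY[split:], optimizer)
            return dX1 + dX2    # both layers saw the same input
        return numpy.concatenate((Y1, Y2), axis=0), backward
    return forward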

Putting these things together, here's the sort of tagging model that Thinc is designed to make easy.

with Model.define_operators({'>>': chain, '**': clone, '|': concatenate}):
    model = (
        add_eol_markers('EOL')
        >> flatten
        >> memoize(
            CharLSTM(char_width)
            | (normalize >> str2int >> Embed(word_width)))
        >> ExtractWindow(nW=2)
        >> BatchNorm(ReLu(hidden_width)) ** 3
        >> Softmax()
    )

Not all of these pieces are implemented yet, but hopefully this shows where we're going. The memoize function will be particularly important: in any batch of text, the common words will be very common. It's therefore important to evaluate models such as the CharLSTM once per word type per minibatch, rather than once per token.
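
To make the memoization idea concrete, here's one way such a wrapper could work, again in the closure style. It assumes the wrapped layer takes a list of hashable inputs (e.g. word strings) and returns an array with one row per input; it's a sketch, not Thinc's implementation:

import numpy

def memoize(layer):
    def forward(tokens):
        uniq = sorted(set(tokens))                # one entry per word type
        index = {tok: i for i, tok in enumerate(uniq)}
        Y_uniq, bp_uniq = layer(uniq)             # evaluate once per type
        positions = [index[tok] for tok in tokens]
        def backward(dY, optimizer):
            d_uniq = numpy.zeros_like(Y_uniq)
            for row, pos in enumerate(positions):
                d_uniq[pos] += dY[row]            # sum gradients per type
            return bp_uniq(d_uniq, optimizer)
        return Y_uniq[positions], backward
    return forward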

3. Callback-based backpropagation

Most neural network libraries use a computational graph abstraction. This takes the execution away from you, so that gradients can be computed automatically. Thinc follows a style more like the autograd library, but with larger operations. Usage is as follows:

def explicit_sgd_update(X, y):
    def sgd(weights, gradient):
        weights -= 0.001 * gradient   # update the weights in place
    yh, finish_update = model.begin_update(X, drop=0.2)
    finish_update(y - yh, sgd)

Separating the backpropagation into three parts like this has many advantages. The interface to all models is completely uniform — there is no distinction between the top-level model you use as a predictor and the internal models for the layers. We also make concurrency simple, by making the begin_update() step a pure function, and separating the accumulation of the gradient from the action of the optimizer.
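
For example, because the optimizer is just a callable applied to (weights, gradient) pairs, deferring updates is straightforward. A minimal sketch, assuming numpy parameter arrays that are updated in place (the names here are illustrative, not Thinc API):

def make_deferred_sgd(learn_rate=0.001):
    pending = []                               # (weights, gradient) pairs
    def accumulate(weights, gradient):
        pending.append((weights, gradient.copy()))
    def step():
        for weights, gradient in pending:
            weights -= learn_rate * gradient   # apply the updates in place
        del pending[:]
    return accumulate, step

Passing accumulate as the optimizer callback to several finish_update() calls, then calling step(), applies all the accumulated updates at once.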

4. Class annotations

To keep the class hierarchy shallow, Thinc uses class decorators to reuse code for layer definitions. Specifically, the following decorators are available:

  • describe.attributes(): Allows attributes to be specified by keyword argument. Used especially for dimensions and parameters.
  • describe.on_init(): Allows callbacks to be specified, which will be called at the end of the __init__ method.
  • describe.on_data(): Allows callbacks to be specified, which will be called on Model.begin_training().
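
The exact callback signatures are best checked against the source. The underlying pattern, though, is plain class decoration; here's a generic, illustrative sketch of the on_init idea (not Thinc's implementation):

def on_init(callback):
    # Class decorator: run `callback(instance)` at the end of __init__.
    def decorator(cls):
        original_init = cls.__init__
        def wrapped_init(self, *args, **kwargs):
            original_init(self, *args, **kwargs)
            callback(self)
        cls.__init__ = wrapped_init
        return cls
    return decorator

@on_init(lambda self: setattr(self, 'initialized', True))
class Linear(object):
    def __init__(self, n_out):
        self.n_out = n_out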

🛠 Changelog

Version   Date         Description
v6.12.1   2018-11-30   Fix msgpack pin
v6.12.0   2018-10-15   Wheels and separate GPU ops
v6.10.3   2018-07-21   Python 3.7 support and dependency updates
v6.11.2   2018-05-21   Improve GPU installation
v6.11.1   2018-05-20   Support direct linkage to BLAS libraries
v6.11.0   2018-03-16   n/a
v6.10.2   2017-12-06   Efficiency improvements and bug fixes
v6.10.1   2017-11-15   Fix GPU install and minor memory leak
v6.10.0   2017-10-28   CPU efficiency improvements, refactoring
v6.9.0    2017-10-03   Reorganize layers, bug fix to Layer Normalization
v6.8.2    2017-09-26   Fix packaging of gpu_ops
v6.8.1    2017-08-23   Fix Windows support
v6.8.0    2017-07-25   SELU layer, attention, improved GPU/CPU compatibility
v6.7.3    2017-06-05   Fix convolution on GPU
v6.7.2    2017-06-02   Bug fixes to serialization
v6.7.1    2017-06-02   Improve serialization
v6.7.0    2017-06-01   Fixes to serialization, hash embeddings and flatten ops
v6.6.0    2017-05-14   Improved GPU usage and examples
v6.5.2    2017-03-20   n/a
v6.5.1    2017-03-20   Improved linear class and Windows fix
v6.5.0    2017-03-11   Supervised similarity, fancier embedding and improvements to linear model
v6.4.0    2017-02-15   n/a
v6.3.0    2017-01-25   Efficiency improvements, argument checking and error messaging
v6.2.0    2017-01-15   Improve API and introduce overloaded operators
v6.1.3    2017-01-10   More neural network functions and training continuation
v6.1.2    2017-01-09   n/a
v6.1.1    2017-01-09   n/a
v6.1.0    2017-01-09   n/a
v6.0.0    2016-12-31   Add thinc.neural for NLP-oriented deep learning
