
Practical Machine Learning for NLP


Thinc is the machine learning library powering spaCy. It features a battle-tested linear model designed for large sparse learning problems, and a flexible neural network model under development for spaCy v2.0.

Thinc is a practical toolkit for implementing models that follow the “Embed, encode, attend, predict” architecture. It’s designed to be easy to install, efficient for CPU usage and optimised for NLP and deep learning with text – in particular, hierarchically structured input and variable-length sequences.

🔮 Version 6.12 out now! Read the release notes here.


What’s where (as of v6.9.0)

Module | Description
thinc.v2v.Model | Base class.
thinc.v2v | Layers transforming vectors to vectors.
thinc.i2v | Layers embedding IDs to vectors.
thinc.t2v | Layers pooling tensors to vectors.
thinc.t2t | Layers transforming tensors to tensors (e.g. CNN, LSTM).
thinc.api | Higher-order functions, for building networks. Will be renamed.
thinc.extra | Datasets and utilities.
thinc.neural.ops | Container classes for mathematical operations. Will be reorganized.
thinc.linear.avgtron | Legacy efficient Averaged Perceptron implementation.

Development status

Thinc’s deep learning functionality is still under active development: APIs are unstable, and we’re not yet ready to provide usage support. However, if you’re already quite familiar with neural networks, there’s a lot here you might find interesting. Thinc’s conceptual model is quite different from TensorFlow’s. Thinc also implements some novel features, such as a small DSL for concisely wiring up models, embedding tables that support pre-computation and the hashing trick, dynamic batch sizes, a concatenation-based approach to variable-length sequences, and support for model averaging for the Adam solver (which performs very well).

No computational graph – just higher order functions

The central problem for a neural network implementation is this: during the forward pass, you compute results that will later be useful during the backward pass. How do you keep track of this arbitrary state, while making sure that layers can be cleanly composed?

Most libraries solve this problem by having you declare the forward computations, which are then compiled into a graph somewhere behind the scenes. Thinc doesn’t have a “computational graph”. Instead, we just use the stack, because we put the state from the forward pass into callbacks.

All nodes in the network have a simple signature:

f(inputs) -> {outputs, f(d_outputs)->d_inputs}

To make this less abstract, here’s a ReLu activation, following this signature:

def relu(inputs):
    mask = inputs > 0
    def backprop_relu(d_outputs, optimizer):
        return d_outputs * mask
    return inputs * mask, backprop_relu

When you call the relu function, you get back an output variable, and a callback. This lets you calculate a gradient using the output, and then pass it into the callback to perform the backward pass.
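
For example, here is a minimal round trip through the relu defined above (assuming numpy is imported; the all-ones gradient and the None optimizer argument are just placeholders for illustration):

X = numpy.array([[1.0, -2.0], [3.0, -0.5]])
Y, backprop = relu(X)         # forward pass: negative inputs are zeroed
dY = numpy.ones_like(Y)       # pretend this is the gradient of some loss w.r.t. Y
dX = backprop(dY, None)       # backward pass: gradient only flows where X > 0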

This signature makes it easy to build a complex network out of smaller pieces, using arbitrary higher-order functions you can write yourself. To make this clearer, we need a function for a weights layer. Usually this will be implemented as a class — but let’s continue using closures, to keep things concise, and to keep the simplicity of the interface explicit.

The main complication for the weights layer is that we now have a side-effect to manage: we would like to update the weights. There are a few ways to handle this. In Thinc we currently pass a callable into the backward pass. (I’m not convinced this is best.)

import numpy

def create_linear_layer(n_out, n_in):
    W = numpy.zeros((n_out, n_in))
    b = numpy.zeros((n_out, 1))

    def forward(X):
        Y = W @ X + b
        def backward(dY, optimizer):
            dX = W.T @ dY
            dW = numpy.einsum('ik,jk->ij', dY, X)
            db = dY.sum(axis=1, keepdims=True)  # sum over the batch, keeping b's (n_out, 1) shape

            optimizer(W, dW)
            optimizer(b, db)

            return dX
        return Y, backward
    return forward

If we call Wb = create_linear_layer(5, 4), the variable Wb will be the forward() function, implemented inside the body of create_linear_layer(). The Wb instance will have access to the W and b variables defined in its outer scope. If we invoke create_linear_layer() again, we get a new instance, with its own internal state.

The Wb instance and the relu function have exactly the same signature. This makes it easy to write higher order functions to compose them. The most obvious thing to do is chain them together:

def chain(*layers):
    def forward(X):
        backprops = []
        Y = X
        for layer in layers:
            Y, backprop = layer(Y)
            backprops.append(backprop)
        def backward(dY, optimizer):
            for backprop in reversed(backprops):
                dY = backprop(dY, optimizer)
            return dY
        return Y, backward
    return forward

We could now chain our linear layer together with the relu activation, to create a simple feed-forward network:

Wb1 = create_linear_layer(10, 5)
Wb2 = create_linear_layer(3, 10)

model = chain(Wb1, relu, Wb2)

X = numpy.random.uniform(size=(5, 4))
truth = numpy.random.uniform(size=(3, 4))   # placeholder gold-standard output

def optimizer(weights, gradient):
    # Toy in-place SGD, matching how backward() calls optimizer(W, dW).
    weights -= 0.001 * gradient

y, bp_y = model(X)

dY = y - truth
dX = bp_y(dY, optimizer)

This conceptual model makes Thinc very flexible. The trade-off is that Thinc is less convenient and efficient for workloads that fit exactly into what TensorFlow etc. are designed for. If your graph really is static, and your inputs are homogeneous in size and shape, Keras will likely be faster and simpler. But if you want to pass normal Python objects through your network, or handle sequences and recursions of arbitrary length or complexity, you might find Thinc’s design a better fit for your problem.

Quickstart

Thinc should install cleanly with both pip and conda, for Python 2.7 and 3.5+, on Linux, macOS / OSX and Windows. Its only system dependencies are a compiler toolchain (e.g. build-essential) and the Python development headers (e.g. python-dev).

pip install thinc

For GPU support, we’re grateful to use the work of Chainer’s cupy module, which provides a numpy-compatible interface for GPU arrays. However, installing Chainer when no GPU is available currently causes an error. We therefore do not list Chainer as an explicit dependency — so building Thinc for GPU requires some extra steps:

export CUDA_HOME=/usr/local/cuda-8.0 # Or wherever your CUDA is
export PATH=$PATH:$CUDA_HOME/bin
pip install chainer
python -c "import cupy; assert cupy" # Check it installed
pip install thinc_gpu_ops thinc # Or `thinc[cuda]`
python -c "import thinc_gpu_ops" # Check the GPU ops were built

The rest of this section describes how to build Thinc from source. If you have Fabric installed, you can use the shortcut:

git clone https://github.com/explosion/thinc
cd thinc
fab clean env make test

You can then run the examples as follows:

fab eg.mnist
fab eg.basic_tagger
fab eg.cnn_tagger

Otherwise, you can build and test explicitly with:

git clone https://github.com/explosion/thinc
cd thinc

virtualenv .env
source .env/bin/activate

pip install -r requirements.txt
python setup.py build_ext --inplace
py.test thinc/

And then run the examples as follows:

python examples/mnist.py
python examples/basic_tagger.py
python examples/cnn_tagger.py

Customizing the matrix multiplication backend

Prior to v6.11, Thinc relied on numpy for matrix multiplications. When numpy is installed via wheel using pip (the default), numpy will usually be linked against a suboptimal matrix multiplication kernel. This made it difficult to ensure that Thinc was well optimized for the target machine.

To fix this, Thinc now provides its own matrix multiplications, by bundling the source code for OpenBLAS’s sgemm kernel within the library. To change the default BLAS library, you can specify an environment variable, giving the location of the shared library you want to link against:

THINC_BLAS=/opt/openblas/lib/libopenblas.so pip install thinc --no-cache-dir --no-binary :all:
export LD_LIBRARY_PATH=/opt/openblas/lib
# On OSX:
# export DYLD_LIBRARY_PATH=/opt/openblas/lib

If you want to link against the Intel MKL instead of OpenBLAS, the easiest way is to install Miniconda. For instance, if you installed miniconda to /opt/miniconda, the command to install Thinc linked against MKL would be:

THINC_BLAS=/opt/miniconda/numpy-mkl/lib/libmkl_rt.so pip install thinc --no-cache-dir --no-binary :all:
export LD_LIBRARY_PATH=/opt/miniconda/numpy-mkl/lib
# On OSX:
# export DYLD_LIBRARY_PATH=/opt/miniconda/numpy-mkl/lib

If the library file ends in a .a extension, it is linked statically; if it ends in .so, it is linked dynamically. If you use dynamic linking, make sure the directory is on your LD_LIBRARY_PATH at runtime.

Usage

The Neural Network API is still subject to change, even within minor versions. You can get a feel for the current API by checking out the examples. Here are a few quick highlights.

1. Shape inference

Models can be created with some dimensions unspecified. Missing dimensions are inferred when pre-trained weights are loaded or when training begins. This eliminates a common source of programmer error:

# Invalid network — shape mismatch
model = chain(ReLu(512, 748), ReLu(512, 784), Softmax(10))

# Leave the dimensions unspecified, and you can't be wrong.
model = chain(ReLu(512), ReLu(512), Softmax())

2. Operator overloading

The Model.define_operators() classmethod allows you to bind arbitrary binary functions to Python operators, for use in any Model instance. The method can (and should) be used as a context-manager, so that the overloading is limited to the immediate block. This allows concise and expressive model definition:

with Model.define_operators({'>>': chain}):
    model = ReLu(512) >> ReLu(512) >> Softmax()

The overloading is cleaned up at the end of the block. A fairly arbitrary zoo of functions is currently implemented. Some of the most useful:

  • chain(model1, model2): Compose two models f(x) and g(x) into a single model computing g(f(x)).

  • clone(model1, int): Create n copies of a model, each with distinct weights, and chain them together.

  • concatenate(model1, model2): Given two models with output dimensions (n,) and (m,), construct a model with output dimensions (m+n,).

  • add(model1, model2): add(f(x), g(x)) = f(x)+g(x)

  • make_tuple(model1, model2): Construct tuples of the outputs of two models, at the batch level. The backward pass expects to receive a tuple of gradients, which are routed through the appropriate model, and summed.

Putting these things together, here’s the sort of tagging model that Thinc is designed to make easy.

with Model.define_operators({'>>': chain, '**': clone, '|': concatenate}):
    model = (
        add_eol_markers('EOL')
        >> flatten
        >> memoize(
            CharLSTM(char_width)
            | (normalize >> str2int >> Embed(word_width)))
        >> ExtractWindow(nW=2)
        >> BatchNorm(ReLu(hidden_width)) ** 3
        >> Softmax()
    )

Not all of these pieces are implemented yet, but hopefully this shows where we’re going. The memoize function will be particularly important: in any batch of text, the common words will be very common. It’s therefore important to evaluate models such as the CharLSTM once per word type per minibatch, rather than once per token.
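
To make the idea concrete, here is a rough sketch of what a memoize wrapper could look like under the callback-based signature used above. It is not the planned implementation; it assumes the wrapped layer maps a list of items to an array with one row per item:

import numpy

def memoize(layer):
    def forward(items):
        # Collect the unique items and remember where each occurrence lives.
        uniq, index = [], {}
        for item in items:
            if item not in index:
                index[item] = len(uniq)
                uniq.append(item)
        positions = numpy.asarray([index[item] for item in items])
        # Evaluate the wrapped layer once per word type...
        uniq_out, backprop_uniq = layer(uniq)
        def backward(d_outputs, optimizer):
            # ...and sum the per-token gradients back onto the word types.
            d_uniq = numpy.zeros_like(uniq_out)
            numpy.add.at(d_uniq, positions, d_outputs)
            return backprop_uniq(d_uniq, optimizer)
        # Scatter the per-type outputs back out to every token position.
        return uniq_out[positions], backward
    return forward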

3. Callback-based backpropagation

Most neural network libraries use a computational graph abstraction. This takes the execution away from you, so that gradients can be computed automatically. Thinc follows a style more like the autograd library, but with larger operations. Usage is as follows:

def explicit_sgd_update(X, y):
    sgd = lambda weights, gradient: weights - gradient * 0.001
    yh, finish_update = model.begin_update(X, drop=0.2)
    finish_update(y-yh, sgd)

Separating the backpropagation into three parts like this has many advantages. The interface to all models is completely uniform — there is no distinction between the top-level model you use as a predictor and the internal models for the layers. We also make concurrency simple, by making the begin_update() step a pure function, and separating the accumulation of the gradient from the action of the optimizer.
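
For example, because begin_update() is pure, you can run the forward pass over several batches first and only apply the optimizer afterwards (a sketch; batches, model and optimizer are assumed to exist):

pending = []
for X, y in batches:
    yh, finish_update = model.begin_update(X, drop=0.2)
    pending.append((finish_update, y - yh))
# All forward passes are done; now accumulate the gradients and update the weights.
for finish_update, d_yh in pending:
    finish_update(d_yh, optimizer)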

4. Class annotations

To keep the class hierarchy shallow, Thinc uses class decorators to reuse code for layer definitions. Specifically, the following decorators are available (a small sketch of the general pattern follows the list):

  • describe.attributes(): Allows attributes to be specified by keyword argument. Used especially for dimensions and parameters.

  • describe.on_init(): Allows callbacks to be specified, which will be called at the end of __init__().

  • describe.on_data(): Allows callbacks to be specified, which will be called on Model.begin_training().
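
As a self-contained illustration of the class-decorator pattern (a hypothetical sketch, not the actual thinc.describe API):

def on_init(*callbacks):
    # Class decorator: run each callback at the end of the decorated
    # class's __init__, so unrelated layer classes can share setup logic.
    def decorator(cls):
        original_init = cls.__init__
        def __init__(self, *args, **kwargs):
            original_init(self, *args, **kwargs)
            for callback in callbacks:
                callback(self)
        cls.__init__ = __init__
        return cls
    return decorator

@on_init(lambda layer: print("created layer with n_out =", layer.n_out))
class Linear(object):
    def __init__(self, n_out):
        self.n_out = n_out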

🛠 Changelog

Version | Date | Description
v6.12.0 | 2018-10-15 | Wheels and separate GPU ops
v6.10.3 | 2018-07-21 | Python 3.7 support and dependency updates
v6.11.2 | 2018-05-21 | Improve GPU installation
v6.11.1 | 2018-05-20 | Support direct linkage to BLAS libraries
v6.11.0 | 2018-03-16 | n/a
v6.10.2 | 2017-12-06 | Efficiency improvements and bug fixes
v6.10.1 | 2017-11-15 | Fix GPU install and minor memory leak
v6.10.0 | 2017-10-28 | CPU efficiency improvements, refactoring
v6.9.0 | 2017-10-03 | Reorganize layers, bug fix to Layer Normalization
v6.8.2 | 2017-09-26 | Fix packaging of gpu_ops
v6.8.1 | 2017-08-23 | Fix Windows support
v6.8.0 | 2017-07-25 | SELU layer, attention, improved GPU/CPU compatibility
v6.7.3 | 2017-06-05 | Fix convolution on GPU
v6.7.2 | 2017-06-02 | Bug fixes to serialization
v6.7.1 | 2017-06-02 | Improve serialization
v6.7.0 | 2017-06-01 | Fixes to serialization, hash embeddings and flatten ops
v6.6.0 | 2017-05-14 | Improved GPU usage and examples
v6.5.2 | 2017-03-20 | n/a
v6.5.1 | 2017-03-20 | Improved linear class and Windows fix
v6.5.0 | 2017-03-11 | Supervised similarity, fancier embedding and improvements to linear model
v6.4.0 | 2017-02-15 | n/a
v6.3.0 | 2017-01-25 | Efficiency improvements, argument checking and error messaging
v6.2.0 | 2017-01-15 | Improve API and introduce overloaded operators
v6.1.3 | 2017-01-10 | More neural network functions and training continuation
v6.1.3 | 2017-01-09 | n/a
v6.1.2 | 2017-01-09 | n/a
v6.1.1 | 2017-01-09 | n/a
v6.1.0 | 2017-01-09 | n/a
v6.0.0 | 2016-12-31 | Add thinc.neural for NLP-oriented deep learning
