Practical Machine Learning for NLP

Project description

Thinc is the machine learning library powering spaCy. It features a battle-tested linear model designed for large sparse learning problems, and a flexible neural network model under development for spaCy v2.0.

Thinc is a practical toolkit for implementing models that follow the “Embed, encode, attend, predict” architecture. It’s designed to be easy to install, efficient for CPU usage and optimised for NLP and deep learning with text – in particular, hierarchically structured input and variable-length sequences.

🔮 Version 6.10 out now! Read the release notes here.

What’s where (as of v6.9.0)

Module | Description
thinc.v2v.Model | Base class.
thinc.v2v | Layers transforming vectors to vectors.
thinc.i2v | Layers embedding IDs to vectors.
thinc.t2v | Layers pooling tensors to vectors.
thinc.t2t | Layers transforming tensors to tensors (e.g. CNN, LSTM).
thinc.api | Higher-order functions, for building networks. Will be renamed.
thinc.extra | Datasets and utilities.
thinc.neural.ops | Container classes for mathematical operations. Will be reorganized.
thinc.linear.avgtron | Legacy efficient Averaged Perceptron implementation.

Development status

Thinc’s deep learning functionality is still under active development: APIs are unstable, and we’re not yet ready to provide usage support. However, if you’re already quite familiar with neural networks, there’s a lot here you might find interesting. Thinc’s conceptual model is quite different from TensorFlow’s. Thinc also implements some novel features, such as a small DSL for concisely wiring up models, embedding tables that support pre-computation and the hashing trick, dynamic batch sizes, a concatenation-based approach to variable-length sequences, and support for model averaging for the Adam solver (which performs very well).

No computational graph – just higher order functions

The central problem for a neural network implementation is this: during the forward pass, you compute results that will later be useful during the backward pass. How do you keep track of this arbitrary state, while making sure that layers can be cleanly composed?

Most libraries solve this problem by having you declare the forward computations, which are then compiled into a graph somewhere behind the scenes. Thinc doesn’t have a “computational graph”. Instead, we just use the call stack: the state from the forward pass is captured in the callbacks we return.

All nodes in the network have a simple signature:

f(inputs) -> {outputs, f(d_outputs)->d_inputs}

To make this less abstract, here’s a ReLU activation, following this signature:

def relu(inputs):
    mask = inputs > 0
    def backprop_relu(d_outputs, optimizer):
        return d_outputs * mask
    return inputs * mask, backprop_relu

When you call the relu function, you get back an output variable, and a callback. This lets you calculate a gradient using the output, and then pass it into the callback to perform the backward pass.

This signature makes it easy to build a complex network out of smaller pieces, using arbitrary higher-order functions you can write yourself. To make this clearer, we need a function for a weights layer. Usually this will be implemented as a class — but let’s continue using closures, to keep things concise, and to keep the simplicity of the interface explicit:

import numpy

def create_linear_layer(n_out, n_in):
    W = numpy.zeros((n_out, n_in))
    b = numpy.zeros((n_out, 1))

    def forward(X):
        # X has shape (n_in, batch_size); Y has shape (n_out, batch_size)
        Y = W @ X + b
        def backward(dY, optimizer):
            dX = W.T @ dY
            dW = numpy.einsum('ik,jk->ij', dY, X)  # sum of outer products over the batch
            db = dY.sum(axis=1, keepdims=True)     # sum over the batch dimension

            optimizer(W, dW)
            optimizer(b, db)

            return dX
        return Y, backward
    return forward

If we call Wb = create_linear_layer(5, 4), the variable Wb will be the forward() function, implemented inside the body of create_linear_layer(). The Wb instance will have access to the W and b variables defined in its outer scope. If we invoke create_linear_layer() again, we get a new instance, with its own internal state.

The Wb instance and the relu function have exactly the same signature. This makes it easy to write higher-order functions to compose them. The most obvious thing to do is chain them together:

def chain(*layers):
    def forward(X):
        backprops = []
        Y = X
        for layer in layers:
            Y, backprop = layer(Y)
            backprops.append(backprop)
        def backward(dY, optimizer):
            for backprop in reversed(backprops):
                dY = backprop(dY, optimizer)
            return dY
        return Y, backward
    return forward

We could now chain our linear layer together with the relu activation, to create a simple feed-forward network:

Wb1 = create_linear_layer(10, 5)
Wb2 = create_linear_layer(3, 10)

model = chain(Wb1, relu, Wb2)

X = numpy.random.uniform(size=(5, 4))

y, bp_y = model(X)

dY = y - truth
dX = bp_y(dY, optimizer)
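
In the snippet above, truth (the target output) and optimizer are assumed to be defined elsewhere. A minimal sketch of what they might look like, with a dummy target and plain in-place SGD:

# Hypothetical stand-ins for the undefined names in the example above
truth = numpy.zeros((3, 4))   # dummy target, same shape as the model output y

def optimizer(weights, gradient, learn_rate=0.001):
    # Vanilla SGD: update the parameters in place
    weights -= learn_rate * gradient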

This conceptual model makes Thinc very flexible. The trade-off is that Thinc is less convenient and efficient for workloads that fit exactly into what TensorFlow etc. are designed for. If your graph really is static, and your inputs are homogeneous in size and shape, Keras will likely be faster and simpler. But if you want to pass normal Python objects through your network, or handle sequences and recursions of arbitrary length or complexity, you might find Thinc’s design a better fit for your problem.

Quickstart

Thinc should install cleanly with both pip and conda, for Python 2.7+ and 3.5+, on Linux, macOS/OS X and Windows. Its only system dependencies are a compiler tool-chain (e.g. build-essential) and the Python development headers (e.g. python-dev).

pip install thinc

For GPU support, we’re grateful to use the work of Chainer’s cupy module, which provides a numpy-compatible interface for GPU arrays. However, installing Chainer when no GPU is available currently causes an error. We therefore do not list Chainer as an explicit dependency — so building Thinc for GPU requires some extra steps:

export CUDA_HOME=/usr/local/cuda-8.0 # Or wherever your CUDA is
export PATH=$PATH:$CUDA_HOME/bin
pip install chainer
python -c "import cupy; assert cupy" # Check it installed
pip install thinc
python -c "import thinc.neural.gpu_ops" # Check the GPU ops were built

The rest of this section describes how to build Thinc from source. If you have Fabric installed, you can use the shortcut:

git clone https://github.com/explosion/thinc
cd thinc
fab clean env make test

You can then run the examples as follows:

fab eg.mnist
fab eg.basic_tagger
fab eg.cnn_tagger

Otherwise, you can build and test explicitly with:

git clone https://github.com/explosion/thinc
cd thinc

virtualenv .env
source .env/bin/activate

pip install -r requirements.txt
python setup.py build_ext --inplace
py.test thinc/

And then run the examples as follows:

python examples/mnist.py
python examples/basic_tagger.py
python examples/cnn_tagger.py

Usage

The Neural Network API is still subject to change, even within minor versions. You can get a feel for the current API by checking out the examples. Here are a few quick highlights.

1. Shape inference

Models can be created with some dimensions unspecified. Missing dimensions are inferred when pre-trained weights are loaded or when training begins. This eliminates a common source of programmer error:

# Invalid network — shape mismatch
model = chain(ReLu(512, 748), ReLu(512, 784), Softmax(10))

# Leave the dimensions unspecified, and you can't be wrong.
model = chain(ReLu(512), ReLu(512), Softmax())
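
The missing dimensions are filled in once the model first sees data. A rough sketch of what that looks like in practice (train_X and train_y are placeholders for your own data, and the exact begin_training() arguments may differ between versions):

# Dimensions are inferred when training begins
model = chain(ReLu(512), ReLu(512), Softmax())
model.begin_training(train_X[:100], train_y[:100])
# Each layer's input/output sizes are now set from the sample's shapes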

2. Operator overloading

The Model.define_operators() classmethod allows you to bind arbitrary binary functions to Python operators, for use in any Model instance. The method can (and should) be used as a context-manager, so that the overloading is limited to the immediate block. This allows concise and expressive model definition:

with Model.define_operators({'>>': chain}):
    model = ReLu(512) >> ReLu(512) >> Softmax()

The overloading is cleaned up at the end of the block. A fairly arbitrary zoo of functions is currently implemented. Some of the most useful (a short composition sketch follows the list):

  • chain(model1, model2): Compose two models f(x) and g(x) into a single model computing g(f(x)).

  • clone(model1, int): Create n copies of a model, each with distinct weights, and chain them together.

  • concatenate(model1, model2): Given two models with output dimensions (n,) and (m,), construct a model with output dimensions (m+n,).

  • add(model1, model2): add(f(x), g(x)) = f(x)+g(x)

  • make_tuple(model1, model2): Construct tuples of the outputs of two models, at the batch level. The backward pass expects to receive a tuple of gradients, which are routed through the appropriate model, and summed.
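
As a small sketch of how these compose (the layer sizes here are arbitrary and purely illustrative), the operator form and the plain-function form below build the same model:

with Model.define_operators({'>>': chain, '|': concatenate}):
    model = (ReLu(128) | ReLu(128)) >> Softmax()

# The same model, written without operator overloading:
model = chain(concatenate(ReLu(128), ReLu(128)), Softmax())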

Putting these things together, here’s the sort of tagging model that Thinc is designed to make easy.

with Model.define_operators({'>>': chain, '**': clone, '|': concatenate}):
    model = (
        add_eol_markers('EOL')
        >> flatten
        >> memoize(
            CharLSTM(char_width)
            | (normalize >> str2int >> Embed(word_width)))
        >> ExtractWindow(nW=2)
        >> BatchNorm(ReLu(hidden_width)) ** 3
        >> Softmax()
    )

Not all of these pieces are implemented yet, but hopefully this shows where we’re going. The memoize function will be particularly important: in any batch of text, the common words will be very common. It’s therefore important to evaluate models such as the CharLSTM once per word type per minibatch, rather than once per token.

3. Callback-based backpropagation

Most neural network libraries use a computational graph abstraction. This takes the execution away from you, so that gradients can be computed automatically. Thinc follows a style more like the autograd library, but with larger operations. Usage is as follows:

def explicit_sgd_update(X, y):
    sgd = lambda weights, gradient: weights - gradient * 0.001
    yh, finish_update = model.begin_update(X, drop=0.2)
    finish_update(y-yh, sgd)

Separating the backpropagation into three parts like this has many advantages. The interface to all models is completely uniform — there is no distinction between the top-level model you use as a predictor and the internal models for the layers. We also make concurrency simple, by making the begin_update() step a pure function, and separating the accumulation of the gradient from the action of the optimizer.
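
For example, because the weight update is just a callable passed into the backward pass, you can substitute a gradient accumulator for the optimizer and apply the pending updates later. A rough sketch of that pattern (X and y are placeholders; this only uses the callback contract shown above, not a dedicated Thinc API):

grads = []

def accumulate(weights, gradient):
    # Record the pending update instead of applying it immediately
    grads.append((weights, gradient))

yh, finish_update = model.begin_update(X, drop=0.2)
finish_update(y - yh, accumulate)

# Later: apply all accumulated gradients with plain SGD
for weights, gradient in grads:
    weights -= 0.001 * gradient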

4. Class annotations

To keep the class hierarchy shallow, Thinc uses class decorators to reuse code for layer definitions. Specifically, the following decorators are available (a minimal sketch of the pattern follows the list):

  • describe.attributes(): Allows attributes to be specified by keyword argument. Used especially for dimensions and parameters.

  • describe.on_init(): Allows callbacks to be specified, which will be called at the end of the __init__ method.

  • describe.on_data(): Allows callbacks to be specified, which will be called on Model.begin_training().
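
To make the mechanism concrete, here is a minimal, self-contained sketch of the general pattern. The names below are illustrative only and are not Thinc's actual describe API:

def on_init(*callbacks):
    # Class decorator: run the given callbacks at the end of __init__
    def decorator(cls):
        original_init = cls.__init__
        def __init__(self, *args, **kwargs):
            original_init(self, *args, **kwargs)
            for callback in callbacks:
                callback(self)
        cls.__init__ = __init__
        return cls
    return decorator

@on_init(lambda layer: print("Initialised layer with nO =", layer.nO))
class ToyLayer(object):
    def __init__(self, nO=None):
        self.nO = nO

ToyLayer(nO=128)  # prints: Initialised layer with nO = 128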

🛠 Changelog

Version | Date | Description
v6.10.1 | 2017-11-15 | Fix GPU install and minor memory leak
v6.10.0 | 2017-10-28 | CPU efficiency improvements, refactoring
v6.9.0 | 2017-10-03 | Reorganize layers, bug fix to Layer Normalization
v6.8.2 | 2017-09-26 | Fix packaging of gpu_ops
v6.8.1 | 2017-08-23 | Fix Windows support
v6.8.0 | 2017-07-25 | SELU layer, attention, improved GPU/CPU compatibility
v6.7.3 | 2017-06-05 | Fix convolution on GPU
v6.7.2 | 2017-06-02 | Bug fixes to serialization
v6.7.1 | 2017-06-02 | Improve serialization
v6.7.0 | 2017-06-01 | Fixes to serialization, hash embeddings and flatten ops
v6.6.0 | 2017-05-14 | Improved GPU usage and examples
v6.5.2 | 2017-03-20 | n/a
v6.5.1 | 2017-03-20 | Improved linear class and Windows fix
v6.5.0 | 2017-03-11 | Supervised similarity, fancier embedding and improvements to linear model
v6.4.0 | 2017-02-15 | n/a
v6.3.0 | 2017-01-25 | Efficiency improvements, argument checking and error messaging
v6.2.0 | 2017-01-15 | Improve API and introduce overloaded operators
v6.1.3 | 2017-01-10 | More neural network functions and training continuation
v6.1.3 | 2017-01-09 | n/a
v6.1.2 | 2017-01-09 | n/a
v6.1.1 | 2017-01-09 | n/a
v6.1.0 | 2017-01-09 | n/a
v6.0.0 | 2016-12-31 | Add thinc.neural for NLP-oriented deep learning

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

thinc-6.12.1.tar.gz (1.2 MB): Source

Built Distributions

thinc-6.12.1-cp37-cp37m-win_amd64.whl (1.8 MB): CPython 3.7m, Windows x86-64
thinc-6.12.1-cp37-cp37m-manylinux1_x86_64.whl (1.9 MB): CPython 3.7m
thinc-6.12.1-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (2.6 MB): CPython 3.7m, macOS 10.6+
thinc-6.12.1-cp36-cp36m-win_amd64.whl (1.8 MB): CPython 3.6m, Windows x86-64
thinc-6.12.1-cp36-cp36m-manylinux1_x86_64.whl (1.9 MB): CPython 3.6m
thinc-6.12.1-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (2.7 MB): CPython 3.6m, macOS 10.6+
thinc-6.12.1-cp35-cp35m-win_amd64.whl (1.8 MB): CPython 3.5m, Windows x86-64
thinc-6.12.1-cp35-cp35m-manylinux1_x86_64.whl (1.9 MB): CPython 3.5m
thinc-6.12.1-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (2.6 MB): CPython 3.5m, macOS 10.6+
thinc-6.12.1-cp27-cp27m-manylinux1_x86_64.whl (1.9 MB): CPython 2.7m
thinc-6.12.1-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (2.7 MB): CPython 2.7m, macOS 10.6+

