fasttext Python bindings, fixed numpy 2 compatibility

Project description

fasttext-numpy2

fasttext with one line changed to support numpy 2.

install

pip install fasttext-numpy2

or

# clone and cd into
pip install -e .

build

python -m build

build for pypi

see pypibuild.md

notes

All credit goes to the original authors.

fastText

fastText is a library for efficient learning of word representations and sentence classification.

In this document, we present how to use fastText in Python.

Requirements

fastText builds on modern Mac OS and Linux distributions. Since it uses C++11 features, it requires a compiler with good C++11 support. You will need Python (version 2.7 or ≥ 3.4), NumPy, SciPy, and pybind11.

Installation

To install the latest release, you can run:

$ pip install fasttext

or, to get the latest development version of fasttext, you can install from our GitHub repository:

$ git clone https://github.com/facebookresearch/fastText.git
$ cd fastText
$ sudo pip install .
$ # or :
$ sudo python setup.py install

Usage overview

Word representation model

In order to learn word vectors, as described here, we can use the fasttext.train_unsupervised function like this:

import fasttext

# Skipgram model :
model = fasttext.train_unsupervised('data.txt', model='skipgram')

# or, cbow model :
model = fasttext.train_unsupervised('data.txt', model='cbow')

where data.txt is a training file containing UTF-8 encoded text.

The returned model object represents your learned model, and you can use it to retrieve information.

print(model.words)   # list of words in dictionary
print(model['king']) # get the vector of the word 'king'

Saving and loading a model object

You can save your trained model object by calling the function save_model.

model.save_model("model_filename.bin")

and retrieve it later with the load_model function:

model = fasttext.load_model("model_filename.bin")

For more information about word representation usage of fasttext, you can refer to our word representations tutorial.

Text classification model

In order to train a text classifier using the method described here, we can use the fasttext.train_supervised function like this:

import fasttext

model = fasttext.train_supervised('data.train.txt')

where data.train.txt is a text file containing one training sentence per line along with its labels. By default, we assume that labels are words prefixed by the string __label__.
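To make that format concrete, here is a small pure-Python sketch (illustration only — parse_line is a hypothetical helper, not part of the fasttext API) of how lines and labels are laid out:

```python
# Hypothetical sketch of the training-file format expected by
# train_supervised: one example per line, labels prefixed with __label__.
lines = [
    "__label__baking Which baking dish is best to bake a banana bread ?",
    "__label__kitchen __label__safety Why not put knives in the dishwasher ?",
]

def parse_line(line, prefix="__label__"):
    """Split one training line into (labels, text), mimicking how
    fastText separates the label tokens from the word tokens."""
    tokens = line.split()
    labels = [t for t in tokens if t.startswith(prefix)]
    words = [t for t in tokens if not t.startswith(prefix)]
    return labels, " ".join(words)

labels, text = parse_line(lines[1])
# labels -> ['__label__kitchen', '__label__safety']
```

As the second line shows, a single example may carry several labels.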

Once the model is trained, we can retrieve the list of words and labels:

print(model.words)
print(model.labels)

To evaluate our model by computing the precision at 1 (P@1) and the recall on a test set, we use the test function:

def print_results(N, p, r):
    print("N\t" + str(N))
    print("P@{}\t{:.3f}".format(1, p))
    print("R@{}\t{:.3f}".format(1, r))

print_results(*model.test('test.txt'))

We can also predict labels for a specific text:

model.predict("Which baking dish is best to bake a banana bread ?")

By default, predict returns only one label: the one with the highest probability. You can also request more than one label by specifying the parameter k:

model.predict("Which baking dish is best to bake a banana bread ?", k=3)

If you want to predict more than one sentence, you can pass an array of strings:

model.predict(["Which baking dish is best to bake a banana bread ?", "Why not put knives in the dishwasher?"], k=3)

Of course, you can also save and load a model to/from a file as in the word representation usage.

For more information about text classification usage of fasttext, you can refer to our text classification tutorial.

Compress model files with quantization

When you want to save a supervised model file, fastText can compress it to produce a much smaller model file at the cost of only a little performance.

# with the previously trained `model` object, call:
model.quantize(input='data.train.txt', retrain=True)

# then display results and save the new model:
print_results(*model.test(valid_data))
model.save_model("model_filename.ftz")

model_filename.ftz will have a much smaller size than model_filename.bin.

For further reading on quantization, you can refer to this paragraph from our blog post.

IMPORTANT: Preprocessing data / encoding conventions

In general, it is important to properly preprocess your data. In particular, our example scripts in the root folder do this.

fastText assumes UTF-8 encoded text. All text must be unicode for Python 2 and str for Python 3. The passed text will be encoded as UTF-8 by pybind11 before being passed to the fastText C++ library. This means it is important to use UTF-8 encoded text when building a model. On Unix-like systems you can convert text using iconv.
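If iconv is not available, the same conversion can be done with Python's standard library alone; this sketch assumes the source file happens to be Latin-1 encoded:

```python
# Hypothetical sketch: converting Latin-1 bytes to the UTF-8 that
# fastText expects, using only the standard library.
raw = "café au lait".encode("latin-1")   # pretend these bytes came from disk
text = raw.decode("latin-1")             # decode with the real source encoding
utf8_bytes = text.encode("utf-8")        # re-encode as UTF-8 for fastText
assert utf8_bytes.decode("utf-8") == "café au lait"
```

The key point is to decode with the true source encoding first; decoding Latin-1 bytes as UTF-8 directly would fail or produce mojibake.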

fastText will tokenize (split text into pieces) based on the following ASCII characters (bytes). In particular, it is not aware of UTF-8 whitespace. We advise the user to convert UTF-8 whitespace / word boundaries into one of the following symbols as appropriate.

  • space

  • tab

  • vertical tab

  • carriage return

  • formfeed

  • the null character

The newline character is used to delimit lines of text. In particular, the EOS token is appended to a line of text if a newline character is encountered. The only exception is if the number of tokens exceeds the MAX_LINE_SIZE constant as defined in the Dictionary header. This means that if you have text that is not separated by newlines, such as the fil9 dataset, it will be broken into chunks of MAX_LINE_SIZE tokens and the EOS token is not appended.
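As an illustration only (this is not the library's actual tokenizer), the splitting rule above can be sketched in pure Python:

```python
# Hypothetical sketch of fastText's whitespace splitting: tokens are
# separated only by these ASCII characters. (The newline character
# instead delimits lines and is handled separately.)
SEPARATORS = {" ", "\t", "\v", "\r", "\f", "\0"}

def split_tokens(text):
    tokens, current = [], []
    for ch in text:
        if ch in SEPARATORS:
            if current:
                tokens.append("".join(current))
            current = []
        else:
            current.append(ch)
    if current:
        tokens.append("".join(current))
    return tokens

print(split_tokens("hello\tworld  test"))  # ['hello', 'world', 'test']
```

Note that a non-breaking space (U+00A0), being UTF-8 whitespace rather than one of the ASCII separators above, would be kept inside a token.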

The length of a token is the number of UTF-8 characters, determined by using the leading two bits of a byte to identify the subsequent bytes of a multi-byte sequence. Knowing this is especially important when choosing the minimum and maximum lengths of subwords. Further, the EOS token (as specified in the Dictionary header) is considered a character and will not be broken into subwords.
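Counting UTF-8 characters this way — a byte whose leading two bits are 10 is a continuation byte and does not start a new character — can be sketched as:

```python
def utf8_len(data: bytes) -> int:
    # Continuation bytes in UTF-8 have the bit pattern 10xxxxxx;
    # every other byte starts a new character, so we count those.
    return sum(1 for b in data if (b & 0xC0) != 0x80)

assert utf8_len("héllo".encode("utf-8")) == 5   # é is 2 bytes but 1 character
assert len("héllo".encode("utf-8")) == 6        # byte length differs
```

So a word like "héllo" counts as 5 characters for subword extraction, even though it occupies 6 bytes.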

More examples

To gain a better understanding of fastText models, please consider the main README and in particular the tutorials on our website.

You can find further python examples in the doc folder.

As with any package, you can get help on any Python function using the help function.

For example:

>>> import fasttext
>>> help(fasttext.FastText)

Help on module fasttext.FastText in fasttext:

NAME
    fasttext.FastText

DESCRIPTION
    # Copyright (c) 2017-present, Facebook, Inc.
    # All rights reserved.
    #
    # This source code is licensed under the MIT license found in the
    # LICENSE file in the root directory of this source tree.

FUNCTIONS
    load_model(path)
        Load a model given a filepath and return a model object.

    tokenize(text)
        Given a string of text, tokenize it and return a list of tokens
[...]

API

train_unsupervised parameters

input             # training file path (required)
model             # unsupervised fasttext model {cbow, skipgram} [skipgram]
lr                # learning rate [0.05]
dim               # size of word vectors [100]
ws                # size of the context window [5]
epoch             # number of epochs [5]
minCount          # minimal number of word occurrences [5]
minn              # min length of char ngram [3]
maxn              # max length of char ngram [6]
neg               # number of negatives sampled [5]
wordNgrams        # max length of word ngram [1]
loss              # loss function {ns, hs, softmax, ova} [ns]
bucket            # number of buckets [2000000]
thread            # number of threads [number of cpus]
lrUpdateRate      # change the rate of updates for the learning rate [100]
t                 # sampling threshold [0.0001]
verbose           # verbose [2]

train_supervised parameters

input             # training file path (required)
lr                # learning rate [0.1]
dim               # size of word vectors [100]
ws                # size of the context window [5]
epoch             # number of epochs [5]
minCount          # minimal number of word occurrences [1]
minCountLabel     # minimal number of label occurrences [1]
minn              # min length of char ngram [0]
maxn              # max length of char ngram [0]
neg               # number of negatives sampled [5]
wordNgrams        # max length of word ngram [1]
loss              # loss function {ns, hs, softmax, ova} [softmax]
bucket            # number of buckets [2000000]
thread            # number of threads [number of cpus]
lrUpdateRate      # change the rate of updates for the learning rate [100]
t                 # sampling threshold [0.0001]
label             # label prefix ['__label__']
verbose           # verbose [2]
pretrainedVectors # pretrained word vectors (.vec file) for supervised learning []

model object

The train_supervised, train_unsupervised and load_model functions return an instance of the _FastText class, which we generally call the model object.

This object exposes these training arguments as properties: lr, dim, ws, epoch, minCount, minCountLabel, minn, maxn, neg, wordNgrams, loss, bucket, thread, lrUpdateRate, t, label, verbose, pretrainedVectors. So model.wordNgrams will give you the max length of word ngrams used for training this model.

In addition, the object exposes several functions:

get_dimension           # Get the dimension (size) of a lookup vector (hidden layer).
                        # This is equivalent to the `dim` property.
get_input_vector        # Given an index, get the corresponding vector of the Input Matrix.
get_input_matrix        # Get a copy of the full input matrix of a Model.
get_labels              # Get the entire list of labels of the dictionary.
                        # This is equivalent to the `labels` property.
get_line                # Split a line of text into words and labels.
get_output_matrix       # Get a copy of the full output matrix of a Model.
get_sentence_vector     # Given a string, get a single vector representation. This function
                        # assumes to be given a single line of text. We split words on
                        # whitespace (space, newline, tab, vertical tab) and the control
                        # characters carriage return, formfeed and the null character.
get_subword_id          # Given a subword, return the index (within the input matrix) it hashes to.
get_subwords            # Given a word, get the subwords and their indices.
get_word_id             # Given a word, get the word id within the dictionary.
get_word_vector         # Get the vector representation of a word.
get_words               # Get the entire list of words of the dictionary.
                        # This is equivalent to the `words` property.
is_quantized            # Whether the model has been quantized.
predict                 # Given a string, get a list of labels and a list of corresponding probabilities.
quantize                # Quantize the model, reducing its size and memory footprint.
save_model              # Save the model to the given path.
test                    # Evaluate the supervised model using the file given by path.
test_label              # Return the precision and recall score for each label.

The properties words and labels return the words and labels from the dictionary:

model.words         # equivalent to model.get_words()
model.labels        # equivalent to model.get_labels()

The object overrides the __getitem__ and __contains__ functions in order to return the representation of a word and to check whether a word is in the vocabulary.

model['king']       # equivalent to model.get_word_vector('king')
'king' in model     # equivalent to `'king' in model.get_words()`


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

fasttext-numpy2-0.10.4.tar.gz (68.0 kB)

Uploaded Source

Built Distributions

fasttext_numpy2-0.10.4-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (4.6 MB)

Uploaded CPython 3.13 manylinux: glibc 2.27+ x86-64 manylinux: glibc 2.28+ x86-64

fasttext_numpy2-0.10.4-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)

Uploaded CPython 3.13 manylinux: glibc 2.17+ x86-64

fasttext_numpy2-0.10.4-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (4.6 MB)

Uploaded CPython 3.12 manylinux: glibc 2.27+ x86-64 manylinux: glibc 2.28+ x86-64

fasttext_numpy2-0.10.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)

Uploaded CPython 3.12 manylinux: glibc 2.17+ x86-64

fasttext_numpy2-0.10.4-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (4.6 MB)

Uploaded CPython 3.11 manylinux: glibc 2.27+ x86-64 manylinux: glibc 2.28+ x86-64

fasttext_numpy2-0.10.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)

Uploaded CPython 3.11 manylinux: glibc 2.17+ x86-64

fasttext_numpy2-0.10.4-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (4.5 MB)

Uploaded CPython 3.10 manylinux: glibc 2.27+ x86-64 manylinux: glibc 2.28+ x86-64

fasttext_numpy2-0.10.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.6 MB)

Uploaded CPython 3.10 manylinux: glibc 2.17+ x86-64

fasttext_numpy2-0.10.4-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (4.5 MB)

Uploaded CPython 3.9 manylinux: glibc 2.27+ x86-64 manylinux: glibc 2.28+ x86-64

fasttext_numpy2-0.10.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.6 MB)

Uploaded CPython 3.9 manylinux: glibc 2.17+ x86-64

fasttext_numpy2-0.10.4-cp38-cp38-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (4.5 MB)

Uploaded CPython 3.8 manylinux: glibc 2.27+ x86-64 manylinux: glibc 2.28+ x86-64

fasttext_numpy2-0.10.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)

Uploaded CPython 3.8 manylinux: glibc 2.17+ x86-64

fasttext_numpy2-0.10.4-cp37-cp37m-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (4.7 MB)

Uploaded CPython 3.7m manylinux: glibc 2.27+ x86-64 manylinux: glibc 2.28+ x86-64

fasttext_numpy2-0.10.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.8 MB)

Uploaded CPython 3.7m manylinux: glibc 2.17+ x86-64

fasttext_numpy2-0.10.4-cp36-cp36m-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (4.7 MB)

Uploaded CPython 3.6m manylinux: glibc 2.27+ x86-64 manylinux: glibc 2.28+ x86-64

fasttext_numpy2-0.10.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)

Uploaded CPython 3.6m manylinux: glibc 2.17+ x86-64

File details

Details for the file fasttext-numpy2-0.10.4.tar.gz.

File metadata

  • Download URL: fasttext-numpy2-0.10.4.tar.gz
  • Upload date:
  • Size: 68.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.15

File hashes

Hashes for fasttext-numpy2-0.10.4.tar.gz
Algorithm Hash digest
SHA256 156e84cf2c7db95b24897884284be52c1038fe2b1d0bd9f21bcaf363d2542825
MD5 c1006b550386556ab0f17e05bb8ae0a9
BLAKE2b-256 38c8515a5a2b3ad37f61e04b600d2ce0c540409a86c95cd445e70510678f8991

See more details on using hashes here.

File details

Details for the file fasttext_numpy2-0.10.4-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 c4719752a197f1d76f9bfb1343d6c63346f56ec037ba5fd85d14f024740402cb
MD5 902376a9cb9dc18e7ddf6e303856fc77
BLAKE2b-256 ce1f3114fc06342225e724ffcd7d9055f1f2a382fff3653ec20034c823ea323e

File details

Details for the file fasttext_numpy2-0.10.4-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 5f65f7c96aa6a66fed58994ee0d45368f07fa3de5080d2f84f9f9b5ed7bca380
MD5 0f2189dd5c8385527131981d8fed66cb
BLAKE2b-256 14b8e682c5f2ee0600ba48e3e848e16f1571773b9e6f48944929de1b51763b3f

File details

Details for the file fasttext_numpy2-0.10.4-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 b781303c8c24324e898b79c5a79f09d2d590e2a87f15c583557dc3ac94a33652
MD5 b0e74a0b42f8d7e445712259e8a7ff3b
BLAKE2b-256 e4a9118ff7f2b38f12794ae04091b684b5f0718db729840be5581df527d419b8

File details

Details for the file fasttext_numpy2-0.10.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 0ba8a684e31f188a5b6dafb17833d5c9890656164a43e95b8c865ce3439a5ff8
MD5 dbdcf183b89830c893f5430ca1d020eb
BLAKE2b-256 af96eb4e3b4798695cb25987dec4b1fc060bda4408ff67153fe586def88e0f1f

File details

Details for the file fasttext_numpy2-0.10.4-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 3bcc2e0093315ecb6886d6b7c9abc9e2e3c51e0935bbce4038c4705f9b73a25d
MD5 f8bfe907af73b1a5a44ff479e597b535
BLAKE2b-256 482b308c4422f12b0eef0843f1813147448c5c817be8e61c0efd89daf30f5bb0

File details

Details for the file fasttext_numpy2-0.10.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 4c295f1f4d0b084b9274036ba3e254aa06fec1ac6a8c72f7984cb1bc1f60fb97
MD5 14b94de9234057424ff7eaeb193a1d89
BLAKE2b-256 efc40287b3d6a5f9dca6c8d1c3445522bd8164816a4ec08fb6bfaa674c8ad125

File details

Details for the file fasttext_numpy2-0.10.4-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 b78327f9ad9f0d0f425c646f9bf6c3750642e4d84a134a6f3a1ca236850a0ac0
MD5 142bafa344029b0a28e3f85d800d114c
BLAKE2b-256 afc3951a2facbbbe8b1707853ba6bd952e4fe0f4ee1e6a45e2e9f29f579b8c7a

File details

Details for the file fasttext_numpy2-0.10.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 358b09a612d4c803b2e60bbaaa5e004693777927ce3700d21167f5da7b1c1b80
MD5 5a86fc3453ce2174dafcf44de8dd2a8b
BLAKE2b-256 d5fe0d0e19ebffce8ec0f431c653e53313b2592173f213f31a67ebc8d09dd7fd

File details

Details for the file fasttext_numpy2-0.10.4-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 41cfa07bad0a60bf2a0a0139c125a102b1d6101f46841586f2a1e879151bcc61
MD5 4d67746cd1311c2808fe735bf059a163
BLAKE2b-256 7cd9f85053bff9631c4a90faee34902361bc291808e11a58ffd8a9ca27198c51

File details

Details for the file fasttext_numpy2-0.10.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 010cd3d59f8cafc32112f999e37bc13c75fdf7f690d1d3a7fb5eb132d551baac
MD5 ab703e1747f57c7f24249fc5691576fa
BLAKE2b-256 ee81ca4c33e4248948401a4e0b2121f70fcabe988922c54368a93943b82f008b

File details

Details for the file fasttext_numpy2-0.10.4-cp38-cp38-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp38-cp38-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 be188cc52108aad32b15c46b8d53122cf3d1462db9dcd94897533d8dd6eedc44
MD5 f5c1967dd47412066116d861b57fb2e0
BLAKE2b-256 90dbd0b2d561b8ec9bad5d9307e21ad598c304f75a0a2a0615f1e811907b6fa1

File details

Details for the file fasttext_numpy2-0.10.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 7e6ce53c3c733034d0ac94211cac055ef2cd8e4eb237a7f8d352fae9ce5c2eca
MD5 a73d7f38d4848803948b4d4062ade6f5
BLAKE2b-256 7069c8fae904fec6a5deef238583d3a15618b04184a4c1844107db6a333d0c21

File details

Details for the file fasttext_numpy2-0.10.4-cp37-cp37m-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp37-cp37m-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 193f708baa15fd99c9f9094022626c1793ea70e0eccb8ea94107d5099d2b681f
MD5 6f8bb17068d0403858742c7fb2a5b994
BLAKE2b-256 8008b2a23f0069f1d605504ef3cdd2394ef80afd067fe65698b5393466d42148

File details

Details for the file fasttext_numpy2-0.10.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 6d5fe255ec61a96f1e019ce472b88fe669cd7262dea06d8bdb8e193f91edb06c
MD5 b2dd6079def682d85d0755e83a292735
BLAKE2b-256 9b7cab17bffc73cee538e592c14909048ad93fd8a82c74e050ca142c9c3ad12f

File details

Details for the file fasttext_numpy2-0.10.4-cp36-cp36m-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp36-cp36m-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 0edc85a3ca85d4f0170a84e9ec2bfdf606e2e490c107bea95291051cc80335d9
MD5 78627b64d60ba20c741fe131aa8f81a4
BLAKE2b-256 cc9e6ef7e70b05a3962598d16e847584f483d9920ef673c36f68894af3552e80

File details

Details for the file fasttext_numpy2-0.10.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for fasttext_numpy2-0.10.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 4258e8d7d9ac20c4fcb26a6382b6c0ba75c1e3377cf8eef6718dab99066ce677
MD5 43372b41515f4b72248cdd1de9c08fcd
BLAKE2b-256 c15e7fce71d32c175587f9b7022b7aeaf3d92b4d61d6cd678301048065327f0d
