
A library for running inference on a DeepSpeech model

Project description

Project DeepSpeech


Project DeepSpeech is an open-source Speech-To-Text engine that uses a model trained by machine learning techniques, based on Baidu's Deep Speech research paper. Project DeepSpeech uses Google's TensorFlow to make the implementation easier.

Usage

Pre-built binaries for performing inference with a trained model can be installed with pip3. Proper setup using a virtual environment is recommended; you can find that documented below.

A pre-trained English model is available for use, and can be downloaded using the instructions below.

Once everything is installed, you can use the deepspeech binary to do speech-to-text on short (approximately 5-second) audio files (currently only 16-bit, 16 kHz, mono WAVE files are supported in the Python client):

pip3 install deepspeech
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav

Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux (the real-time factor on a GeForce GTX 1070 is about 0.44; see the release notes to find out which GPUs are supported). This is done by instead installing the GPU-specific package:

pip3 install deepspeech-gpu
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav

See the output of deepspeech -h for more information on the use of deepspeech. (If you experience problems running deepspeech, please check the required runtime dependencies.)
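
As noted above, the Python client currently expects 16-bit, 16 kHz, mono WAVE input. If you are unsure whether a file qualifies, a quick check with Python's standard wave module looks roughly like the following sketch (illustrative only; the file name is a placeholder):

import wave

# Check that a WAVE file matches what the Python client expects:
# 16-bit samples (2 bytes), a 16 kHz sample rate and a single (mono) channel.
with wave.open('my_audio_file.wav', 'rb') as f:
    ok = (f.getsampwidth() == 2
          and f.getframerate() == 16000
          and f.getnchannels() == 1)

print('ready for deepspeech' if ok else 'needs conversion first (e.g. with sox)')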


Prerequisites

Getting the code

Install Git Large File Storage, either manually or through a package like git-lfs if available on your system. Then clone the DeepSpeech repository normally:

git clone https://github.com/mozilla/DeepSpeech

Getting the pre-trained model

If you want to use the pre-trained English model for performing speech-to-text, you can download it (along with other important inference material) from the DeepSpeech releases page. Alternatively, you can run the following command to download and unzip the files in your current directory:

wget -O - https://github.com/mozilla/DeepSpeech/releases/download/v0.3.0/deepspeech-0.3.0-models.tar.gz | tar xvfz -

Using the model

There are three ways to use DeepSpeech inference:

  • Using the Python package
  • Using the command-line client
  • Using the Node.JS package

Using the Python package

Pre-built binaries for performing inference with a trained model can be installed with pip3. You can then use the deepspeech binary to do speech-to-text on an audio file.

For the Python bindings, it is highly recommended that you perform the installation within a Python 3.5 or later virtual environment. You can find more information about those in this documentation. We will continue under the assumption that you already have your system properly set up to create new virtual environments.

Create a DeepSpeech virtual environment

Creating a virtual environment will create a directory containing a python3 binary and everything needed to run deepspeech. You can use whatever directory you want. For the purposes of this documentation, we will rely on $HOME/tmp/deepspeech-venv. You can create it using this command:

$ virtualenv -p python3 $HOME/tmp/deepspeech-venv/

Once this command completes successfully, the environment will be ready to be activated.

Activating the environment

Each time you need to work with DeepSpeech, you have to activate (load) this virtual environment. This is done with this simple command:

$ source $HOME/tmp/deepspeech-venv/bin/activate

Installing DeepSpeech Python bindings

Once your environment has been set up and loaded, you can use pip3 to manage packages locally. On a fresh setup of the virtualenv, you will have to install the DeepSpeech wheel. You can check whether it is already installed by looking at the output of pip3 list. To perform the installation, just issue:

$ pip3 install deepspeech

If it is already installed, you can also update it:

$ pip3 install --upgrade deepspeech

Alternatively, if you have a supported NVIDIA GPU on Linux (see the release notes to find out which GPUs are supported), you can install the GPU-specific package as follows:

$ pip3 install deepspeech-gpu

or update it as follows:

$ pip3 install --upgrade deepspeech-gpu

In both cases, pip3 should take care of installing all the required dependencies. Once it is done, you should be able to call the sample binary by running deepspeech on your command line.

Note: the following command assumes you downloaded the pre-trained model.

deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav

The last two arguments are optional, and represent a language model.

See client.py for an example of how to use the package programmatically.
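
For reference, programmatic use of the bindings looks roughly like the sketch below. It is modeled on client.py and the 0.3.x Python API; the decoder constants (N_FEATURES, N_CONTEXT, BEAM_WIDTH, LM_WEIGHT, VALID_WORD_COUNT_WEIGHT) are placeholder values, so check client.py for the exact values and signatures shipped with your release:

import wave

import numpy as np
from deepspeech import Model

# Constants mirroring the reference client; treat these values as placeholders.
N_FEATURES = 26
N_CONTEXT = 9
BEAM_WIDTH = 500
LM_WEIGHT = 1.50
VALID_WORD_COUNT_WEIGHT = 2.10

# Load the acoustic model and alphabet.
ds = Model('models/output_graph.pbmm', N_FEATURES, N_CONTEXT,
           'models/alphabet.txt', BEAM_WIDTH)

# Optional: enable the language model (the last two CLI arguments above).
ds.enableDecoderWithLM('models/alphabet.txt', 'models/lm.binary', 'models/trie',
                       LM_WEIGHT, VALID_WORD_COUNT_WEIGHT)

# Read a 16-bit, 16 kHz, mono WAVE file into a 16-bit integer buffer.
with wave.open('my_audio_file.wav', 'rb') as fin:
    sample_rate = fin.getframerate()
    audio = np.frombuffer(fin.readframes(fin.getnframes()), dtype=np.int16)

print(ds.stt(audio, sample_rate))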

Using the command-line client

To download the pre-built binaries, use util/taskcluster.py:

python3 util/taskcluster.py --target .

or if you're on macOS:

python3 util/taskcluster.py --arch osx --target .

Also, if you need binaries different from the current master, such as v0.2.0-alpha.6, you can use --branch:

python3 util/taskcluster.py --branch "v0.2.0-alpha.6" --target .

This will download native_client.tar.xz, which includes the deepspeech binary and associated libraries, and extract it into the current folder. taskcluster.py downloads binaries for Linux/x86_64 by default, but you can override that behavior with the --arch parameter. See the help info with python3 util/taskcluster.py -h for more details. The appropriate DeepSpeech or TensorFlow branch can be specified as well.

Note: the following command assumes you downloaded the pre-trained model.

./deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio audio_input.wav

See the help output with ./deepspeech -h and the native client README for more details.

Using the Node.JS package

You can download the Node.JS bindings using npm:

npm install deepspeech

Alternatively, if you're using Linux and have a supported NVIDIA GPU (see the release notes to find out which GPUs are supported), you can install the GPU-specific package as follows:

npm install deepspeech-gpu

See client.js for an example of how to use the bindings.

Installing bindings from source

If pre-built binaries aren't available for your system, you'll need to install them from scratch. Follow these instructions.

Third party bindings

In addition to the bindings above, third party developers have started to provide bindings to other languages:

Training

Installing prerequisites for training

Install the required dependencies using pip:

cd DeepSpeech
pip3 install -r requirements.txt

You'll also need to download native_client.tar.xz or build the native client files yourself to get the custom TensorFlow OP needed for decoding the outputs of the neural network. You can use util/taskcluster.py to download the files for your architecture:

python3 util/taskcluster.py --target .

This will download the native client files for the x86_64 architecture without CUDA support and extract them into the current folder. If you prefer building the binaries from source, see the native_client README file. We also have binaries with CUDA enabled (--arch gpu) and for ARMv7 (--arch arm).

Recommendations

If you have a capable GPU (NVIDIA, at least 8 GB of VRAM), it is highly recommended to install TensorFlow with GPU support. Training will likely be significantly quicker than using the CPU. To enable GPU support, you can do:

pip3 uninstall tensorflow
pip3 install 'tensorflow-gpu==1.11.0'

Common Voice training data

The Common Voice corpus consists of voice samples that were donated through Common Voice. We provide an importer that automates the whole process of downloading and preparing the corpus. You simply specify a target directory where all Common Voice contents should go. If you already downloaded the Common Voice corpus archive from here, you can simply run the import script on the directory where the corpus is located. The importer will then skip downloading it and immediately proceed to unpacking and importing. To start the import process, you can call:

bin/import_cv.py path/to/target/directory

Please be aware that this requires at least 70 GB of free disk space and quite some time to complete. As this process creates a huge number of small files, using an SSD is highly recommended. If the import script gets interrupted, it will try to continue from where it stopped the next time you run it. Unfortunately, there are some cases where it will need to start over. Once the import is done, the directory will contain a number of CSV files.

The following files are official user-validated sets for training, validating and testing:

  • cv-valid-train.csv
  • cv-valid-dev.csv
  • cv-valid-test.csv

The following files are the non-validated unofficial sets for training, validating and testing:

  • cv-other-train.csv
  • cv-other-dev.csv
  • cv-other-test.csv

cv-invalid.csv contains all samples that users flagged as invalid.

A sub-directory called cv_corpus_{version} contains the mp3 and wav files that were extracted from an archive named cv_corpus_{version}.tar.gz. All entries in the CSV files refer to their samples by absolute path, so moving this sub-directory requires either another import or tweaking the CSV files accordingly, as in the sketch below.
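
As an illustration of such tweaking, the sketch below rewrites the path prefix in one of the imported CSV files. It assumes the usual DeepSpeech CSV layout with a wav_filename column; the column name and both path prefixes are placeholders to adapt to your setup:

import csv

# Illustrative only: rewrite the absolute sample paths in an imported CSV
# after moving the cv_corpus_{version} sub-directory.
OLD_PREFIX = '/old/location/cv_corpus_v1/'
NEW_PREFIX = '/new/location/cv_corpus_v1/'

with open('cv-valid-train.csv', newline='') as fin, \
     open('cv-valid-train.fixed.csv', 'w', newline='') as fout:
    reader = csv.DictReader(fin)
    writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row['wav_filename'] = row['wav_filename'].replace(OLD_PREFIX, NEW_PREFIX, 1)
        writer.writerow(row)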

To use Common Voice data for training, validation and testing, you pass (comma-separated combinations of) their filenames into the --train_files, --dev_files and --test_files parameters of DeepSpeech.py. If, for example, Common Voice was imported into ../data/CV, DeepSpeech.py could be called like this:

./DeepSpeech.py --train_files ../data/CV/cv-valid-train.csv --dev_files ../data/CV/cv-valid-dev.csv --test_files ../data/CV/cv-valid-test.csv

If you are brave enough, you can also include the other dataset, which contains not-yet-validated content, and thus can be broken from time to time:

./DeepSpeech.py --train_files ../data/CV/cv-valid-train.csv,../data/CV/cv-other-train.csv --dev_files ../data/CV/cv-valid-dev.csv --test_files ../data/CV/cv-valid-test.csv

Training a model

The central (Python) script is DeepSpeech.py in the project's root directory. For its list of command line options, you can call:

./DeepSpeech.py --help

To get the output of this in a slightly better-formatted way, you can also look up the option definitions near the top of DeepSpeech.py.

For executing pre-configured training scenarios, there is a collection of convenience scripts in the bin folder. Most of them are named after the corpora they are configured for. Keep in mind that the other speech corpora are very large, on the order of tens of gigabytes, and some aren't free. Downloading and preprocessing them can take a very long time, and training on them without a fast GPU (GTX 10 series recommended) takes even longer.

If you experience GPU OOM errors while training, try reducing the batch size with the --train_batch_size, --dev_batch_size and --test_batch_size parameters.

As a simple first example you can open a terminal, change to the directory of the DeepSpeech checkout and run:

./bin/run-ldc93s1.sh

This script will train on a small sample dataset called LDC93S1, which can be overfitted on a GPU in a few minutes for demonstration purposes. From here, you can alter any variables with regard to what dataset is used, how many training iterations are run, and the default values of the network parameters. Also feel free to pass additional (or overriding) DeepSpeech.py parameters to these scripts. Then, just run the script to train the modified network.

Each dataset has a corresponding importer script in bin/ that can be used to download (if it's freely available) and preprocess the dataset. See bin/import_librivox.py for an example of how to import and preprocess a large dataset for training with Deep Speech.

If you've run the old importers (in util/importers/), they could have removed source files that are needed for the new importers to run. In that case, simply remove the extracted folders and let the importer extract and process the dataset from scratch, and things should work.

Checkpointing

During training of a model, so-called checkpoints will get stored on disk. This takes place at a configurable time interval. The purpose of checkpoints is to allow interruption (also in the case of some unexpected failure) and later continuation of training without losing hours of training time. Resuming from checkpoints happens automatically by simply (re)starting training with the same --checkpoint_dir as the former run.

Be aware, however, that checkpoints are only valid for the same model geometry from which they were generated. In other words: if there are error messages about certain Tensors having incompatible dimensions, this is most likely due to an incompatible model change. The usual way out is to wipe all checkpoint files in the checkpoint directory, or to change the directory, before starting the training.

Exporting a model for inference

If the --export_dir parameter is provided, a model will be exported to this directory during training. Refer to the corresponding README.md for information on building and running a client that can use the exported model.

Making a mmap-able model for inference

The output_graph.pb model file generated in the above step will be loaded into memory when running inference. This results in extra loading time and memory consumption. One way to avoid this is to read the data directly from disk.

TensorFlow has tooling to achieve this: it requires building the target //tensorflow/contrib/util:convert_graphdef_memmapped_format (binaries are produced by our TaskCluster for some systems, including Linux/amd64 and macOS/amd64); use the util/taskcluster.py tool to download it, specifying tensorflow as the source. Producing a mmap-able model is then as simple as:

$ convert_graphdef_memmapped_format --in_graph=output_graph.pb --out_graph=output_graph.pbmm

Upon a successful run, it should report the conversion of a non-zero number of nodes. If it reports converting 0 nodes, something is wrong: make sure your model is a frozen one and that you have not applied any incompatible changes (this includes quantize_weights).

Distributed training across more than one machine

DeepSpeech has built-in support for distributed TensorFlow. To get an idea of how this works, you can use the script bin/run-cluster.sh to run a cluster with workers just on the local machine.

$ bin/run-cluster.sh --help
Usage: run-cluster.sh [--help] [--script script] [p:w:g] <arg>*

--help      print this help message
--script    run the provided script instead of DeepSpeech.py
p           number of local parameter servers
w           number of local workers
g           number of local GPUs per worker
<arg>*      remaining parameters will be forwarded to DeepSpeech.py or a provided script

Example usage - The following example will create a local DeepSpeech.py cluster
with 1 parameter server, and 2 workers with 1 GPU each:
$ run-cluster.sh 1:2:1 --epoch 10

Be aware that for the help example to run, you need at least two CUDA-capable GPUs (2 workers times 1 GPU). The script utilizes the environment variable CUDA_VISIBLE_DEVICES so that DeepSpeech.py sees only the provided number of GPUs per worker. The script is meant to be a template for your own distributed computing instrumentation. Just modify the startup code for the different servers (workers and parameter servers) accordingly. You could use SSH or something similar for running them on your remote hosts.

Continuing training from a frozen graph

If you'd like to use one of the pre-trained models released by Mozilla to bootstrap your training process (transfer learning, fine tuning), you can do so by using the --initialize_from_frozen_model flag in DeepSpeech.py. For best results, make sure you're passing an empty --checkpoint_dir when resuming from a frozen model.

For example, if you want to fine-tune the entire graph using your own data in my-train.csv, my-dev.csv and my-test.csv, for three epochs, you can run something like the following, tuning the hyperparameters as needed:

mkdir fine_tuning_checkpoints
python3 DeepSpeech.py --n_hidden 2048 --initialize_from_frozen_model path/to/model/output_graph.pb --checkpoint_dir fine_tuning_checkpoints --epoch 3 --train_files my-train.csv --dev_files my-dev.csv --test_files my-test.csv --learning_rate 0.0001

Note: the released models were trained with --n_hidden 2048, so you need to use that same value when initializing from the release models.

Code documentation

Documentation (incomplete) for the code can be found here: http://deepspeech.readthedocs.io/en/latest/

Contact/Getting Help

There are several ways to contact us or to get help:

  1. FAQ - We have a list of common questions, and their answers, in our FAQ. When just getting started, it's best to first check the FAQ to see if your question is addressed.

  2. Discourse Forums - If your question is not addressed in the FAQ, the Discourse Forums are the next place to look. They contain conversations on General Topics, Using Deep Speech, and Deep Speech Development.

  3. IRC - If your question is not addressed by either the FAQ or the Discourse Forums, you can contact us on the #machinelearning channel on Mozilla IRC; people there can try to answer your question or help you.

  4. Issues - Finally, if all else fails, you can open an issue in our repo.


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


  • deepspeech-0.3.0-cp37-cp37m-manylinux1_x86_64.whl (9.2 MB), CPython 3.7m
  • deepspeech-0.3.0-cp37-cp37m-macosx_10_10_x86_64.whl (11.7 MB), CPython 3.7m, macOS 10.10+ x86-64
  • deepspeech-0.3.0-cp36-cp36m-manylinux1_x86_64.whl (9.2 MB), CPython 3.6m
  • deepspeech-0.3.0-cp36-cp36m-macosx_10_10_x86_64.whl (11.7 MB), CPython 3.6m, macOS 10.10+ x86-64
  • deepspeech-0.3.0-cp35-cp35m-manylinux1_x86_64.whl (9.2 MB), CPython 3.5m
  • deepspeech-0.3.0-cp35-cp35m-macosx_10_10_x86_64.whl (11.7 MB), CPython 3.5m, macOS 10.10+ x86-64
  • deepspeech-0.3.0-cp35-cp35m-linux_armv7l.whl (9.9 MB), CPython 3.5m
  • deepspeech-0.3.0-cp34-cp34m-manylinux1_x86_64.whl (9.2 MB), CPython 3.4m
  • deepspeech-0.3.0-cp34-cp34m-macosx_10_10_x86_64.whl (11.7 MB), CPython 3.4m, macOS 10.10+ x86-64
  • deepspeech-0.3.0-cp34-cp34m-linux_armv7l.whl (9.9 MB), CPython 3.4m
  • deepspeech-0.3.0-cp27-cp27mu-manylinux1_x86_64.whl (9.2 MB), CPython 2.7mu
  • deepspeech-0.3.0-cp27-cp27mu-macosx_10_10_x86_64.whl (11.7 MB), CPython 2.7mu, macOS 10.10+ x86-64
  • deepspeech-0.3.0-cp27-cp27m-manylinux1_x86_64.whl (9.2 MB), CPython 2.7m
  • deepspeech-0.3.0-cp27-cp27m-macosx_10_10_x86_64.whl (11.7 MB), CPython 2.7m, macOS 10.10+ x86-64

File details

Details for the file deepspeech-0.3.0-cp37-cp37m-manylinux1_x86_64.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp37-cp37m-manylinux1_x86_64.whl
  • Upload date:
  • Size: 9.2 MB
  • Tags: CPython 3.7m
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp37-cp37m-manylinux1_x86_64.whl
Algorithm Hash digest
SHA256 bed389311ae5fa1d546f9b419f8c93ad7fe022533b28a275c03fc0df3758c94b
MD5 75967b8cc8e53fd68857a9ceb3d3ff66
BLAKE2b-256 2188c7e42dbf7496211494f954a8b0dbaf10f375c083192051b1986064ef2571


File details

Details for the file deepspeech-0.3.0-cp37-cp37m-macosx_10_10_x86_64.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp37-cp37m-macosx_10_10_x86_64.whl
  • Upload date:
  • Size: 11.7 MB
  • Tags: CPython 3.7m, macOS 10.10+ x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp37-cp37m-macosx_10_10_x86_64.whl
Algorithm Hash digest
SHA256 fe1eded751a3eeb96b8da1c20b897891b154c23bb1f2adaa5b4730dd787d94c9
MD5 eeaf6a1c664ca9f446a6a7369448ec84
BLAKE2b-256 98c682885ff2e0ddeefdd1cc2c929f3165c91d1d59cf84861d8ad98a1c640371


File details

Details for the file deepspeech-0.3.0-cp36-cp36m-manylinux1_x86_64.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp36-cp36m-manylinux1_x86_64.whl
  • Upload date:
  • Size: 9.2 MB
  • Tags: CPython 3.6m
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp36-cp36m-manylinux1_x86_64.whl
Algorithm Hash digest
SHA256 d5e53954b063f0a1aeb7e0d5e3e98c27f2d0984617b378d846b48076f3b0bb82
MD5 9c80d3837a04ccdc663bcc21a2f43955
BLAKE2b-256 75440a3d8ff63486798324445a1f70d016aa19e981d3af85e079242f83e87ef5


File details

Details for the file deepspeech-0.3.0-cp36-cp36m-macosx_10_10_x86_64.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp36-cp36m-macosx_10_10_x86_64.whl
  • Upload date:
  • Size: 11.7 MB
  • Tags: CPython 3.6m, macOS 10.10+ x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp36-cp36m-macosx_10_10_x86_64.whl
Algorithm Hash digest
SHA256 c7dcc7868841d9c4a78a362d3ab44672150cfce92008deb90275383e9bb6c711
MD5 642a3b82a372587a0ad94e7023948477
BLAKE2b-256 8c378511bc26315cc117b129c2c492a1bdbc2bae4d8b8f6a92ecc2fc36a4f4e0


File details

Details for the file deepspeech-0.3.0-cp35-cp35m-manylinux1_x86_64.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp35-cp35m-manylinux1_x86_64.whl
  • Upload date:
  • Size: 9.2 MB
  • Tags: CPython 3.5m
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp35-cp35m-manylinux1_x86_64.whl
Algorithm Hash digest
SHA256 77215226ce4c0f0dacd7c17a7cf0253d1b032972fd9f207b8e810e94047370ff
MD5 d55ba3059d7570fe7635ab1310c37514
BLAKE2b-256 d62d8fd6651317d70a2ef2178df822b94f7fad731e3dfe80442b5bb4176b77ca


File details

Details for the file deepspeech-0.3.0-cp35-cp35m-macosx_10_10_x86_64.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp35-cp35m-macosx_10_10_x86_64.whl
  • Upload date:
  • Size: 11.7 MB
  • Tags: CPython 3.5m, macOS 10.10+ x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp35-cp35m-macosx_10_10_x86_64.whl
Algorithm Hash digest
SHA256 ce3df9de570b6406bae38e27eec2a9a5f25826258b54e971dabe6479d94146cb
MD5 1bdd962baf89199cc4f1f75f3b55d716
BLAKE2b-256 01ae77433277a0208a2743b9a75dd46a5773fbcbf50bd93761e3188deb15c619


File details

Details for the file deepspeech-0.3.0-cp35-cp35m-linux_armv7l.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp35-cp35m-linux_armv7l.whl
  • Upload date:
  • Size: 9.9 MB
  • Tags: CPython 3.5m
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp35-cp35m-linux_armv7l.whl
Algorithm Hash digest
SHA256 ce8f90057b40ea1e8f898d4bf237a568a61bb23d97dbbdd50c764a942d39221e
MD5 ebd741f174b38a6e4ca4c3c69b5b5b48
BLAKE2b-256 cb7076a465dea111dfbf01c50357ed17eccb124ae57c4ef3ebca4976a381ac6c


File details

Details for the file deepspeech-0.3.0-cp34-cp34m-manylinux1_x86_64.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp34-cp34m-manylinux1_x86_64.whl
  • Upload date:
  • Size: 9.2 MB
  • Tags: CPython 3.4m
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp34-cp34m-manylinux1_x86_64.whl
Algorithm Hash digest
SHA256 ada3542ac474306364b219e2613864afe9917ebc7482b28329b4bddb595507df
MD5 73e2485e56af199d086c0df0fba4a681
BLAKE2b-256 439ca5322d4570c9fa42bd407d798c88584ad75c15f4eef6ca293749c541af80


File details

Details for the file deepspeech-0.3.0-cp34-cp34m-macosx_10_10_x86_64.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp34-cp34m-macosx_10_10_x86_64.whl
  • Upload date:
  • Size: 11.7 MB
  • Tags: CPython 3.4m, macOS 10.10+ x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp34-cp34m-macosx_10_10_x86_64.whl
Algorithm Hash digest
SHA256 4beb225f0fbe896cebed9af2a810065e661f913b95f6d8ad9554e1d72f12263a
MD5 70616c7dcae59f073d2cc17094102bfd
BLAKE2b-256 18da567518681d8c1fed4e7b4312cd7411c12f70a38f74be3e1cff7648de4cbf


File details

Details for the file deepspeech-0.3.0-cp34-cp34m-linux_armv7l.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp34-cp34m-linux_armv7l.whl
  • Upload date:
  • Size: 9.9 MB
  • Tags: CPython 3.4m
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp34-cp34m-linux_armv7l.whl
Algorithm Hash digest
SHA256 d26e4fcb9f3cdfbb6590a493a7124de2fac3b33eeb2b03202ba5d834662b268b
MD5 1e045fbcafcc4038666132d8b082685e
BLAKE2b-256 df0171d984864c04e6824a2ff06397c3f531e3a7cba3b751977ea221ec837f35


File details

Details for the file deepspeech-0.3.0-cp27-cp27mu-manylinux1_x86_64.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp27-cp27mu-manylinux1_x86_64.whl
  • Upload date:
  • Size: 9.2 MB
  • Tags: CPython 2.7mu
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp27-cp27mu-manylinux1_x86_64.whl
Algorithm Hash digest
SHA256 285f3a03bf69029d0ec34303976c7695f8bae01b4dd0415e6eaf77f79eb14e18
MD5 577918768fc8a4c0a1c90e5880525c3c
BLAKE2b-256 b47eadeed4797ca6909950bd5cddd7df721632f611397ccc3cff87e176d9c4f6


File details

Details for the file deepspeech-0.3.0-cp27-cp27mu-macosx_10_10_x86_64.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp27-cp27mu-macosx_10_10_x86_64.whl
  • Upload date:
  • Size: 11.7 MB
  • Tags: CPython 2.7mu, macOS 10.10+ x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp27-cp27mu-macosx_10_10_x86_64.whl
Algorithm Hash digest
SHA256 0d8f9b0ea896304004e095b9bfeabcd098a42ccf8c4875dc493c59cf5be1f93f
MD5 4a4be5d6cefbc6b077162dbee57906f6
BLAKE2b-256 1d8b1d8f4e6739cecdfb11cf82d5e017c3c81a970bf87ab2ecfb1d2a7e6b8258


File details

Details for the file deepspeech-0.3.0-cp27-cp27m-manylinux1_x86_64.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp27-cp27m-manylinux1_x86_64.whl
  • Upload date:
  • Size: 9.2 MB
  • Tags: CPython 2.7m
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp27-cp27m-manylinux1_x86_64.whl
Algorithm Hash digest
SHA256 2b42b6f1243726bb60d028297d201fe7114634abcbdfd4e6f40abf43ce246d9e
MD5 5861ec8a544f6e797d203604c9e1a2c9
BLAKE2b-256 d81ce99a2cafb01002f9b8b1a303ea801cf83e923ee0ef199728334bd77b6432


File details

Details for the file deepspeech-0.3.0-cp27-cp27m-macosx_10_10_x86_64.whl.

File metadata

  • Download URL: deepspeech-0.3.0-cp27-cp27m-macosx_10_10_x86_64.whl
  • Upload date:
  • Size: 11.7 MB
  • Tags: CPython 2.7m, macOS 10.10+ x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/39.0.1 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.2

File hashes

Hashes for deepspeech-0.3.0-cp27-cp27m-macosx_10_10_x86_64.whl
Algorithm Hash digest
SHA256 be0ec8f09a85cccb1e42ae0b923972799f11baefddec05997281202f1079fd62
MD5 f4277c33000b27ef66cdd260bcc1f472
BLAKE2b-256 3122ff1a0c3183c63870e1c5554f6e31a8bdb4f37e63c58a39b984eb12f026b7

