
FSA/FST algorithms, intended to (eventually) be interoperable with PyTorch and similar

Project description


k2

The vision of k2 is to be able to seamlessly integrate Finite State Automaton (FSA) and Finite State Transducer (FST) algorithms into autograd-based machine learning toolkits like PyTorch and TensorFlow. For speech recognition applications, this should make it easy to interpolate and combine various training objectives such as cross-entropy, CTC and MMI, and to jointly optimize a speech recognition system with multiple decoding passes, including lattice rescoring and confidence estimation. We hope k2 will have many other applications as well.

One of the key algorithms that we want to make efficient in the short term is pruned composition of a generic FSA with a "dense" FSA (i.e. an FSA that corresponds to the log-probs of symbols at the output of a neural network). This can be used as a fast implementation of decoding for ASR, and for CTC and LF-MMI training. It won't give a direct advantage in Word Error Rate over existing technology; the point is to do this in a much more general and extensible framework, to allow further development of ASR technology.
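To make the "dense" FSA idea concrete, here is a minimal sketch of the data it holds (the struct name and layout are illustrative, not k2's actual interface): conceptually, frame t has an arc to frame t+1 for every symbol, weighted by that symbol's log-prob.

```cpp
#include <cstdint>

// Illustrative sketch of a "dense" FSA: the neural network emits a T x S
// matrix of log-probs (T frames, S symbols), and conceptually state t has
// an arc to state t + 1 for every symbol s, with weight scores[t * S + s].
// Pruned composition with a generic FSA walks this trellis while discarding
// low-scoring partial paths.
struct DenseFsaSketch {
  int32_t num_frames;   // T: number of acoustic frames
  int32_t num_symbols;  // S: output vocabulary size (incl. blank/epsilon)
  const float *scores;  // row-major T x S matrix of log-probs
};
```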

Implementation

A few key points about our implementation strategy:

Most of the code is in C++ and CUDA. We implement a templated class Ragged, which is quite like TensorFlow's RaggedTensor (actually we came up with the design independently, and were later told that TensorFlow was using the same ideas). Despite a close similarity at the level of data structures, the design is quite different from that of TensorFlow and PyTorch. Most of the time we don't use composition of simple operations, but rely on C++11 lambdas defined directly in the C++ implementations of algorithms. The code in these lambdas operates directly on data pointers and, if the backend is CUDA, can run in parallel for each element of a tensor. (The C++ and CUDA code is mixed together, and the CUDA kernels get instantiated via templates.)
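As a rough illustration of this lambda style (a hypothetical sketch; k2's real utilities differ in detail), the same lambda body can be driven by a plain CPU loop or launched as a CUDA kernel:

```cpp
#include <cstdint>

// Hypothetical sketch of the coding pattern described above.  A generic
// Eval() utility applies a lambda to each index; on the CPU backend this
// is a for-loop, while on the CUDA backend the same lambda body would be
// instantiated as a kernel, one thread per element.
template <typename LambdaT>
void Eval(int32_t n, LambdaT lambda) {
  for (int32_t i = 0; i < n; ++i) lambda(i);  // CPU branch shown here
}

// Example: scale the flat values array of a ragged tensor in place.  The
// lambda captures raw data pointers, so it is trivially parallel.
void ScaleValues(float *values, int32_t num_values, float alpha) {
  Eval(num_values, [=](int32_t i) { values[i] *= alpha; });
}
```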

It is difficult to adequately describe what we are doing with these Ragged objects without going through the code in detail. The algorithms look very different from the way you would code them on CPU because of the need to avoid sequential processing. We are using coding patterns that make the most expensive parts of the computations "embarrassingly parallelizable"; the only somewhat nontrivial CUDA operations are generally reduction-type operations such as exclusive prefix sum, for which we use NVidia's cub library. Our design is not too specific to NVidia hardware, and the bulk of the code we write is fairly normal-looking C++; the nontrivial CUDA programming is mostly done via the cub library, parts of which we wrap with our own convenient interface.
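For example, computing a ragged tensor's row offsets from per-row sizes is exactly an exclusive prefix sum. A minimal CPU sketch is below (the function name is illustrative); on the GPU the analogous reduction would be done with cub::DeviceScan::ExclusiveSum.

```cpp
#include <cstdint>
#include <numeric>  // std::exclusive_scan (C++17)
#include <vector>

// Sketch: turn per-row sizes into "row_splits" offsets, e.g.
//   sizes = {2, 0, 3}  ->  row_splits = {0, 2, 2, 5}.
// row_splits[i] is where row i starts; row_splits.back() is the total
// number of elements.
std::vector<int32_t> RowSizesToRowSplits(const std::vector<int32_t> &sizes) {
  std::vector<int32_t> row_splits(sizes.size() + 1, 0);
  if (sizes.empty()) return row_splits;
  std::exclusive_scan(sizes.begin(), sizes.end(), row_splits.begin(), 0);
  row_splits.back() = row_splits[sizes.size() - 1] + sizes.back();
  return row_splits;
}
```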

The Finite State Automaton object is then implemented as a Ragged tensor templated on a specific data type (a struct representing an arc in the automaton).
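For illustration, such an arc struct might look roughly as follows (the field names are a sketch; see k2/csrc/ for the real definition):

```cpp
#include <cstdint>

// Sketch of an arc struct for a weighted FSA.
struct Arc {
  int32_t src_state;   // state the arc leaves
  int32_t dest_state;  // state the arc enters
  int32_t label;       // symbol on the arc
  float score;         // weight (log-probability)
};

// An FSA can then be a Ragged tensor of Arc: one ragged row per state, so
// the row_splits array gives each state's range of outgoing arcs.
```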

Autograd

If you look at the code as it exists now, you won't find any references to autograd. The design is quite different from that of TensorFlow and PyTorch (which is why we didn't simply extend one of those toolkits). Instead of making autograd come from the bottom up (by making individual operations differentiable), we are implementing it from the top down, which is much more efficient in this case (and will tend to have better roundoff properties).

An example: suppose we are finding the best path of an FSA, and we need derivatives. We implement this by keeping track of, for each arc in the output best-path, which input arc it corresponds to. (For more complex algorithms an arc in the output might correspond to a sum of probabilities of a list of input arcs). We can make this compatible with PyTorch/TensorFlow autograd at the Python level, by, for example, defining a Function class in PyTorch that remembers this relationship between the arcs and does the appropriate (sparse) operations to propagate back the derivatives w.r.t. the weights.
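A minimal sketch of the backward pass this implies (illustrative, not k2's actual interface): the forward pass records, for each output arc i, the index arc_map[i] of the input arc it came from, and the backward pass is then a sparse scatter-add of gradients.

```cpp
#include <cstdint>
#include <vector>

// Sketch: propagate gradients w.r.t. output-arc scores back to input arcs.
// arc_map[i] is the input-arc index that output arc i was copied from;
// grad_out has one entry per output arc.
std::vector<float> BackpropBestPath(const std::vector<int32_t> &arc_map,
                                    const std::vector<float> &grad_out,
                                    int32_t num_input_arcs) {
  std::vector<float> grad_in(num_input_arcs, 0.0f);
  for (size_t i = 0; i < arc_map.size(); ++i)
    grad_in[arc_map[i]] += grad_out[i];  // each output arc credits one input arc
  return grad_in;
}
```

In a PyTorch autograd.Function, forward() would save arc_map and backward() would perform exactly this kind of sparse scatter.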

Current state of the code

A lot of the code is still unfinished (Sep 11, 2020). We finished the CPU versions of many algorithms, and that code is in k2/csrc/host/; however, after that we figured out how to implement things on the GPU and decided to change the interfaces so that the CPU and GPU code would have a more unified interface. Currently k2/csrc/ contains the more GPU-oriented implementations (although these algorithms will also work on CPU). We had almost finished the Python wrapping for the older code, in the k2/python/ subdirectory, but decided not to release it because it would have had to be reworked to be compatible with the GPU algorithms. Instead we will wrap in Python (using pybind11) the interfaces drafted in k2/csrc/, e.g. the Context object (which encapsulates things like memory managers from external toolkits) and the Tensor object (which can be used to wrap tensors from external toolkits). The code in host/ will eventually be deprecated, rewritten, or wrapped with newer-style interfaces.

Plans for initial release

We hope to get the first version working in early October. The current short-term aim is to finish the GPU implementation of pruned composition of a normal FSA with a dense FSA, which is the same operation as decoder search in speech recognition and can be used to implement CTC training and lattice-free MMI (LF-MMI) training. The proof-of-concept that we will release initially is something like CTC, but allowing more general supervisions (general FSAs rather than linear sequences). This will work on GPU. The same underlying code will support LF-MMI, so that would be easy to implement soon after. We plan to put example code in a separate repository.

Plans after initial release

We will then gradually implement more algorithms in a way that's compatible with the interfaces in k2/csrc/. Some of them will be CPU-only to start with. The idea is to eventually have very rich capabilities for operating on collections of sequences, including methods to convert from a lattice to a collection of linear sequences and back again (for purposes of neural language model rescoring, neural confidence estimation and the like).

Quick start

Want to try it out without installing anything? We have set up a Google Colab notebook.

Caution: k2 is not nearly ready for actual use! We are still coding the core algorithms, and hope to have an early version working by early October.


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions

k2-0.3.3.dev20210426-py38-none-any.whl (54.4 MB)

Uploaded Python 3.8

k2-0.3.3.dev20210426-py37-none-any.whl (54.4 MB)

Uploaded Python 3.7

k2-0.3.3.dev20210426-py36-none-any.whl (54.4 MB)

Uploaded Python 3.6

File details

Details for the file k2-0.3.3.dev20210426-py38-none-any.whl.

File metadata

  • Download URL: k2-0.3.3.dev20210426-py38-none-any.whl
  • Size: 54.4 MB
  • Tags: Python 3.8
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.0.1 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.60.0 CPython/3.8.9

File hashes

Hashes for k2-0.3.3.dev20210426-py38-none-any.whl:

  • SHA256: cf655a90d9e35e69d7d6b490a0a7f48c590057bd8e282805a8c4de0d6b42ed82
  • MD5: c55c2f80bf39be667385079fe7c67421
  • BLAKE2b-256: ef60c4a824c22dee5d13bcfd462fbf6149c81dbca4053a2ee7c22efa1985d3e4


File details

Details for the file k2-0.3.3.dev20210426-py37-none-any.whl.

File metadata

  • Download URL: k2-0.3.3.dev20210426-py37-none-any.whl
  • Size: 54.4 MB
  • Tags: Python 3.7
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.0.1 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.60.0 CPython/3.7.10

File hashes

Hashes for k2-0.3.3.dev20210426-py37-none-any.whl:

  • SHA256: cf0bb5e0a4a1c6ee0c5e4be645c1cf18a75816ea20f369925fa91b3ad9643c10
  • MD5: cf97670184c2b4ce9b9025c90c791503
  • BLAKE2b-256: 9a3151358cc213afdd8273eeae3a1d03aabce82ddde444059bcb7ac28e07514c


File details

Details for the file k2-0.3.3.dev20210426-py36-none-any.whl.

File metadata

  • Download URL: k2-0.3.3.dev20210426-py36-none-any.whl
  • Size: 54.4 MB
  • Tags: Python 3.6
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.0.1 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.60.0 CPython/3.6.13

File hashes

Hashes for k2-0.3.3.dev20210426-py36-none-any.whl:

  • SHA256: 9db82fab95844e31e3d2960a14ba276c7e1399799420d70e22fddf7d48abe5e2
  • MD5: 2b48eee7adcc2279eb2e492a6925cfb3
  • BLAKE2b-256: 4494b6ae307541470397ca4b14d393ab62c382c1f7a038bb45ee4129c7340592

