
FSA/FST algorithms, intended to (eventually) be interoperable with PyTorch and similar

Project description

k2

The vision of k2 is to be able to seamlessly integrate Finite State Automaton (FSA) and Finite State Transducer (FST) algorithms into autograd-based machine learning toolkits like PyTorch and TensorFlow. For speech recognition applications, this should make it easy to interpolate and combine various training objectives such as cross-entropy, CTC and MMI and to jointly optimize a speech recognition system with multiple decoding passes including lattice rescoring and confidence estimation. We hope k2 will have many other applications as well.

One of the key algorithms that we want to make efficient in the short term is pruned composition of a generic FSA with a "dense" FSA (i.e. one that corresponds to log-probs of symbols at the output of a neural network). This can be used as a fast implementation of decoding for ASR, and for CTC and LF-MMI training. This won't give a direct advantage in terms of Word Error Rate when compared with existing technology; but the point is to do this in a much more general and extensible framework to allow further development of ASR technology.
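To make "dense" concrete: the dense FSA's data is essentially a matrix of per-frame log-probs produced by the network. Below is a hedged sketch of that view with purely illustrative names (k2's actual representation also has to handle supervision boundaries and final symbols, which this omits).

    #include <cstdint>
    #include <vector>

    // Illustrative only: a "dense" FSA's scores viewed as a T x S row-major
    // matrix, where row t holds the network's log-probs for every symbol at
    // frame t. The names here are not k2's actual API.
    struct DenseScoresSketch {
      int32_t num_frames;         // T
      int32_t num_symbols;        // S
      std::vector<float> scores;  // size T * S, row-major

      float Score(int32_t t, int32_t symbol) const {
        return scores[t * num_symbols + symbol];
      }
    };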

Implementation

A few key points on our implementation strategy.

Most of the code is in C++ and CUDA. We implement a templated class Ragged, which is quite like TensorFlow's RaggedTensor (actually we came up with the design independently, and were later told that TensorFlow was using the same ideas). Despite a close similarity at the level of data structures, the design is quite different from TensorFlow and PyTorch. Most of the time we don't use composition of simple operations, but rely on C++11 lambdas defined directly in the C++ implementations of algorithms. The code in these lambdas operates directly on data pointers and, if the backend is CUDA, runs in parallel for each element of a tensor. (The C++ and CUDA code is mixed together, and the CUDA kernels get instantiated via templates.)
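As a rough illustration of that pattern (the names below are made up for this sketch, not k2's actual API), the CPU version of such a per-element lambda might look like the following; on the CUDA backend the same lambda body would be instantiated in a kernel via templates and run with one thread per element.

    #include <cstdint>

    // Illustrative sketch, not k2's real interface: apply a lambda to every
    // element index. On CPU this is a plain loop; on the CUDA backend the
    // same lambda would run in parallel, one thread per element.
    template <typename LambdaT>
    void EvalPerElement(int32_t num_elems, LambdaT lambda) {
      for (int32_t i = 0; i < num_elems; ++i) lambda(i);
    }

    void ScaleInPlace(float *data, int32_t num_elems, float alpha) {
      // The lambda operates directly on the raw data pointer.
      EvalPerElement(num_elems, [data, alpha](int32_t i) { data[i] *= alpha; });
    }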

It is difficult to adequately describe what we are doing with these Ragged objects without going through the code in detail. The algorithms look very different from the way you would code them on CPU because of the need to avoid sequential processing. We are using coding patterns that make the most expensive parts of the computations "embarrassingly parallelizable"; the only somewhat nontrivial CUDA operations are generally reduction-type operations such as exclusive prefix sum, for which we use NVIDIA's cub library. Our design is not too specific to NVIDIA hardware, and the bulk of the code we write is fairly normal-looking C++; the nontrivial CUDA programming is mostly done via the cub library, parts of which we wrap with our own convenient interface.
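For instance, an exclusive prefix sum is what turns per-row element counts into the "row_splits" array that describes a ragged tensor's row structure. A minimal CPU sketch follows (illustrative names only); on the GPU this is the kind of place where a cub scan such as cub::DeviceScan::ExclusiveSum would be used instead of the loop.

    #include <cstdint>
    #include <vector>

    // Illustrative sketch: turn per-row element counts into a "row_splits"
    // array via an exclusive prefix sum.
    std::vector<int32_t> RowSplitsFromSizes(const std::vector<int32_t> &row_sizes) {
      std::vector<int32_t> row_splits(row_sizes.size() + 1);
      int32_t sum = 0;
      for (int32_t i = 0; i < static_cast<int32_t>(row_sizes.size()); ++i) {
        row_splits[i] = sum;   // exclusive: total before this row
        sum += row_sizes[i];
      }
      row_splits.back() = sum; // total number of elements
      return row_splits;
    }
    // e.g. row_sizes = {2, 0, 3}  ->  row_splits = {0, 2, 2, 5}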

The Finite State Automaton object is then implemented as a Ragged tensor templated on a specific data type (a struct representing an arc in the automaton).
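Conceptually (the field names here are illustrative and the real layout may differ), the element type and the ragged structure look something like this: one row of arcs per state, delimited by a row_splits array.

    #include <cstdint>
    #include <vector>

    // Conceptual sketch only; k2's actual Arc layout may differ.
    struct Arc {
      int32_t src_state;
      int32_t dest_state;
      int32_t label;   // symbol on the arc; a special label marks final arcs
      float score;     // log-weight of the arc
    };

    // An FSA as a ragged array of arcs: row s holds the outgoing arcs of
    // state s, indexed by row_splits[s] .. row_splits[s + 1].
    struct FsaSketch {
      std::vector<int32_t> row_splits;
      std::vector<Arc> arcs;
    };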

Autograd

If you look at the code as it exists now, you won't find any references to autograd. The design is quite different from TensorFlow and PyTorch (which is why we didn't simply extend one of those toolkits). Instead of making autograd come from the bottom up (by making individual operations differentiable), we are implementing it from the top down, which is much more efficient in this case (and will tend to have better roundoff properties).

An example: suppose we are finding the best path of an FSA, and we need derivatives. We implement this by keeping track of, for each arc in the output best-path, which input arc it corresponds to. (For more complex algorithms, an arc in the output might correspond to a sum of probabilities over a list of input arcs.) We can make this compatible with PyTorch/TensorFlow autograd at the Python level by, for example, defining a Function class in PyTorch that remembers this relationship between the arcs and does the appropriate (sparse) operations to propagate back the derivatives w.r.t. the weights.
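A hedged sketch of the sparse backward step this describes (illustrative names; in the real system this would sit behind something like a PyTorch Function at the Python level, as mentioned above): each output arc records the index of the input arc it came from, and gradients w.r.t. the output arcs' scores are accumulated back onto the corresponding input arcs.

    #include <cstdint>
    #include <vector>

    // Illustrative sketch: arc_map[i] is the index of the input arc that
    // output arc i was copied from (as in best-path search). Gradients
    // w.r.t. the output arc scores are scatter-added onto the input arcs.
    void BackpropThroughArcMap(const std::vector<int32_t> &arc_map,
                               const std::vector<float> &output_score_grad,
                               std::vector<float> *input_score_grad) {
      for (int32_t i = 0; i < static_cast<int32_t>(arc_map.size()); ++i)
        (*input_score_grad)[arc_map[i]] += output_score_grad[i];
    }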

Current state of the code

A lot of the code is still unfinished (Sep 11, 2020). We finished the CPU versions of many algorithms and that code is in k2/csrc/host/; however, we then figured out how to implement things on the GPU and decided to change the interfaces so that the CPU and GPU code share a more unified interface. Currently k2/csrc/ contains the more GPU-oriented implementations (although these algorithms will also work on CPU). We had almost finished the Python wrapping for the older code, in the k2/python/ subdirectory, but we decided not to release code with that wrapping because it would have had to be reworked to be compatible with our GPU algorithms. Instead we will use the interfaces drafted in k2/csrc/, e.g. the Context object (which encapsulates things like memory managers from external toolkits) and the Tensor object (which can be used to wrap tensors from external toolkits), and wrap those in Python (using pybind11). The code in host/ will eventually be either deprecated, rewritten or wrapped with newer-style interfaces.

Plans for initial release

We hope to get the first version working in early October. The current short-term aim is to finish the GPU implementation of pruned composition of a normal FSA with a dense FSA, which is the same operation as decoder search in speech recognition and can be used to implement CTC training and lattice-free MMI (LF-MMI) training. The proof-of-concept that we will release initially is something like CTC but allowing more general supervisions (general FSAs rather than linear sequences). This will work on GPU. The same underlying code will support LF-MMI, so that will be easy to implement soon after. We plan to put example code in a separate repository.

Plans after initial release

We will then gradually implement more algorithms in a way that's compatible with the interfaces in k2/csrc/. Some of them will be CPU-only to start with. The idea is to eventually have very rich capabilities for operating on collections of sequences, including methods to convert from a lattice to a collection of linear sequences and back again (for purposes of neural language model rescoring, neural confidence estimation and the like).

Quick start

Want to try it out without installing anything? We have set up a Google Colab.

Caution: k2 is not nearly ready for actual use! We are still coding the core algorithms, and hope to have an early version working by early October.

Project details


Release history

This version

1.8

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distributions

k2-1.8-py38-none-any.whl (77.7 MB)

Uploaded Python 3.8

k2-1.8-py37-none-any.whl (77.7 MB)

Uploaded Python 3.7

k2-1.8-py36-none-any.whl (77.7 MB)

Uploaded Python 3.6

k2-1.8-cp38-cp38-macosx_10_15_x86_64.whl (1.7 MB)

Uploaded CPython 3.8 macOS 10.15+ x86-64

k2-1.8-cp37-cp37m-macosx_10_15_x86_64.whl (1.6 MB)

Uploaded CPython 3.7m macOS 10.15+ x86-64

k2-1.8-cp36-cp36m-macosx_10_15_x86_64.whl (1.6 MB)

Uploaded CPython 3.6m macOS 10.15+ x86-64

File details

Details for the file k2-1.8-py38-none-any.whl.

File metadata

  • Download URL: k2-1.8-py38-none-any.whl
  • Upload date:
  • Size: 77.7 MB
  • Tags: Python 3.8
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.8.11

File hashes

Hashes for k2-1.8-py38-none-any.whl
Algorithm Hash digest
SHA256 79c3960bb4668cd4b66641800dfba584cac40d4fe297b516a779b8a84aa3266a
MD5 c229d7a6f0a68f2dc1d4a824bdea7506
BLAKE2b-256 b88c7601d82e5bb9f2c8e69ef787d5ffcfa494c2d2e6bca0a3fae9bfd2c50d0e

See more details on using hashes here.

Provenance

File details

Details for the file k2-1.8-py37-none-any.whl.

File metadata

  • Download URL: k2-1.8-py37-none-any.whl
  • Upload date:
  • Size: 77.7 MB
  • Tags: Python 3.7
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.7.11

File hashes

Hashes for k2-1.8-py37-none-any.whl
Algorithm Hash digest
SHA256 60ef9de1d8c460cfb8ac2ab427c2da0225624f5d53ab5ec3583c81aeab6e43b9
MD5 6096f7d4cdf9d0b1edc6366895b23e83
BLAKE2b-256 fde0dac5907fb3e44302b3f9b22d87b805ee57ecb3b7dd30125d73158652472f

See more details on using hashes here.

Provenance

File details

Details for the file k2-1.8-py36-none-any.whl.

File metadata

  • Download URL: k2-1.8-py36-none-any.whl
  • Upload date:
  • Size: 77.7 MB
  • Tags: Python 3.6
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.6.14

File hashes

Hashes for k2-1.8-py36-none-any.whl
Algorithm Hash digest
SHA256 684ba7331cf39711f4c76050a9366b29d48ca4576c01df44b272c0bdeecd9d67
MD5 7115e383e2a1a143aa3761baf0b02fc9
BLAKE2b-256 85a949d1b19d965a32d5e506916daba48991c75fcbafd634b357124500992d8d

See more details on using hashes here.

Provenance

File details

Details for the file k2-1.8-cp38-cp38-macosx_10_15_x86_64.whl.

File metadata

  • Download URL: k2-1.8-cp38-cp38-macosx_10_15_x86_64.whl
  • Upload date:
  • Size: 1.7 MB
  • Tags: CPython 3.8, macOS 10.15+ x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.8.11

File hashes

Hashes for k2-1.8-cp38-cp38-macosx_10_15_x86_64.whl
Algorithm Hash digest
SHA256 6bee4ba2aa81fb2fce7026303c5aeedd8a906c34fea1d051437e6e88f9a0afd2
MD5 e65c930a8feee05193e5c9b19b839ec8
BLAKE2b-256 0b402a398eb3b3b6f70579d706243737f9700f7457c362ac5e168a0ec3b374f3

See more details on using hashes here.

Provenance

File details

Details for the file k2-1.8-cp37-cp37m-macosx_10_15_x86_64.whl.

File metadata

  • Download URL: k2-1.8-cp37-cp37m-macosx_10_15_x86_64.whl
  • Upload date:
  • Size: 1.6 MB
  • Tags: CPython 3.7m, macOS 10.15+ x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.7.11

File hashes

Hashes for k2-1.8-cp37-cp37m-macosx_10_15_x86_64.whl
Algorithm Hash digest
SHA256 a9b66575aeacdb082ab1c80f6f250efaeb826f23816bfff92060fb8403afe604
MD5 13c64522a40f9831fc77d1c8d8d0b996
BLAKE2b-256 57b9d6db592e87ab7bc373f4be612b47ca05ff121d7e35ebd100306a2c70a724

See more details on using hashes here.

Provenance

File details

Details for the file k2-1.8-cp36-cp36m-macosx_10_15_x86_64.whl.

File metadata

  • Download URL: k2-1.8-cp36-cp36m-macosx_10_15_x86_64.whl
  • Upload date:
  • Size: 1.6 MB
  • Tags: CPython 3.6m, macOS 10.15+ x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.6.14

File hashes

Hashes for k2-1.8-cp36-cp36m-macosx_10_15_x86_64.whl
Algorithm Hash digest
SHA256 592c3a377130d64b414bbffe3de17150e3a7be0eea0438cf4707615455f18f42
MD5 093d3117214be630f376a62f089dd48b
BLAKE2b-256 089cc76f7a6bdeacc230526af99456cf4af9a1dd20a0a29feae08b1fc9dadab9

See more details on using hashes here.

Provenance
