A benchmark of classical algorithm implementations, with JAX baselines, for evaluating algorithmic reasoning.

Project description

The CLRS Algorithmic Reasoning Benchmark

Learning representations of algorithms is an emerging area of machine learning, seeking to bridge concepts from neural networks with classical algorithms. The CLRS Algorithmic Reasoning Benchmark (CLRS) consolidates and extends previous work toward evaluating algorithmic reasoning by providing a suite of implementations of classical algorithms. These algorithms have been selected from the third edition of the standard textbook Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein.

Installation

The CLRS Algorithmic Reasoning Benchmark can be installed with pip directly from GitHub, with the following command:

pip install git+https://github.com/deepmind/clrs.git

Getting started

To set up a Python virtual environment with the required dependencies, run the following from a clone of the repository:

python3 -m venv clrs_env
source clrs_env/bin/activate
python setup.py install

and to run our example baseline model:

python -m clrs.examples.run

Algorithms as graphs

CLRS implements the selected algorithms in an idiomatic way, aligned as closely as possible to the original CLRS 3rd-edition pseudocode. By controlling the input data distribution to conform to each algorithm's preconditions, we are able to automatically generate input/output pairs. We additionally provide trajectories of "hints" that expose the internal state of each algorithm, both to optionally simplify the learning challenge and to distinguish between different algorithms that solve the same overall task (e.g. sorting).
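
To make the idea of hints concrete, here is a small illustrative sketch (not the library's actual hint specification) of insertion sort instrumented to record its internal state after every outer-loop step, yielding a trajectory of intermediate array states alongside the final output:

def insertion_sort_with_hints(values):
  # Toy example: "hints" as snapshots of the algorithm's internal state.
  arr = list(values)
  hints = [list(arr)]            # state before any work is done
  for i in range(1, len(arr)):
    key, j = arr[i], i - 1
    while j >= 0 and arr[j] > key:
      arr[j + 1] = arr[j]        # shift larger elements to the right
      j -= 1
    arr[j + 1] = key
    hints.append(list(arr))      # expose state after each outer-loop step
  return arr, hints

output, hints = insertion_sort_with_hints([5, 2, 4, 6, 1, 3])
# `output` gives the input/output pair's target; `hints` is the step-by-step trajectory.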

In the most generic sense, algorithms can be seen as manipulating sets of objects, along with any relations between them (which can themselves be decomposed into binary relations). Accordingly, we study all of the algorithms in this benchmark using a graph representation. Where objects obey a stricter, ordered structure (e.g. arrays or rooted trees), we impose this ordering through the inclusion of predecessor links.
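
For instance, an array can be cast as a graph whose nodes are the array indices, whose node features are the array values, and whose ordering is imposed by predecessor links from each index to the one before it. The sketch below is illustrative only and is not the benchmark's internal encoding:

def array_as_graph(values):
  # Nodes are indices 0..n-1; node features are the array values.
  nodes = list(range(len(values)))
  node_features = list(values)
  # Ordering is imposed via predecessor links i -> i - 1.
  pred_edges = [(i, i - 1) for i in nodes if i > 0]
  return nodes, node_features, pred_edges

nodes, feats, preds = array_as_graph([7, 3, 9])
# preds == [(1, 0), (2, 1)]: each element points back to its predecessor.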

How it works

For each algorithm, we provide a canonical set of train, eval and test trajectories for benchmarking out-of-distribution generalization.

        Trajectories   Problem Size
Train   1000           20
Eval    32             20
Test    32             100

where "problem size" refers to e.g. the length of an array or number of nodes in a graph, depending on the algorithm. These trajectories can be used like so:

train_ds, spec = clrs.clrs21_train("bfs")

for step in range(num_train_steps):
  feedback = train_ds.next(batch_size)
  model.train(feedback.features)

Here, feedback is a namedtuple with the following structure:

Feedback = collections.namedtuple('Feedback', ['features', 'outputs'])
Features = collections.namedtuple('Features', ['inputs', 'hints', 'lengths'])

where the content of Features can be used for training and outputs is reserved for evaluation. Each field of the tuple is an ndarray with a leading batch dimension. Because hints are provided for the full algorithm trajectory, these contain an additional time dimension padded up to the maximum length max(T) of any trajectory within the dataset. The lengths field specifies the true length t <= max(T) for each trajectory, which can be used e.g. for loss masking.
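
As an example of how lengths can be used for loss masking, the sketch below (a minimal sketch assuming a per-step hint loss of shape [batch, max_T]; it is not the baseline's actual loss code) builds a boolean time mask so that padded steps beyond each trajectory's true length do not contribute to the loss:

import jax.numpy as jnp

def masked_mean_loss(per_step_loss, lengths):
  # per_step_loss: [batch, max_T] loss at every (padded) time step.
  # lengths: [batch] true trajectory lengths t <= max_T.
  max_t = per_step_loss.shape[1]
  mask = jnp.arange(max_t)[None, :] < lengths[:, None]  # [batch, max_T]
  return jnp.sum(per_step_loss * mask) / jnp.maximum(jnp.sum(mask), 1)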

Please see the examples directory for full working Graph Neural Network (GNN) examples using JAX and the DeepMind JAX Ecosystem of libraries.

What we provide

Algorithms

Our initial CLRS-21 benchmark includes the following 21 algorithms. More algorithms will be supported in the near future.

  • Divide and conquer
    • Maximum subarray (Kadane)
  • Dynamic programming
    • Matrix chain order
    • Optimal binary search tree
  • Graphs
    • Depth-first search
    • Breadth-first search
    • Topological sort
    • Minimum spanning tree (Prim)
    • Single-source shortest-path (Bellman-Ford)
    • Single-source shortest-path (Dijkstra)
    • DAG shortest paths
    • All-pairs shortest-path (Floyd-Warshall)
  • Greedy
    • Task scheduling
  • Searching
    • Minimum
    • Binary search
    • Quickselect
  • Sorting
    • Insertion sort
    • Bubble sort
    • Heapsort
    • Quicksort
  • Strings
    • String matcher (naive)
    • String matcher (KMP)

Baselines

We additionally provide JAX implementations of the following GNN baselines:

  • Graph Attention Networks (Velickovic et al., ICLR 2018)
  • Message-Passing Neural Networks (Gilmer et al., ICML 2017)
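
To give a flavour of what such a baseline computes, the sketch below shows a single MPNN-style message-passing step over dense node features and an adjacency matrix. It is a minimal illustration with hypothetical parameter names, not the repository's baseline implementation:

import jax
import jax.numpy as jnp

def mpnn_step(params, node_feats, adj):
  # node_feats: [num_nodes, d] node embeddings; adj: [num_nodes, num_nodes] 0/1 matrix.
  messages = node_feats @ params['w_msg']    # transform each sender's features
  aggregated = adj @ messages                # sum messages over neighbours
  combined = jnp.concatenate([node_feats, aggregated], axis=-1)
  return jax.nn.relu(combined @ params['w_upd'])

# Hypothetical shapes: with d = 16, params = {'w_msg': jnp.zeros((16, 16)),
# 'w_upd': jnp.zeros((32, 16))}, node_feats [N, 16] and adj [N, N].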

Citation

To cite the CLRS Algorithmic Reasoning Benchmark:

@article{deepmind2021clrs,
  author = {Petar Veli\v{c}kovi\'{c} and Adri\`{a} Puigdom\`{e}nech Badia and
    David Budden and Razvan Pascanu and Andrea Banino and Misha Dashevskiy and
    Raia Hadsell and Charles Blundell},
  title = {The CLRS Algorithmic Reasoning Benchmark},
  year = {2021},
}

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

dm-clrs-0.0.1.tar.gz (47.3 kB, Source)

Built Distribution

dm_clrs-0.0.1-py3-none-any.whl (64.9 kB, Python 3)

File details

Details for the file dm-clrs-0.0.1.tar.gz.

File metadata

  • Download URL: dm-clrs-0.0.1.tar.gz
  • Upload date:
  • Size: 47.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.7.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.9.6

File hashes

Hashes for dm-clrs-0.0.1.tar.gz

Algorithm     Hash digest
SHA256        46fed13e6a2cfb6eae2aa9ec4323ad60ddd7769f671982b7e06c8ac7892698a0
MD5           4f5a55664404c31475f83ab62a67f91c
BLAKE2b-256   a7f3850b167e798eb9b0cc658d5fdce183c000ce6feff10e3f1a346aece273c1

See more details on using hashes here.
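
For example, a downloaded file can be checked against the SHA256 value listed above using only the standard library; a minimal sketch, assuming the archive is in the current directory:

import hashlib

def sha256_of(path, chunk_size=8192):
  # Stream the file so large downloads need not fit in memory.
  digest = hashlib.sha256()
  with open(path, 'rb') as f:
    for chunk in iter(lambda: f.read(chunk_size), b''):
      digest.update(chunk)
  return digest.hexdigest()

print(sha256_of('dm-clrs-0.0.1.tar.gz'))  # compare with the SHA256 digest above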

File details

Details for the file dm_clrs-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: dm_clrs-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 64.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.7.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.9.6

File hashes

Hashes for dm_clrs-0.0.1-py3-none-any.whl

Algorithm     Hash digest
SHA256        e8ae3854de7c3ca37a1d4e6081c3b93e9ca5fe86f621b9ab6b3e7c4c52e336de
MD5           a5816bb3018a51e26c4991d488bd0718
BLAKE2b-256   ee51eda12855eea3ad1e00bbea7eca3ce04274f2c0318c64d334f70c8a7c8a43

See more details on using hashes here.
