Incremental tasks
This is a modular and extensible benchmark of progressively more difficult AI tasks for measuring the learning speed of ML systems.
This repository contains the code to generate the incremental task dataset used in [1].
Installation
This package can also be used as a library. Install it from PyPI (ideally in a virtual environment if you don't want the CLI command to pollute your PATH):
pip install incremental_tasks
This installs the library as well as an executable, generate_tasks_cli.
Task generation
The generate_tasks_cli command can be used to generate sequences directly from
the command line. They are printed to stdout and can be redirected to a file to
quickly create a dataset.
Interactive task solving
A user can try the tasks themselves by running generate_tasks_cli --interactive. This
starts an interactive session that shows random examples from the tasks of
the benchmark, starting with the easiest.
Once a task is solved, the session switches to a new, harder one.
An example interactive session:
$ generate_tasks_cli --interactive
======================================================================
0 1 1 0 0 0 1 1 0 1 1 0 0 0 1 1 0 1 1 {?} {?} {?} {?} {?}
Type you answers (space separated) 0 0 0 1 1
OK!
0 1 1 0 0 0 1 1 0 1 1 0 0 0 1 1 0 1 1 0 0 0 1 1
======================================================================
1 0 0 0 1 0 0 0 {?} {?} {?} {?} {?}
Type you answers (space separated) 0 1 1 1 0
Wrong! right answer was:
1 0 0 0 1 0 0 0 1 0 0 0 1
In [1], the human evaluation scores were computed using this interactive
game with the extra flag --human-eval, which maps every token to a random one
so the player has no prior knowledge about the text and must rely on
pattern matching, as a neural network would.
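The token-remapping idea can be sketched in a few lines. This is an illustrative stand-in, not the library's actual --human-eval implementation; remap_tokens is a hypothetical helper name:

```python
import random

def remap_tokens(sequence, seed=0):
    # Assign each distinct token an opaque random symbol, so only the
    # repetition pattern of the sequence survives (no word-level priors).
    rng = random.Random(seed)
    symbols = [f"T{i}" for i in range(len(set(sequence)))]
    rng.shuffle(symbols)
    mapping = {}
    out = []
    for tok in sequence:
        if tok not in mapping:
            mapping[tok] = symbols[len(mapping)]
        out.append(mapping[tok])
    return out

masked = remap_tokens(["I", "SMELL", "PETER", "?", "I", "SMELL"])
print(masked)
```

Note that repeated tokens map to the same symbol, so the structural pattern a player (or network) must exploit is preserved.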
Library
You can use the library in your own code to generate the data on the fly:
from incremental_tasks import ElementaryLanguageWithWorldDef
task = ElementaryLanguageWithWorldDef()
To generate a single sentence from the task use generate_single:
print(task.generate_single())
# This will print (['I', 'DO', 'NOT', 'SMELL', 'PETER', '.', 'DO', 'I', 'SMELL', 'PETER', '?', 'NO'], [11])
To generate n unique sequences, use generate_tasks (it returns fewer than n if the
task cannot produce enough unique sequences):
task.generate_tasks(max_n_seq=n)
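The capped-uniqueness behaviour can be illustrated with a small self-contained sketch (generate_unique is a hypothetical helper, not the library's implementation):

```python
import random

def generate_unique(generate_one, max_n_seq, max_tries=10_000):
    # Collect up to max_n_seq unique sequences; returns fewer when the
    # underlying task cannot produce enough distinct ones within max_tries.
    seen = set()
    out = []
    for _ in range(max_tries):
        if len(out) >= max_n_seq:
            break
        seq = tuple(generate_one())
        if seq not in seen:
            seen.add(seq)
            out.append(seq)
    return out

# A toy "task" with only 4 possible sequences: asking for 10 yields just 4.
rng = random.Random(0)
unique = generate_unique(lambda: [rng.randint(0, 1), rng.randint(0, 1)],
                         max_n_seq=10)
print(len(unique))
```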
A task can also create a generator that will yield an endless stream of sequences (not necessarily unique):
task.generate_tasks_generator(max_n_seq=None)
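Consuming such an endless stream lazily follows the usual Python generator pattern; a self-contained sketch (sequence_stream is a hypothetical stand-in for the task's generator):

```python
import itertools
import random

def sequence_stream(generate_one):
    # Endless generator: yields one sequence per iteration, duplicates allowed.
    while True:
        yield generate_one()

rng = random.Random(1)
stream = sequence_stream(lambda: [rng.randint(0, 1) for _ in range(3)])
batch = list(itertools.islice(stream, 5))  # take 5 sequences lazily
print(len(batch))
```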
References