A package for creating IL datasets

IL Datasets

Hi, welcome to Imitation Learning (IL) Datasets. Something that always bothered me was how difficult it is to find good weights for an expert, or to create a dataset for different state-of-the-art methods. For this reason, I created this repository in an effort to make it easier for researchers to create datasets using experts from the Hugging Face Hub.


How does it work?

This project works with multithreading, which should accelerate dataset creation. It consists of one Controller class, which requires two functions to work: (i) an enjoy function (for the agent to play and record an episode); and (ii) a collate function (for putting all the episodes together).


The enjoy function receives 3 parameters and returns 1 value:

  • path: str - where the episode is going to be recorded

  • experiment: Context - A class for recording all information (so you can keep the console clean instead of using print)

  • expert: Policy - A model based on the StableBaselines3 BaseAlgorithm.

  • returns: bool - Whether it was successful or not

Obs: To use the model you can call predict; the Policy class already wraps it in the correct form (i.e., the way StableBaselines3 uses it).
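To make the contract concrete, here is a minimal sketch of an enjoy function with that signature. The episode file layout (a pickled dict) and the toy environment loop are assumptions for illustration; in real use you would step a Gym environment and the Context object (unused here) would receive the log messages.

```python
import pickle

def enjoy(path, experiment, expert):
    """Roll out one episode with the expert and record it to `path`.

    Sketch only: a toy loop stands in for a real Gym episode, and the
    pickled-dict layout is an assumption, not the library's format.
    """
    states, actions, total_reward = [], [], 0.0
    obs, internal, done = 0.0, None, False
    while not done:
        # The Policy wrapper exposes a StableBaselines3-style predict.
        action, internal = expert.predict(obs, internal, deterministic=True)
        states.append(obs)
        actions.append(action)
        obs, reward = obs + action, 1.0       # toy dynamics and reward
        total_reward += reward
        done = len(states) >= 5               # toy episode length
    with open(path, "wb") as f:
        pickle.dump({"states": states, "actions": actions,
                     "reward": total_reward}, f)
    # Report success only if the episode reached a reward threshold.
    return total_reward >= 5.0
```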


The collate function receives 2 parameters and returns 1 value:

  • path: str - where it should save the final dataset

  • episodes: list[str] - A list of paths for each file

  • returns: bool - Whether it was successful or not
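A collate function matching this signature could simply merge the per-episode files into one dataset. The pickled-dict episode layout below is an assumption for illustration; the function only has to honor the (path, episodes) -> bool contract.

```python
import pickle

def collate(path, episodes):
    """Merge per-episode files into a single dataset at `path`.

    Sketch only: assumes each episode file is a pickled dict with
    "states" and "actions" lists; the real on-disk format may differ.
    """
    states, actions = [], []
    for ep_path in episodes:
        with open(ep_path, "rb") as f:
            ep = pickle.load(f)
        states.extend(ep["states"])
        actions.extend(ep["actions"])
    with open(path, "wb") as f:
        pickle.dump({"states": states, "actions": actions}, f)
    return True
```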


Requirements

I used Python 3.9 during development.
All other requirements are listed in requirements.txt.


Registering new experts

If you would like to add new experts locally, you can use the Experts class. It uses the following structure:

  • identifier: str - A name for calling the expert.
  • policy: Policy - A dataclass with:
    • name: str - Gym Environment name
    • repo_id: str - Hugging Face repo identification
    • filename: str - Weights file name
    • threshold: float - How much reward should the episode accumulate to be considered good
    • algo: BaseAlgorithm - The class from StableBaselines3
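The structure above can be sketched as a dataclass plus a registry keyed by identifier. Everything here is illustrative: the real Policy class and Experts registration API live in il-datasets and may differ, and the CartPole repo/filename values are placeholders.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Policy:
    """Mirrors the fields listed above (the real class is in il-datasets)."""
    name: str        # Gym environment name
    repo_id: str     # Hugging Face repo identification
    filename: str    # weights file name
    threshold: float # reward an episode must accumulate to be considered good
    algo: Any        # StableBaselines3 algorithm class, e.g. PPO

# Hypothetical registry keyed by identifier; the actual Experts class
# may expose a different registration method.
experts = {
    "cartpole": Policy(
        name="CartPole-v1",
        repo_id="sb3/ppo-CartPole-v1",       # placeholder repo id
        filename="ppo-CartPole-v1.zip",      # placeholder filename
        threshold=475.0,
        algo=None,  # would be stable_baselines3.PPO in real use
    ),
}
```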

Obs: If you are not using StableBaselines3, the expert has to have a predict function that receives:

  • obs: Tensor - Current environment state
  • state: Tensor - Model's internal state
  • deterministic: bool - If it should explore or not
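A non-StableBaselines expert therefore only needs to expose that interface. Below is a hypothetical scripted policy showing the shape of such a predict method; the CartPole-style decision rule is just a stand-in for a real model.

```python
class ScriptedExpert:
    """A non-StableBaselines expert exposing the predict interface above.

    Sketch only: a hand-written rule stands in for a learned model.
    """

    def predict(self, obs, state=None, deterministic=True):
        # Toy CartPole-style rule: push right (1) when the pole angle
        # (assumed to be obs[2]) leans right, otherwise push left (0).
        action = 1 if obs[2] > 0 else 0
        # Return the same (action, state) pair StableBaselines3 returns;
        # a stateless policy just passes `state` through unchanged.
        return action, state
```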

This repository is not complete

Here is a list of the upcoming releases:

  • Collate function support
  • Support for installing as a dependency
  • Module for downloading trajectories from a Hugging Face dataset
  • Create actual documentation
  • Create some examples
  • Create tests

If you like this repository, be sure to check out my other projects:

Development

Academic


