
A package for creating IL datasets


IL Datasets

Hi, welcome to the Imitation Learning (IL) Datasets. Something that always bothered me was how difficult it is to find good weights for an expert, or to create a dataset that works across different state-of-the-art methods. For this reason, I've created this repository in an effort to make it easier for researchers to create datasets using experts hosted on Hugging Face.


How does it work?

This project works with multithreading, which should accelerate dataset creation. It consists of one Controller class, which requires two different functions to work: (i) an enjoy function (for the agent to play and record an episode); and (ii) a collate function (for putting all episodes together).
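The pattern above can be sketched with the standard library alone. This is not the library's actual Controller; it is a minimal stand-in (with stubbed enjoy and collate functions) showing how episodes can be recorded in parallel by a thread pool and then merged:

```python
import os
from concurrent.futures import ThreadPoolExecutor


def enjoy(path: str) -> bool:
    """Stub for the user-supplied enjoy function: record one episode."""
    with open(path, "w") as f:
        f.write("obs,action,reward\n")
    return True


def collate(path: str, episodes: list) -> bool:
    """Stub for the collate function: merge all episode files into one."""
    with open(path, "w") as out:
        for episode_path in episodes:
            with open(episode_path) as f:
                out.write(f.read())
    return True


def create_dataset(out_dir: str, n_episodes: int, workers: int = 4) -> str:
    """Run enjoy once per episode across a thread pool, then collate."""
    paths = [os.path.join(out_dir, f"episode_{i}.csv") for i in range(n_episodes)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        ok = all(pool.map(enjoy, paths))
    dataset = os.path.join(out_dir, "dataset.csv")
    if ok and collate(dataset, paths):
        return dataset
    raise RuntimeError("dataset creation failed")
```

Because each episode is independent, the thread pool scales the recording step without any coordination beyond the final collate.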


The enjoy function receives three parameters and returns one value:

  • path: str - Where the episode is going to be recorded

  • experiment: Context - A class for recording all information (if you don't want to use print, keeping the console clear)

  • expert: Policy - A model based on the Stable Baselines3 BaseAlgorithm

  • returns: bool - Whether the episode was recorded successfully

Note: to use the model you can call predict; the Policy class already wraps it in the correct form (i.e., the same way Stable Baselines3 uses it).
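A minimal enjoy function following that signature might look like the sketch below. The environment loop is stubbed (a real implementation would step a Gym environment until done), and experiment and expert are duck-typed here since the real Context and Policy classes ship with the library:

```python
import json


def enjoy(path, experiment, expert):
    """Sketch of an enjoy function: play one episode with the expert,
    record the transitions to `path`, and report success.

    `experiment` is the logging context and `expert` the policy wrapper;
    the three-step loop stands in for a real environment rollout."""
    try:
        transitions = []
        obs, state = [0.0], None
        for _ in range(3):  # stand-in for `while not done:` over a real env
            action, state = expert.predict(obs, state, deterministic=True)
            transitions.append({"obs": obs, "action": action})
            obs = [obs[0] + 1.0]  # fake next observation
        with open(path, "w") as f:
            json.dump(transitions, f)
        return True
    except Exception:
        return False
```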


The collate function receives two parameters and returns one value:

  • path: str - Where the final dataset should be saved

  • episodes: list[str] - A list of paths, one for each episode file

  • returns: bool - Whether collation was successful
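A collate function with that signature could be as simple as the sketch below, which assumes (as an illustration, not the library's actual format) that each episode was saved as a JSON list of transitions:

```python
import json


def collate(path, episodes):
    """Sketch of a collate function: concatenate every recorded episode
    (stored here as JSON lists of transitions) into one dataset file."""
    try:
        dataset = []
        for episode_path in episodes:
            with open(episode_path) as f:
                dataset.extend(json.load(f))
        with open(path, "w") as f:
            json.dump(dataset, f)
        return True
    except (OSError, json.JSONDecodeError):
        return False
```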


Requirements

I used Python 3.9 during development.
All other requirements are listed in requirements.txt.


Registering new experts

If you would like to add new experts locally, you can use the Experts class. It uses the following structure:

  • identifier: str - A name for calling the expert.
  • policy: Policy - A dataclass with:
    • name: str - Gym environment name
    • repo_id: str - Hugging Face repo identification
    • filename: str - Weights file name
    • threshold: float - The minimum reward an episode should accumulate to be considered good
    • algo: BaseAlgorithm - The algorithm class from Stable Baselines3

Note: if you are not using Stable Baselines3, the expert must provide a predict function that receives:

  • obs: Tensor - The current environment state
  • state: Tensor - The model's internal state
  • deterministic: bool - Whether the action should be deterministic (no exploration)
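A custom expert satisfying that interface can be sketched as below. Returning an (action, state) pair mirrors how Stable Baselines3's predict behaves; treat the exact return shape as an assumption, since the source only names the input parameters:

```python
import random


class RandomExpert:
    """Minimal non-Stable-Baselines expert exposing the documented
    predict(obs, state, deterministic) interface."""

    def __init__(self, n_actions: int, seed: int = 0):
        self.n_actions = n_actions
        self.rng = random.Random(seed)

    def predict(self, obs, state=None, deterministic=False):
        if deterministic:
            action = 0  # e.g. an argmax over policy logits in a real model
        else:
            action = self.rng.randrange(self.n_actions)  # explore
        return action, state
```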

This repository is not complete

Here is a list of features planned for upcoming releases:

  • Collate function support
  • Support for installing as a dependency
  • Module for downloading trajectories from a Hugging Face dataset
  • Create actual documentation
  • Create some examples
  • Create tests

If you like this repository, be sure to check out my other projects:

Development

Academic
