Rapidly exploring random trees with machine learning

RRT-ML

Rapidly exploring random trees with machine learning: learned sampling distributions, a local reinforcement learning controller, and a learned supervised distance function for car-like mobile robots

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. License
  5. Acknowledgments

About The Project

This project unites optimal rapidly exploring random trees (RRT*) with the following machine learning techniques:
  • Learned sampling distributions
  • A local reinforcement learning controller
  • A learned supervised distance function

The experiments are conducted in PyBullet with a car-like mobile robot in a narrow-passage scenario. The code allows training and testing of each machine learning module individually, as well as within the broader RRT* pipeline.

Getting Started

Follow the instructions below to install the package.

Prerequisites

You need to install PyTorch and PyBullet.

Torch

The easiest way is to use conda.

If you have a CUDA-enabled GPU:

conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia

If you want to use CPU only:

conda install pytorch torchvision torchaudio cpuonly -c pytorch

PyBullet

Installing PyBullet with pip requires build tools. I recommend using conda:

conda install -c conda-forge pybullet

Installation

Install the package with pip:

pip install rrt-ml

(back to top)

Usage

You can run experiments with different parameters for each module, either from the command line or from any Python file.

Interface

Run experiments from the command line:

rrt-ml (--rl | --sl | --rrt) (--train | --test) [--hyper] [--config CONFIG]
  1. The first option selects which algorithm to run:
  • --rl: reinforcement learning agent as a local controller
  • --sl: "sample learner" that learns sampling distributions for optimal motion planning with RRT*
  • --rrt: optimal rapidly exploring random tree
  2. The second option controls how it runs:
  • --train: train a machine learning model (RL or SL) or grow the RRT* tree
  • --test: generate results (only available after training)
  3. The third option, --hyper, enables a hyperparameter search. Combined with --train, every configuration in the grid search is trained; combined with --test, results comparing the different models are produced (only available after training).
  4. The fourth option specifies the name of the config. Instructions for creating a config file, or for running it directly without the command line, are given below.

Configuring an Experiment

Create a Python file anywhere, instantiate a MasterConfig object, and give it a name:

from rrt_ml.utilities.configs import MasterConfig


cfg = MasterConfig()

cfg.general.config_name_or_prefix = "MyExperimentConfigName"

Your IDE should auto-complete cfg and show all nested attributes. You can change settings for each algorithm individually:

# Change reinforcement learning agent config
cfg.rl.general.gamma = 0.98
cfg.rl.actor.lr = 0.01
cfg.rl.critic.lr = 0.01

# Change sample learner (beta-cvae) config
cfg.sl.loss.beta = 5
cfg.sl.train.batch_size = 128

# Change RRT* config
cfg.rrt.general.seed = 1
cfg.rrt.sample.goal_prob = 0.05
cfg.rrt.names.rl = 'best'  # Use config name 'best' as controller for RRT
cfg.rrt.names.sl = 'best'  # Use config name 'best' as sample generator
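The nested attribute access shown above (e.g. cfg.rl.actor.lr) is the pattern that makes IDE auto-completion work. As an illustrative sketch only (this is not the rrt-ml source; the class names here are hypothetical), nested dataclasses are one common way such a config object can be built:

```python
# Illustrative sketch of a nested, auto-completable config object.
# These classes are hypothetical stand-ins, not rrt-ml's actual implementation.
from dataclasses import dataclass, field


@dataclass
class ActorConfig:
    lr: float = 0.001  # default learning rate


@dataclass
class RLConfig:
    actor: ActorConfig = field(default_factory=ActorConfig)


@dataclass
class Config:
    rl: RLConfig = field(default_factory=RLConfig)


cfg = Config()
cfg.rl.actor.lr = 0.01  # attribute-style access, checked by the IDE
print(cfg.rl.actor.lr)  # 0.01
```

Because every level is a typed class rather than a plain dict, the IDE can surface the available settings and catch misspelled attribute names before the experiment runs.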

You can also define a grid search over any setting inside the rl, sl, or rrt attributes above. Below is an example of how to set up a grid search over the RL agent's hyperparameters:

cfg.hyperparams.rl.general.gamma = [0.95, 0.96, 0.97, 0.98, 0.99]
cfg.hyperparams.rl.actor.lr = [0.1, 0.01, 0.001]
cfg.hyperparams.rl.net.activ = ['ReLU', 'GeLU']
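To see how many runs a grid like this implies, note that the search takes the Cartesian product of all listed values. A minimal self-contained sketch (independent of rrt-ml) of that expansion:

```python
# Sketch: expanding a hyperparameter grid into individual configurations.
# The grid mirrors the rrt-ml example above; the expansion logic is generic.
from itertools import product

grid = {
    "rl.general.gamma": [0.95, 0.96, 0.97, 0.98, 0.99],
    "rl.actor.lr": [0.1, 0.01, 0.001],
    "rl.net.activ": ["ReLU", "GeLU"],
}

keys = list(grid)
# One dict per combination of values, i.e. one training run each.
configs = [dict(zip(keys, values)) for values in product(*grid.values())]
print(len(configs))  # 5 * 3 * 2 = 30 training runs
```

Grid sizes multiply quickly, so keep the value lists short for expensive settings.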

Now you can save the config to run later from the command line:

cfg.save()

Training an RL agent with this example config from the command line:

rrt-ml --rl --train --config=MyExperimentConfigName

Instead of saving and then running from the command line, you can run it directly from the Python file you are editing by adding:

cfg.run(algorithm_to_run='rl', train_or_test='train', hyperparam_search_or_test=False)

The parameters of the run method mirror the command-line options.
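The correspondence between the CLI flags and the run parameters can be sketched as a small translation function. This is an illustration of the mapping only (the helper cli_to_run_kwargs is hypothetical, not part of rrt-ml):

```python
# Sketch: how the documented CLI flags map onto run(...) keyword arguments.
# cli_to_run_kwargs is a hypothetical helper for illustration.
def cli_to_run_kwargs(flags: dict) -> dict:
    """Translate parsed CLI flags into run() keyword arguments."""
    # Exactly one of --rl / --sl / --rrt selects the algorithm.
    algorithm = next(a for a in ("rl", "sl", "rrt") if flags.get(a))
    # Exactly one of --train / --test selects the mode.
    mode = "train" if flags.get("train") else "test"
    return {
        "algorithm_to_run": algorithm,
        "train_or_test": mode,
        "hyperparam_search_or_test": bool(flags.get("hyper")),
    }


# `rrt-ml --rl --train` corresponds to the run(...) call shown above:
print(cli_to_run_kwargs({"rl": True, "train": True}))
```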

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Acknowledgments

(back to top)
