
safe-autonomy-sims

The safe-autonomy-sims package provides components and tools to build modular, integration-focused Reinforcement Learning environments with Run Time Assurance (RTA). This repo is designed to work hand-in-glove with the corl, safe-autonomy-simulation, and run-time-assurance packages.

Installation

The following instructions detail how to install the safe-autonomy-sims library on your local system. It is recommended to install Python modules within a virtual environment.

The easiest way to install safe-autonomy-sims into your environment is via pip:

pip install safe-autonomy-sims

Usage

The safe-autonomy-sims package provides RL training environments and example training configurations using Gymnasium, PettingZoo, and CoRL. These environments are designed to provide challenge problems for safe autonomous control.

Gym

This package provides single-agent Gymnasium environments, including Docking-v0.

These environments can be built using the gymnasium.make() function:

import gymnasium
import safe_autonomy_sims.gym

# Build the Docking-v0 environment
env = gymnasium.make("Docking-v0")

See the Gymnasium documentation for more information.

PettingZoo

This package also provides multi-agent PettingZoo environments, including MultiDocking-v0.

These environments can be built using the following syntax:

import safe_autonomy_sims.pettingzoo

# Build the MultiDocking-v0 environment
env = safe_autonomy_sims.pettingzoo.MultiDockingEnv()

See the PettingZoo documentation for more information.

CoRL

This package provides several environments designed to use the CoRL library for RL training. The following sections give an overview on using CoRL for training and the provided CoRL-compatible environments.

Training

Training experiments using CoRL are conducted in safe-autonomy-sims via configuration files. These files can be manipulated to define experiment parameters, agent configurations, environments, tasks, policies, and platforms.

The corl package provides a training endpoint script which uses the RLlib reinforcement learning library to train agents in an environment.

As an example, you can launch a training loop for the provided Docking environment using the following command:

# from root of safe-autonomy-sims
python -m corl.train_rl --cfg configs/docking/experiment.yml

Further information on training and experiment configuration can be found in the project documentation.

Environments

This package includes the following CoRL-compatible environments:

Docking

Spacecraft docking scenario where an agent-controlled deputy spacecraft must dock with a stationary chief spacecraft while both orbit a central body. This is accomplished by approaching the chief to within a predefined docking distance while maintaining a safe relative velocity inside that distance. The motion of the deputy spacecraft is governed by the Clohessy-Wiltshire linearized dynamics model. Comes in the following flavors:

  • Docking: Static 1N thrusters in $\pm x, \pm y, \pm z$.

  • Multiagent Docking: Multiple agent-controlled deputy spacecraft, each controlled by static 1N thrusters in $\pm x, \pm y, \pm z$.
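The Clohessy-Wiltshire dynamics and the docking success condition described above can be sketched in plain Python. This is a minimal explicit-Euler sketch; the function names `cw_step` and `docked`, and all parameter values (mean motion `n`, mass `m`, thresholds), are illustrative assumptions, not the environment's actual implementation:

```python
from math import sqrt

def cw_step(state, thrust, n, m, dt):
    """One explicit-Euler step of the Clohessy-Wiltshire linearized
    relative dynamics in the chief-centered Hill frame.
    state = (x, y, z, vx, vy, vz), thrust = (fx, fy, fz)."""
    x, y, z, vx, vy, vz = state
    fx, fy, fz = thrust
    ax = 3 * n**2 * x + 2 * n * vy + fx / m
    ay = -2 * n * vx + fy / m
    az = -n**2 * z + fz / m
    return (x + vx * dt, y + vy * dt, z + vz * dt,
            vx + ax * dt, vy + ay * dt, vz + az * dt)

def docked(state, dock_radius, max_speed):
    """Success check: within the docking distance at a safe relative speed."""
    x, y, z, vx, vy, vz = state
    dist = sqrt(x * x + y * y + z * z)
    speed = sqrt(vx * vx + vy * vy + vz * vz)
    return dist <= dock_radius and speed <= max_speed
```

Note the coupling terms: an unthrusted deputy displaced along x drifts under the $3n^2x$ term, which is why the agent must actively regulate its relative velocity as it closes the distance.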

Inspection

Spacecraft inspection scenario where an agent-controlled deputy spacecraft must inspect points on a stationary chief spacecraft while both orbit a central body. This is accomplished by approaching and navigating around the chief to view all points on a sphere. Points on the sphere can be illuminated by the sun, and only illuminated points can be inspected. Inspection 3D environments assume the deputy always points a sensor toward the chief, while Inspection Six DoF environments allow the deputy to control the orientation of the sensor. The translational motion of the deputy spacecraft is governed by the Clohessy-Wiltshire linearized dynamics model, and the Six DoF environments use a quaternion formulation to model attitude. All have static 1N thrusters in $\pm x, \pm y, \pm z$, and the Six DoF environments also have moment controllers in $\pm x, \pm y, \pm z$. Comes in the following flavors:

  • Translational Inspection: Agent can only control its translational motion. Orientation is assumed to point at the chief. All points are weighted equally.

  • Weighted Translational Inspection: Agent can only control its translational motion. Points are prioritized through a directional unit vector and assigned weights/scores based on their angular distance to this vector. Inspected points are rewarded based on score. Success is determined by reaching a score threshold rather than inspecting all points.

  • Multiagent Translational Inspection: Same as the translational-inspection environment, with multiple agent-controlled deputy spacecraft.

  • Weighted Six DoF Inspection: Same as the weighted-translational-inspection environment, but the agent can control its attitude (the sensor does not always point at the chief).

  • Multiagent Weighted Six DoF Inspection: Same as the weighted-six-dof-inspection environment, with multiple agent-controlled deputy spacecraft.
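The weighted-inspection scoring described above can be sketched in plain Python. The exact weighting function and illumination model used by the environments are not specified here, so treat `point_weights` (linear decay with angular distance) and `illuminated` (sun-facing hemisphere) as illustrative assumptions:

```python
from math import acos, pi, sqrt

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def illuminated(point, sun_dir):
    # A point on the chief's surface is treated as lit when its outward
    # normal has a positive component toward the sun (hypothetical model).
    return dot(point, sun_dir) > 0

def point_weights(points, priority):
    """Weight unit-sphere points by angular distance to a priority direction,
    decaying linearly from 1 (aligned) to 0 (opposite), normalized to sum to 1."""
    mag = sqrt(dot(priority, priority))
    p_hat = tuple(c / mag for c in priority)
    raw = []
    for pt in points:
        cos_angle = max(-1.0, min(1.0, dot(pt, p_hat)))  # clamp for acos
        raw.append((pi - acos(cos_angle)) / pi)
    total = sum(raw)
    return [w / total for w in raw]
```

Under a scheme like this, the accumulated score of inspected points is compared against a threshold to decide success, rather than requiring every point to be inspected.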

Development

If you are interested in contributing to the development of safe-autonomy-sims, the following sections outline the recommended process for setting up a development environment and building the package documentation.

Developer Installation

The safe-autonomy-sims library was developed using the Python packaging tool Poetry. It is recommended to perform a local development installation of this project using Poetry if you plan on contributing.

git clone <safe-autonomy-sims-url>
cd safe-autonomy-sims
poetry install

Poetry will handle installing appropriate dependencies into your environment, if they aren't already installed. Poetry will also install an editable version of safe-autonomy-sims into the environment. For more information on managing Poetry environments, see the official documentation.

Local Documentation

This repository is set up to use MkDocs, a static site generator geared towards building project documentation. Documentation source files are written in Markdown and configured with a single YAML configuration file.
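For orientation, a minimal MkDocs configuration file has the general shape below. This is a generic sketch of the standard mkdocs.yml format, not this project's actual configuration; the page names and theme are placeholders:

```yaml
# mkdocs.yml -- generic minimal example, not this project's real config
site_name: safe-autonomy-sims
nav:
  - Home: index.md
  - API Reference: api.md
theme:
  name: readthedocs
```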

NOTE: In order to properly build the documentation locally, you must first have safe-autonomy-sims and its dependencies installed in your container/environment!

Install the MkDocs modules in a container/virtual environment via Poetry:

poetry install --with docs

To build the documentation locally without serving it, use the following command from within your container/virtual environment:

poetry run mkdocs build

To serve the documentation on a local port, use the following command from within your container/virtual environment:

poetry run mkdocs serve 

Public Release

Approved for public release; distribution is unlimited. Case Number: AFRL-2023-6156

Team

Jamie Cunningham, Umberto Ravaioli, John McCarroll, Kyle Dunlap, Nate Hamilton, Charles Keating, Kochise Bennett, Aditesh Kumar, Kerianne Hobbs
