BARK-ML - Machine Learning for Autonomous Driving
Discrete and continuous environments for autonomous driving, ranging from highway and merging to intersection scenarios.
Gym Environments
Install the BARK-ML package using pip install bark-ml.
Highway Scenario
env = gym.make("highway-v0")
In the highway scenario, the ego agent's goal is a StateLimitGoal on the left lane that is reached once the agent's state is within a pre-defined range. A positive reward (+1) is given for reaching the goal and a negative reward (-1) for colliding or leaving the drivable area.
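The sparse reward scheme described above can be sketched as a small function. This is a simplified stand-in for illustration, not the actual BARK-ML evaluator code:

```python
def sparse_reward(goal_reached: bool, collision: bool, off_drivable_area: bool) -> float:
    """Sketch of the reward scheme: -1 on failure, +1 on success, 0 otherwise."""
    if collision or off_drivable_area:
        return -1.0
    if goal_reached:
        return 1.0
    return 0.0
```

Failure dominates here: a step that both reaches the goal and collides still yields -1.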
The highway scenario can use discrete or continuous actions:
- highway-v0: Continuous highway environment
- highway-v1: Discrete highway environment
Merging Scenario
env = gym.make("merging-v0")
In the merging scenario, the ego agent's goal is a StateLimitGoal on the left lane that is reached once the agent's state is within a pre-defined range. A positive reward (+1) is given for reaching the goal and a negative reward (-1) for colliding or leaving the drivable area.
The merging scenario can use discrete or continuous actions:
- merging-v0: Continuous merging environment
- merging-v1: Discrete merging environment
Unprotected Left Turn
env = gym.make("intersection-v0")
In this scenario, the ego agent's goal is a StateLimitGoal on the right lane that it has to reach. A positive reward (+1) is given for reaching the goal and a negative reward (-1) for colliding or leaving the drivable area.
The intersection scenario can use discrete or continuous actions:
- intersection-v0: Continuous intersection environment
- intersection-v1: Discrete intersection environment
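The continuous (-v0) and discrete (-v1) variants differ only in how actions are produced. A minimal sketch with assumed bounds and an assumed motion-primitive count; the real limits come from env.action_space:

```python
import numpy as np

# Assumed bounds for a [acceleration, steering-rate] pair; these are
# illustrative values, not the environments' actual action limits.
LOW = np.array([-0.5, -0.1])
HIGH = np.array([0.5, 0.1])

def sample_continuous_action(rng: np.random.Generator) -> np.ndarray:
    """Random continuous action, as fed to the -v0 environments."""
    return rng.uniform(low=LOW, high=HIGH)

def sample_discrete_action(rng: np.random.Generator, num_primitives: int = 4) -> int:
    """Random motion-primitive index, as fed to the -v1 environments."""
    return int(rng.integers(num_primitives))
```

In practice you would replace these random samples with the output of a learned policy.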
Getting Started
A complete example using the OpenAI Gym interface:
import gym
import numpy as np
import bark_ml.environments.gym  # registers the BARK-ML environments with Gym

env = gym.make("merging-v0")
initial_state = env.reset()
done = False
while not done:
    # Sample a random continuous action: [acceleration, steering-rate].
    action = np.random.uniform(low=np.array([-0.5, -0.1]),
                               high=np.array([0.5, 0.1]),
                               size=(2,))
    observed_state, reward, done, info = env.step(action)
    print(f"Observed state: {observed_state}, Action: {action}, "
          f"Reward: {reward}, Done: {done}")
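The loop above generalizes to any environment following the classic Gym API. A small helper sketch that rolls out one episode and accumulates the return:

```python
def run_episode(env, policy):
    """Roll out one episode and return the summed reward.

    Assumes the classic Gym API: reset() -> state,
    step(action) -> (state, reward, done, info).
    """
    state = env.reset()
    done, episode_return = False, 0.0
    while not done:
        action = policy(state)
        state, reward, done, _info = env.step(action)
        episode_return += reward
    return episode_return
```

With the BARK-ML environments, policy would map the observed state to a continuous action (for -v0) or a primitive index (for -v1).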
Graph Neural Network Actor-Critic
The graph neural network actor-critic architecture proposed in the paper "Graph Neural Networks and Reinforcement Learning for Behavior Generation in Semantic Environments" can be visualized using
bazel run //experiments:experiment_runner -- --exp_json=/ABSOLUTE_PATH/bark-ml/experiments/configs/phd/01_hyperparams/dnns/merging_large_network.json
and trained using
bazel run //experiments:experiment_runner -- --exp_json=/ABSOLUTE_PATH/bark-ml/experiments/configs/phd/01_hyperparams/dnns/merging_large_network.json --mode=train
If you use BARK-ML and build upon the graph neural network architecture, please cite the following paper:
@inproceedings{Hart2020,
title = {Graph Neural Networks and Reinforcement Learning for Behavior Generation in Semantic Environments},
author = {Patrick Hart and Alois Knoll},
booktitle = {2020 IEEE Intelligent Vehicles Symposium (IV)},
url = {https://arxiv.org/abs/2006.12576},
year = {2020}
}
License
BARK-ML-specific code is distributed under the MIT License.