
Simple yet effective DDPG implementation for continuous action spaces.

Project description

Deep Deterministic Policy Gradient (DDPG) in Tensorflow 2


In reinforcement learning there are two kinds of action space, namely discrete and continuous. A continuous action space represents the continuous movement a robot can perform when actuating. I was biased towards the continuous kind when I had the idea to write this DDPG implementation: continuous control can produce smoother movement, which may be beneficial for driving robotic actuators. DDPG is an approach to do exactly that. The source code is available at https://github.com/samuelmat19/DDPG-tf2

My implementation of DDPG is based on the paper https://arxiv.org/abs/1509.02971, but it is also highly inspired by https://spinningup.openai.com/en/latest/algorithms/ddpg.html. This implementation is simple and can be used as a boilerplate for your needs. It also modifies the original algorithm a bit, mainly to speed up the training process. I would highly recommend the Spinning Up library, as it provides more algorithm options. This repository is suitable if direct modification of the TensorFlow 2 model or a simple training API is preferable.

Several proof-of-concept videos are available in the repository.


Why?

Reinforcement learning is important when it comes to real environments. As there is no single right way to achieve a goal, the agent can be optimized with a reward function instead of being continuously supervised by a human.

In continuous action spaces, the DDPG algorithm shines as one of the best in the field. In contrast to discrete action spaces, continuous action spaces mimic the reality of the physical world.
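At its core, DDPG trains a deterministic actor μ(s) against a critic Q(s, a), where the critic regresses toward the one-step Bellman target y = r + γ · (1 − done) · Q′(s′, μ′(s′)) computed from the target networks. A minimal NumPy sketch of that target (function and argument names are illustrative, not from this package):

```python
import numpy as np

def ddpg_target(rewards, next_q, dones, gamma=0.99):
    """One-step Bellman target for the critic:
    y = r + gamma * (1 - done) * Q'(s', mu'(s'))
    where next_q holds Q'(s', mu'(s')) from the target networks."""
    return rewards + gamma * (1.0 - dones) * next_q
```

Terminal transitions (done = 1) drop the bootstrapped term, so the target reduces to the immediate reward.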

The original implementation is in PyTorch. Additionally, there are several modifications of the original algorithm that may improve it.

Changes from original paper

As mentioned above, there are several changes with different aims:

  • The loss function of the Q-function uses Mean Absolute Error instead of Mean Squared Error. In my experiments, this speeds up training by a large margin. One possible cause is that Mean Squared Error amplifies errors above one and shrinks errors below one (the x^2 function). This might be unfavorable for the Q-function update, as all value ranges should be treated similarly.
  • Epsilon-greedy exploration is implemented in addition to the policy's action. This speeds up exploration: the agent can sometimes get stuck with one policy action, and the random actions introduced by epsilon-greedy help it escape. As DDPG is off-policy, this is safe. Both epsilon-greedy and noise are turned off during testing.
  • Unbalanced replay buffer. Recent entries in the replay buffer are more likely to be sampled than earlier ones. This reduces the repetition of the agent's current mistakes.
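The three modifications above can be sketched roughly as follows. All names are illustrative, and the linear recency weighting is one simple choice of "unbalanced" sampling; this package may implement the details differently:

```python
import numpy as np

rng = np.random.default_rng(0)

def critic_loss(targets, q_values):
    # Mean Absolute Error instead of the usual Mean Squared Error
    return np.mean(np.abs(targets - q_values))

def choose_action(policy_action, action_dim, epsilon, act_low=-1.0, act_high=1.0):
    # Epsilon-greedy on top of the deterministic policy output:
    # with probability epsilon, take a uniformly random action instead
    if rng.random() < epsilon:
        return rng.uniform(act_low, act_high, size=action_dim)
    return policy_action

def sample_indices(buffer_len, batch_size, ):
    # Recency-weighted sampling: entry i gets probability proportional
    # to i + 1, so later (more recent) entries are drawn more often
    weights = np.arange(1, buffer_len + 1, dtype=np.float64)
    return rng.choice(buffer_len, size=batch_size, p=weights / weights.sum())
```

With epsilon set to 0 and the weights made uniform, this degenerates back to the plain DDPG behavior, which makes the tweaks easy to toggle for ablation.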

Requirements

pip3 install ddpg-tf2

Training

ddpg-tf2 --train True --use-noise True

After every epoch, the networks' weights are stored in the checkpoints directory defined in common_definitions.py. There are four weights files, one for each network: the critic network, the actor network, the target critic, and the target actor. Additionally, TensorBoard is used to track the resulting losses and rewards.
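The checkpointing scheme amounts to one weights file per network. A sketch of what that could look like with the standard Keras save_weights API (the directory and file names here are hypothetical; the real values live in common_definitions.py):

```python
import os

# Hypothetical names; the actual paths are defined in common_definitions.py
CHECKPOINTS_DIR = "checkpoints"
NETWORK_NAMES = ("actor", "critic", "target_actor", "target_critic")

def checkpoint_paths(base_dir=CHECKPOINTS_DIR):
    # One weights file per network, as described above
    return {name: os.path.join(base_dir, name + ".h5") for name in NETWORK_NAMES}

def save_all(networks, base_dir=CHECKPOINTS_DIR):
    # `networks` maps name -> tf.keras.Model; save_weights is standard Keras
    os.makedirs(base_dir, exist_ok=True)
    for name, path in checkpoint_paths(base_dir).items():
        networks[name].save_weights(path)
```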

The pretrained weights can be retrieved from these links:

Testing (Sampling)

Testing is done with the same executable, but with the parameters below. If weights are available in the checkpoint folder, they are loaded automatically.

ddpg-tf2 --train False --use-noise False

Future improvements

  • Improve documentation
  • GitHub Workflow
  • Publish to PyPI

CONTRIBUTING

To contribute to the project, follow the steps below. Anyone who contributes will be recognized and mentioned here!

Contributions to the project are made using the "Fork & Pull" model. The typical steps would be:

  1. Create an account on GitHub.
  2. Fork this repository.
  3. Make a local clone.
  4. Make changes on the local copy.
  5. Commit the changes: git commit -m "my message"
  6. Push to your GitHub account: git push origin
  7. Create a Pull Request (PR) from your GitHub fork (go to your fork's webpage, click on "Pull Request", and add a message describing your proposal).

LICENSE

This open-source project is licensed under the MIT License.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

ddpg-tf2-1.0.1.tar.gz (10.6 kB view details)

Uploaded Source

Built Distribution

ddpg_tf2-1.0.1-py3-none-any.whl (12.3 kB view details)

Uploaded Python 3

File details

Details for the file ddpg-tf2-1.0.1.tar.gz.

File metadata

  • Download URL: ddpg-tf2-1.0.1.tar.gz
  • Upload date:
  • Size: 10.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.13

File hashes

Hashes for ddpg-tf2-1.0.1.tar.gz

  • SHA256: 45ca3f4a8f16cd2afad67b3c0ef8c3f6919401098135258da214ee6a50aa8ca9
  • MD5: 954ba8af5d47dbadb17434d0c0d33331
  • BLAKE2b-256: 3329b39f32129cf4686aaf72f257fc8122f5ba87406cb62a3960fdc5cd440d23


File details

Details for the file ddpg_tf2-1.0.1-py3-none-any.whl.

File metadata

  • Download URL: ddpg_tf2-1.0.1-py3-none-any.whl
  • Upload date:
  • Size: 12.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.13

File hashes

Hashes for ddpg_tf2-1.0.1-py3-none-any.whl

  • SHA256: e325b931feed19a97a89ff020bef51fe7c588e6c14721ef440554969a5439ccd
  • MD5: cf97762844f8720f63f04ec8ad82092a
  • BLAKE2b-256: 94ff81ae0f4ff21cebb5ab7737c3ad828a82cf2eba8f36cfe8e1f1f99f1f54ea

