
Decision AI Engine

Project description



Updated on 2023.02.17 DI-engine-v0.4.6

Introduction to DI-engine

DI-engine doc | Chinese documentation

DI-engine is a generalized decision intelligence engine. It supports various deep reinforcement learning algorithms (see the Algorithm Versatility section below):

  • Most basic DRL algorithms, such as DQN, PPO, SAC, R2D2, IMPALA
  • Multi-agent RL algorithms like QMIX, MAPPO, ACE
  • Imitation learning algorithms (BC/IRL/GAIL), such as GAIL, SQIL, Guided Cost Learning, Implicit Behavioral Cloning
  • Exploration algorithms like HER, RND, ICM, NGU
  • Offline RL algorithms: CQL, TD3BC, Decision Transformer
  • Model-based RL algorithms: SVG, MVE, STEVE / MBPO, DDPPO

DI-engine aims to standardize different decision intelligence environments and applications. Various training pipelines and customized decision AI applications are also supported.


DI-engine also provides system optimizations and designs for efficient and robust large-scale RL training.


Have fun with exploration and exploitation.


Installation

You can simply install DI-engine from PyPI with the following command:

pip install DI-engine

If you use Anaconda or Miniconda, you can install DI-engine from the opendilab conda channel with the following command:

conda install -c opendilab di-engine

For more information about installation, you can refer to the installation documentation.
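Once installed, you can sanity-check the installed version directly from Python. The snippet below is a minimal check using only the standard library; it relies on the distribution name DI-engine as published on PyPI.

    # Check that DI-engine is installed and print its version.
    from importlib.metadata import version

    print(version("DI-engine"))  # expect 0.4.6 for this release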

Our Docker Hub repository can be found here; we provide a base image and environment images with common RL environments:

  • base: opendilab/ding:nightly
  • atari: opendilab/ding:nightly-atari
  • mujoco: opendilab/ding:nightly-mujoco
  • dmc: opendilab/ding:nightly-dmc2gym
  • metaworld: opendilab/ding:nightly-metaworld
  • smac: opendilab/ding:nightly-smac
  • grf: opendilab/ding:nightly-grf

The detailed documentation is hosted at doc | Chinese documentation.

Quick Start

3 Minutes Kickoff

3 Minutes Kickoff (colab)

How to migrate a new RL Env | Chinese version

How to customize the neural network model used by a policy | Chinese version

Examples of evaluating/deploying an RL policy

Bonus: Train an RL agent with one line of code:

ding -m serial -e cartpole -p dqn -s 0
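The one-line command above is a thin wrapper around the serial training entry. The snippet below is a minimal Python sketch of the same run; it assumes the ding.entry.serial_pipeline entry point and the CartPole DQN config shipped in dizoo, so check the documentation for the exact import paths in your version.

    # Minimal sketch: train DQN on CartPole via the serial pipeline (assumed API).
    from ding.entry import serial_pipeline
    from dizoo.classic_control.cartpole.config.cartpole_dqn_config import (
        cartpole_dqn_config, cartpole_dqn_create_config,
    )

    if __name__ == "__main__":
        # Roughly equivalent to: ding -m serial -e cartpole -p dqn -s 0
        serial_pipeline((cartpole_dqn_config, cartpole_dqn_create_config), seed=0)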

Feature

Algorithm Versatility

discrete means discrete action space, which is the only kind of action-space label used for the basic DRL algorithms (No. 1-18)

continuous means continuous action space, which is the only kind of action-space label used for the basic DRL algorithms (No. 1-18)

hybrid means hybrid (discrete + continuous) action space (No. 1-18)

dist means Distributed Reinforcement Learning

MARL means Multi-Agent Reinforcement Learning

exp means exploration mechanisms in Reinforcement Learning

IL means Imitation Learning

offline means Offline Reinforcement Learning

mbrl means Model-Based Reinforcement Learning

other means algorithms from other sub-directions, usually used as plug-ins in the whole pipeline

P.S.: The .py files in the Runnable Demo column can be found in dizoo.

No. | Algorithm | Label | Doc and Implementation | Runnable Demo
1 | DQN | discrete | DQN doc, DQN doc (Chinese), policy/dqn | python3 -u cartpole_dqn_main.py / ding -m serial -c cartpole_dqn_config.py -s 0
2 | C51 | discrete | C51 doc, policy/c51 | ding -m serial -c cartpole_c51_config.py -s 0
3 | QRDQN | discrete | QRDQN doc, policy/qrdqn | ding -m serial -c cartpole_qrdqn_config.py -s 0
4 | IQN | discrete | IQN doc, policy/iqn | ding -m serial -c cartpole_iqn_config.py -s 0
5 | FQF | discrete | FQF doc, policy/fqf | ding -m serial -c cartpole_fqf_config.py -s 0
6 | Rainbow | discrete | Rainbow doc, policy/rainbow | ding -m serial -c cartpole_rainbow_config.py -s 0
7 | SQL | discrete, continuous | SQL doc, policy/sql | ding -m serial -c cartpole_sql_config.py -s 0
8 | R2D2 | dist, discrete | R2D2 doc, policy/r2d2 | ding -m serial -c cartpole_r2d2_config.py -s 0
9 | PG | discrete | PG doc, policy/pg | ding -m serial -c cartpole_pg_config.py -s 0
10 | A2C | discrete | A2C doc, policy/a2c | ding -m serial -c cartpole_a2c_config.py -s 0
11 | PPO/MAPPO | discrete, continuous, MARL | PPO doc, policy/ppo | python3 -u cartpole_ppo_main.py / ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0
12 | PPG | discrete | PPG doc, policy/ppg | python3 -u cartpole_ppg_main.py
13 | ACER | discrete, continuous | ACER doc, policy/acer | ding -m serial -c cartpole_acer_config.py -s 0
14 | IMPALA | dist, discrete | IMPALA doc, policy/impala | ding -m serial -c cartpole_impala_config.py -s 0
15 | DDPG/PADDPG | continuous, hybrid | DDPG doc, policy/ddpg | ding -m serial -c pendulum_ddpg_config.py -s 0
16 | TD3 | continuous, hybrid | TD3 doc, policy/td3 | python3 -u pendulum_td3_main.py / ding -m serial -c pendulum_td3_config.py -s 0
17 | D4PG | continuous | D4PG doc, policy/d4pg | python3 -u pendulum_d4pg_config.py
18 | SAC/[MASAC] | discrete, continuous, MARL | SAC doc, policy/sac | ding -m serial -c pendulum_sac_config.py -s 0
19 | PDQN | hybrid | policy/pdqn | ding -m serial -c gym_hybrid_pdqn_config.py -s 0
20 | MPDQN | hybrid | policy/pdqn | ding -m serial -c gym_hybrid_mpdqn_config.py -s 0
21 | HPPO | hybrid | policy/ppo | ding -m serial_onpolicy -c gym_hybrid_hppo_config.py -s 0
22 | QMIX | MARL | QMIX doc, policy/qmix | ding -m serial -c smac_3s5z_qmix_config.py -s 0
23 | COMA | MARL | COMA doc, policy/coma | ding -m serial -c smac_3s5z_coma_config.py -s 0
24 | QTran | MARL | policy/qtran | ding -m serial -c smac_3s5z_qtran_config.py -s 0
25 | WQMIX | MARL | WQMIX doc, policy/wqmix | ding -m serial -c smac_3s5z_wqmix_config.py -s 0
26 | CollaQ | MARL | CollaQ doc, policy/collaq | ding -m serial -c smac_3s5z_collaq_config.py -s 0
27 | MADDPG | MARL | MADDPG doc, policy/ddpg | ding -m serial -c ant_maddpg_config.py -s 0
28 | GAIL | IL | GAIL doc, reward_model/gail | ding -m serial_gail -c cartpole_dqn_gail_config.py -s 0
29 | SQIL | IL | SQIL doc, entry/sqil | ding -m serial_sqil -c cartpole_sqil_config.py -s 0
30 | DQFD | IL | DQFD doc, policy/dqfd | ding -m serial_dqfd -c cartpole_dqfd_config.py -s 0
31 | R2D3 | IL | R2D3 doc, R2D3 doc (Chinese), policy/r2d3 | python3 -u pong_r2d3_r2d2expert_config.py
32 | Guided Cost Learning | IL | Guided Cost Learning doc (Chinese), reward_model/guided_cost | python3 lunarlander_gcl_config.py
33 | TREX | IL | TREX doc, reward_model/trex | python3 mujoco_trex_main.py
34 | Implicit Behavioral Cloning (DFO + MCMC) | IL | policy/ibc, model/template/ebm | python3 d4rl_ibc_main.py -s 0 -c pen_human_ibc_mcmc_config.py
35 | BCO | IL | entry/bco | python3 -u cartpole_bco_config.py
36 | HER | exp | HER doc, reward_model/her | python3 -u bitflip_her_dqn.py
37 | RND | exp | RND doc, reward_model/rnd | python3 -u cartpole_rnd_onppo_config.py
38 | ICM | exp | ICM doc, ICM doc (Chinese), reward_model/icm | python3 -u cartpole_ppo_icm_config.py
39 | CQL | offline | CQL doc, policy/cql | python3 -u d4rl_cql_main.py
40 | TD3BC | offline | TD3BC doc, policy/td3_bc | python3 -u d4rl_td3_bc_main.py
41 | Decision Transformer | offline | policy/dt | python3 -u d4rl_dt_main.py
42 | MBSAC (SAC+MVE+SVG) | continuous, mbrl | policy/mbpolicy/mbsac | python3 -u pendulum_mbsac_mbpo_config.py / python3 -u pendulum_mbsac_ddppo_config.py
43 | STEVESAC (SAC+STEVE+SVG) | continuous, mbrl | policy/mbpolicy/mbsac | python3 -u pendulum_stevesac_mbpo_config.py
44 | MBPO | mbrl | MBPO doc, world_model/mbpo | python3 -u pendulum_sac_mbpo_config.py
45 | DDPPO | mbrl | world_model/ddppo | python3 -u pendulum_mbsac_ddppo_config.py
46 | PER | other | worker/replay_buffer | rainbow demo
47 | GAE | other | rl_utils/gae | ppo demo
48 | ST-DIM | other | torch_utils/loss/contrastive_loss | ding -m serial -c cartpole_dqn_stdim_config.py -s 0
49 | PLR | other | PLR doc, data/level_replay/level_sampler | python3 -u bigfish_plr_config.py -s 0
50 | PCGrad | other | torch_utils/optimizer_helper/PCGrad | python3 -u multi_mnist_pcgrad_main.py -s 0
51 | BDQ | other | policy/bdq | python3 -u hopper_bdq_config.py

Environment Versatility

No. | Environment | Label | Visualization | Code and Doc Links
1 | Atari | discrete | original | dizoo link, env tutorial, env guide (Chinese)
2 | box2d/bipedalwalker | continuous | original | dizoo link, env tutorial, env guide (Chinese)
3 | box2d/lunarlander | discrete | original | dizoo link, env tutorial, env guide (Chinese)
4 | classic_control/cartpole | discrete | original | dizoo link, env tutorial, env guide (Chinese)
5 | classic_control/pendulum | continuous | original | dizoo link, env tutorial, env guide (Chinese)
6 | competitive_rl | discrete, selfplay | original | dizoo link, env guide (Chinese)
7 | gfootball | discrete, sparse, selfplay | original | dizoo link, env tutorial, env guide (Chinese)
8 | minigrid | discrete, sparse | original | dizoo link, env tutorial, env guide (Chinese)
9 | MuJoCo | continuous | original | dizoo link, env tutorial, env guide (Chinese)
10 | PettingZoo | discrete, continuous, marl | original | dizoo link, env tutorial, env guide (Chinese)
11 | overcooked | discrete, marl | original | dizoo link, env tutorial
12 | procgen | discrete | original | dizoo link, env tutorial, env guide (Chinese)
13 | pybullet | continuous | original | dizoo link, env guide (Chinese)
14 | smac | discrete, marl, selfplay, sparse | original | dizoo link, env tutorial, env guide (Chinese)
15 | d4rl | offline | original | dizoo link, env guide (Chinese)
16 | league_demo | discrete, selfplay | original | dizoo link
17 | pomdp atari | discrete | - | dizoo link
18 | bsuite | discrete | original | dizoo link, env tutorial
19 | ImageNet | IL | original | dizoo link, env guide (Chinese)
20 | slime_volleyball | discrete, selfplay | original | dizoo link, env tutorial, env guide (Chinese)
21 | gym_hybrid | hybrid | original | dizoo link, env tutorial, env guide (Chinese)
22 | GoBigger | hybrid, marl, selfplay | original | dizoo link, env tutorial, env guide (Chinese)
23 | gym_soccer | hybrid | original | dizoo link, env guide (Chinese)
24 | multiagent_mujoco | continuous, marl | original | dizoo link, env guide (Chinese)
25 | bitflip | discrete, sparse | original | dizoo link, env guide (Chinese)
26 | sokoban | discrete | Game 2 | dizoo link, env tutorial, env guide (Chinese)
27 | gym_anytrading | discrete | original | dizoo link, env guide (Chinese)
28 | mario | discrete | original | dizoo link, env tutorial, env guide (Chinese)
29 | dmc2gym | continuous | original | dizoo link, env tutorial, env guide (Chinese)
30 | evogym | continuous | original | dizoo link, env tutorial, env guide (Chinese)
31 | gym-pybullet-drones | continuous | original | dizoo link, env guide (Chinese)
32 | beergame | discrete | original | dizoo link, env guide (Chinese)
33 | classic_control/acrobot | discrete | original | dizoo link, env guide (Chinese)
34 | box2d/car_racing | discrete, continuous | original | dizoo link, env guide (Chinese)
35 | metadrive | continuous | original | dizoo link, env guide (Chinese)
discrete means discrete action space

continuous means continuous action space

hybrid means hybrid (discrete + continuous) action space

MARL means multi-agent RL environment

sparse means an environment with sparse rewards, which typically requires dedicated exploration

offline means offline RL environment

IL means an environment used as an imitation learning or supervised learning dataset

selfplay means an environment that supports agent-vs-agent battles

P.S. Some Atari environments, such as MontezumaRevenge, are also of the sparse-reward type.
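To make the unified environment interface concrete, here is a small sketch that wraps a Gym environment with DI-engine's env wrapper and rolls out one episode with random actions. It assumes ding.envs.DingEnvWrapper, a local gym installation, and the (obs, reward, done, info) timestep fields; treat it as illustrative rather than canonical.

    # Illustrative sketch of the unified env interface (assumed DingEnvWrapper API).
    import gym
    from ding.envs import DingEnvWrapper

    raw_env = gym.make('CartPole-v0')
    env = DingEnvWrapper(raw_env)

    obs = env.reset()
    done, episode_return = False, 0.0
    while not done:
        action = raw_env.action_space.sample()  # random policy, just for illustration
        timestep = env.step(action)             # namedtuple: (obs, reward, done, info)
        obs, done = timestep.obs, timestep.done
        episode_return += float(timestep.reward)
    env.close()
    print('episode return:', episode_return)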

Feedback and Contribution

We appreciate all feedback and contributions that improve DI-engine, in both algorithms and system design. CONTRIBUTING.md provides the necessary information.

Supporters

↳ Stargazers

Stargazers repo roster for @opendilab/DI-engine

↳ Forkers

Forkers repo roster for @opendilab/DI-engine

Citation

@misc{ding,
    title={{DI-engine: OpenDILab} Decision Intelligence Engine},
    author={DI-engine Contributors},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/opendilab/DI-engine}},
    year={2021},
}

License

DI-engine is released under the Apache 2.0 license.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

DI-engine-0.4.6.tar.gz (1.3 MB)

Uploaded Source

Built Distribution

DI_engine-0.4.6-py3-none-any.whl (2.0 MB)

Uploaded Python 3

File details

Details for the file DI-engine-0.4.6.tar.gz.

File metadata

  • Download URL: DI-engine-0.4.6.tar.gz
  • Upload date:
  • Size: 1.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.5.0 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.0 CPython/3.8.2

File hashes

Hashes for DI-engine-0.4.6.tar.gz

Algorithm | Hash digest
SHA256 | 37fcb476bc49e2957deb0426ab2ee4e53db6f5e16b592200e4e5166c97259c27
MD5 | 50f2da0749df606b6c7b7a6f0d898e32
BLAKE2b-256 | fd02e84f2a0006c2040739cb2e8c19dc4dfcfac5eb8e9bc8305eeddd69ecf5af

See more details on using hashes here.
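For example, the SHA256 digest of a downloaded source archive can be compared against the value listed above with Python's standard hashlib module:

    # Verify the downloaded sdist against the SHA256 digest listed above.
    import hashlib

    expected = "37fcb476bc49e2957deb0426ab2ee4e53db6f5e16b592200e4e5166c97259c27"
    with open("DI-engine-0.4.6.tar.gz", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    print("OK" if digest == expected else "hash mismatch")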

File details

Details for the file DI_engine-0.4.6-py3-none-any.whl.

File metadata

  • Download URL: DI_engine-0.4.6-py3-none-any.whl
  • Upload date:
  • Size: 2.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.5.0 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.0 CPython/3.8.2

File hashes

Hashes for DI_engine-0.4.6-py3-none-any.whl

Algorithm | Hash digest
SHA256 | fb652414b1a0f2dccccac4bd2ed82c0f722c02a98c06414d5cc5dc236a3fafa0
MD5 | fa404ddc9a949f39fd5295589c862d61
BLAKE2b-256 | 15d5e1c340621f3b56d744f6f0d25180098bd729f7e925f4a0fc9088c59deaa1

See more details on using hashes here.
