Decision AI Engine
Project description
Updated on 2022.09.08 (DI-engine v0.4.2)
Introduction to DI-engine
DI-engine is a generalized decision intelligence engine. It supports various deep reinforcement learning algorithms:
- Most basic DRL algorithms, such as DQN, PPO, SAC, R2D2, IMPALA
- Multi-agent RL algorithms like QMIX, MAPPO
- Imitation learning algorithms (BC/IRL/GAIL), such as GAIL, SQIL, Guided Cost Learning, and Implicit Behavioral Cloning
- Exploration algorithms like HER, RND, ICM, NGU
- Offline RL algorithms: CQL, TD3BC, Decision Transformer
- Model-based RL algorithms: SVG, MVE, STEVE / MBPO, DDPPO
DI-engine aims to standardize different Decision Intelligence environments and applications. Various training pipelines and customized decision AI applications are also supported.
- Traditional academic environments
- Real world decision AI applications
- DI-star: Decision AI in StarCraft II
- DI-drive: Auto-driving platform
- GoBigger: Multi-Agent Decision Intelligence Environment
- DI-smartcross: Decision AI in Traffic Light Control
- DI-bioseq: Decision AI in Biological Sequence Prediction and Searching
- Research paper
- InterFuser: Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer
- General nested data lib
- treevalue: Tree-nested data structure
- DI-treetensor: Tree-nested PyTorch tensor Lib
- Docs and Tutorials
- DI-engine-docs
- awesome-model-based-RL: A curated list of awesome Model-Based RL resources
- awesome-exploration-RL: A curated list of awesome exploration RL resources
- awesome-decision-transformer: A curated list of Decision Transformer resources
DI-engine also has some system optimization and design for efficient and robust large-scale RL training:
- DI-orchestrator: RL Kubernetes Custom Resource and Operator Lib
- DI-hpc: RL HPC OP Lib
- DI-store: RL Object Store
Have fun with exploration and exploitation.
Outline
- Introduction to DI-engine
- Outline
- Installation
- Quick Start
- Feature
- Feedback and Contribution
- Supporters
- Citation
- License
Installation
You can simply install DI-engine from PyPI with the following command:
pip install DI-engine
If you use Anaconda or Miniconda, you can install DI-engine from the opendilab conda channel with the following command:
conda install -c opendilab di-engine
For more information about installation, refer to the installation documentation.
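After installing, a quick import check confirms the package is available (a minimal sketch; DI-engine's Python import name is `ding`, while the distribution on PyPI is named `DI-engine`):

```python
# Minimal installation check (sketch). DI-engine's import name is `ding`;
# the installed distribution is named `DI-engine` on PyPI.
from importlib.metadata import version

import ding  # should import without error if installation succeeded

print(version("DI-engine"))  # expected to print 0.4.2 for this release
```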
Our DockerHub repository can be found here. We provide a base image and env images with common RL environments:
- base: opendilab/ding:nightly
- atari: opendilab/ding:nightly-atari
- mujoco: opendilab/ding:nightly-mujoco
- dmc: opendilab/ding:nightly-dmc2gym
- metaworld: opendilab/ding:nightly-metaworld
- smac: opendilab/ding:nightly-smac
- grf: opendilab/ding:nightly-grf
The detailed documentation is hosted at doc (English) | Chinese doc.
Quick Start
How to migrate a new RL Env (English | Chinese)
Bonus: Train an RL agent with one line of code:
ding -m serial -e cartpole -p dqn -s 0
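The same experiment can also be launched from Python. The sketch below uses DI-engine's serial training entry with the CartPole DQN config shipped in dizoo; the module paths follow the v0.4.x layout and may move in later versions:

```python
# Sketch: Python equivalent of `ding -m serial -e cartpole -p dqn -s 0`.
# Assumes the dizoo config layout of DI-engine v0.4.x.
from ding.entry import serial_pipeline
from dizoo.classic_control.cartpole.config.cartpole_dqn_config import (
    cartpole_dqn_config,         # main experiment config
    cartpole_dqn_create_config,  # env/policy registration config
)

if __name__ == "__main__":
    # seed=0 mirrors the `-s 0` flag of the one-line command above
    serial_pipeline((cartpole_dqn_config, cartpole_dqn_create_config), seed=0)
```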
Feature
Algorithm Versatility
No. | Algorithm | Doc and Implementation | Runnable Demo
---|---|---|---
1 | DQN | DQN doc (English/Chinese), policy/dqn | python3 -u cartpole_dqn_main.py / ding -m serial -c cartpole_dqn_config.py -s 0
2 | C51 | policy/c51 | ding -m serial -c cartpole_c51_config.py -s 0
3 | QRDQN | policy/qrdqn | ding -m serial -c cartpole_qrdqn_config.py -s 0
4 | IQN | policy/iqn | ding -m serial -c cartpole_iqn_config.py -s 0
5 | FQF | policy/fqf | ding -m serial -c cartpole_fqf_config.py -s 0
6 | Rainbow | policy/rainbow | ding -m serial -c cartpole_rainbow_config.py -s 0
7 | SQL | policy/sql | ding -m serial -c cartpole_sql_config.py -s 0
8 | R2D2 | policy/r2d2 | ding -m serial -c cartpole_r2d2_config.py -s 0
9 | A2C | policy/a2c | ding -m serial -c cartpole_a2c_config.py -s 0
10 | PPO/MAPPO | policy/ppo | python3 -u cartpole_ppo_main.py / ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0
11 | PPG | policy/ppg | python3 -u cartpole_ppg_main.py
12 | ACER | policy/acer | ding -m serial -c cartpole_acer_config.py -s 0
13 | IMPALA | policy/impala | ding -m serial -c cartpole_impala_config.py -s 0
14 | DDPG/PADDPG | policy/ddpg | ding -m serial -c pendulum_ddpg_config.py -s 0
15 | TD3 | policy/td3 | python3 -u pendulum_td3_main.py / ding -m serial -c pendulum_td3_config.py -s 0
16 | D4PG | policy/d4pg | python3 -u pendulum_d4pg_config.py
17 | SAC/MASAC | policy/sac | ding -m serial -c pendulum_sac_config.py -s 0
18 | PDQN | policy/pdqn | ding -m serial -c gym_hybrid_pdqn_config.py -s 0
19 | MPDQN | policy/pdqn | ding -m serial -c gym_hybrid_mpdqn_config.py -s 0
20 | HPPO | policy/ppo | ding -m serial_onpolicy -c gym_hybrid_hppo_config.py -s 0
21 | QMIX | policy/qmix | ding -m serial -c smac_3s5z_qmix_config.py -s 0
22 | COMA | policy/coma | ding -m serial -c smac_3s5z_coma_config.py -s 0
23 | QTran | policy/qtran | ding -m serial -c smac_3s5z_qtran_config.py -s 0
24 | WQMIX | policy/wqmix | ding -m serial -c smac_3s5z_wqmix_config.py -s 0
25 | CollaQ | policy/collaq | ding -m serial -c smac_3s5z_collaq_config.py -s 0
26 | GAIL | reward_model/gail | ding -m serial_gail -c cartpole_dqn_gail_config.py -s 0
27 | SQIL | entry/sqil | ding -m serial_sqil -c cartpole_sqil_config.py -s 0
28 | DQFD | policy/dqfd | ding -m serial_dqfd -c cartpole_dqfd_config.py -s 0
29 | R2D3 | R2D3 doc (Chinese), policy/r2d3 | python3 -u pong_r2d3_r2d2expert_config.py
30 | Guided Cost Learning | reward_model/guided_cost | python3 lunarlander_gcl_config.py
31 | TREX | reward_model/trex | python3 mujoco_trex_main.py
32 | Implicit Behavioral Cloning (DFO+MCMC) | policy/ibc & model/template/ebm | python3 d4rl_ibc_main.py -s 0 -c pen_human_ibc_mcmc_config.py
33 | BCO | entry/bco | python3 -u cartpole_bco_config.py
34 | HER | reward_model/her | python3 -u bitflip_her_dqn.py
35 | RND | reward_model/rnd | python3 -u cartpole_rnd_onppo_config.py
36 | ICM | ICM doc (Chinese), reward_model/icm | python3 -u cartpole_ppo_icm_config.py
37 | CQL | policy/cql | python3 -u d4rl_cql_main.py
38 | TD3BC | policy/td3_bc | python3 -u mujoco_td3_bc_main.py
39 | MBSAC (SAC+MVE+SVG) | policy/mbpolicy/mbsac | python3 -u pendulum_mbsac_mbpo_config.py / python3 -u pendulum_mbsac_ddppo_config.py
40 | STEVESAC (SAC+STEVE+SVG) | policy/mbpolicy/mbsac | python3 -u pendulum_stevesac_mbpo_config.py
41 | MBPO | world_model/mbpo | python3 -u pendulum_sac_mbpo_config.py
42 | DDPPO | world_model/ddppo | python3 -u pendulum_mbsac_ddppo_config.py
43 | PER | worker/replay_buffer | rainbow demo
44 | GAE | rl_utils/gae | ppo demo
45 | ST-DIM | torch_utils/loss/contrastive_loss | ding -m serial -c cartpole_dqn_stdim_config.py -s 0
46 | PLR | data/level_replay/level_sampler | python3 -u bigfish_plr_config.py -s 0
Label key (each algorithm carries one or more of the following labels):
- discrete: discrete action space; the only such label among the classic DRL algorithms (No. 1-18)
- continuous: continuous action space; the only such label among the classic DRL algorithms (No. 1-18)
- hybrid: hybrid (discrete + continuous) action space (No. 1-18)
- dist: distributed training (collector-learner parallel) RL algorithm
- MARL: multi-agent RL algorithm
- exp: RL algorithm related to exploration and sparse reward
- IL: imitation learning, including behaviour cloning, inverse RL, and adversarial structured IL
- offline: offline RL algorithm
- MBRL: model-based RL algorithm
- other: other sub-direction algorithm, usually used as a plug-in in the whole pipeline
P.S.: The .py files listed under Runnable Demo can be found in dizoo.
Environment Versatility
No. | Environment | Code and Doc Links
---|---|---
1 | atari | code link, env tutorial, env guide (Chinese)
2 | box2d/bipedalwalker | dizoo link, env guide (Chinese)
3 | box2d/lunarlander | dizoo link, env guide (Chinese)
4 | classic_control/cartpole | dizoo link, env guide (Chinese)
5 | classic_control/pendulum | dizoo link, env guide (Chinese)
6 | competitive_rl | dizoo link, env guide (Chinese)
7 | gfootball | dizoo link, env guide (Chinese)
8 | minigrid | dizoo link, env guide (Chinese)
9 | mujoco | dizoo link, env guide (Chinese)
10 | PettingZoo | dizoo link, env guide (Chinese)
11 | overcooked | dizoo link, env tutorial
12 | procgen | dizoo link, env guide (Chinese)
13 | pybullet | dizoo link, env guide (Chinese)
14 | smac | dizoo link, env guide (Chinese)
15 | d4rl | dizoo link, env guide (Chinese)
16 | league_demo | dizoo link
17 | pomdp atari | dizoo link
18 | bsuite | dizoo link, env tutorial
19 | ImageNet | dizoo link, env guide (Chinese)
20 | slime_volleyball | dizoo link, env tutorial, env guide (Chinese)
21 | gym_hybrid | dizoo link, env guide (Chinese)
22 | GoBigger | opendilab link, env tutorial, env guide (Chinese)
23 | gym_soccer | dizoo link, env guide (Chinese)
24 | multiagent_mujoco | dizoo link, env guide (Chinese)
25 | bitflip | dizoo link, env guide (Chinese)
26 | sokoban | dizoo link, env guide (Chinese)
27 | gym_anytrading | dizoo link, env guide (Chinese)
28 | mario | dizoo link, env guide (Chinese)
Label key (each environment carries one or more of the following labels):
- discrete: discrete action space
- continuous: continuous action space
- hybrid: hybrid (discrete + continuous) action space
- MARL: multi-agent RL environment
- exp: environment related to exploration and sparse reward
- offline: offline RL environment
- IL: imitation learning or supervised learning dataset
- selfplay: environment that allows agent vs. agent battle
P.S.: Some environments in Atari, such as MontezumaRevenge, are also of the sparse-reward type.
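All of these environments are accessed through DI-engine's unified env interface. A rough sketch of that interface, assuming gym is installed and using `DingEnvWrapper`, DI-engine's adapter for standard Gym environments:

```python
# Sketch: wrap a standard Gym environment into DI-engine's BaseEnv interface.
# Assumes gym and its classic-control envs are installed.
import gym
from ding.envs import DingEnvWrapper

env = DingEnvWrapper(gym.make("CartPole-v0"))
obs = env.reset()
# step() returns a BaseEnvTimestep namedtuple of (obs, reward, done, info)
timestep = env.step(env.random_action())
print(timestep)
env.close()
```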
Feedback and Contribution
- File an issue on GitHub
- Open or participate in our forum
- Discuss in DI-engine's Slack channel
- Discuss in DI-engine's QQ group (700157520) or add us on WeChat
- Contact us by email (opendilab.contact@gmail.com)
- Contribute to our future plans in the Roadmap
We appreciate all feedback and contributions that improve DI-engine, in both algorithms and system design. CONTRIBUTING.md offers the necessary information.
Supporters
↳ Stargazers
↳ Forkers
Citation
@misc{ding,
    title = {{DI-engine: OpenDILab} Decision Intelligence Engine},
    author = {DI-engine Contributors},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/opendilab/DI-engine}},
    year = {2021},
}
License
DI-engine is released under the Apache 2.0 license.
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution: DI-engine-0.4.2.tar.gz (1.1 MB)
Built Distribution: DI_engine-0.4.2-py3-none-any.whl (1.8 MB)
File details
Details for the file DI-engine-0.4.2.tar.gz.
File metadata
- Download URL: DI-engine-0.4.2.tar.gz
- Upload date:
- Size: 1.1 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.9.13
File hashes
Algorithm | Hash digest
---|---
SHA256 | 3f66ca50fbe38e3993904850d3709fdd75e413ededef403c02b898aaaf8261b8
MD5 | a1b23997e8d15732e35745d01fd3c8e5
BLAKE2b-256 | c9c6bd40f204b56c456531e12e615ed2d1049bccfb1db6b39a129677bbb64550
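To verify a downloaded file against the digests above, you can recompute the hash locally (a small standard-library sketch; it assumes the archive sits in the current directory):

```python
# Sketch: recompute and compare the SHA256 digest of the source distribution.
import hashlib

EXPECTED_SHA256 = "3f66ca50fbe38e3993904850d3709fdd75e413ededef403c02b898aaaf8261b8"

with open("DI-engine-0.4.2.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED_SHA256, "SHA256 mismatch: the download may be corrupted"
print("SHA256 OK")
```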
File details
Details for the file DI_engine-0.4.2-py3-none-any.whl.
File metadata
- Download URL: DI_engine-0.4.2-py3-none-any.whl
- Upload date:
- Size: 1.8 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.9.13
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7d18603a474939c34987558dc3601fa8852b155a81a6dd0c2adc7d4a62088f01
MD5 | 7cd86c60ddfa315e318e1af391be747f
BLAKE2b-256 | fc40b8cb0b2444547353cedd1f1e44fcbd5623792238ac38a53998904a578e74