
Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation


Repository address: https://github.com/Skylark0924/Rofunc
Documentation: https://rofunc.readthedocs.io/

The Rofunc package focuses on Imitation Learning (IL), Reinforcement Learning (RL), and Learning from Demonstration (LfD) for (humanoid) robot manipulation. It provides convenient Python functions covering demonstration collection, data pre-processing, LfD algorithms, planning, and control methods. We also provide IsaacGym- and OmniIsaacGym-based robot simulators for evaluation. This package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes demonstration data collection, processing, learning, and deployment on robots.

Update News 🎉🎉🎉

Installation

Please refer to the installation guide.

Documentation

See the documentation and the example gallery.

To give you a quick overview of the Rofunc pipeline, we provide an example of learning to play Taichi from human demonstration. You can find it in the Quick start section of the documentation.
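To convey the flavor of an LfD method such as GMR (Gaussian Mixture Regression, listed in the table below), here is a minimal, self-contained sketch of GMR with one input and one output dimension. This is an illustrative toy under simplified assumptions, not Rofunc's implementation or API:

```python
import math

def gmr_1d(x, components):
    """Gaussian Mixture Regression: predict E[y | x] for a mixture of
    2-D Gaussians over (x, y). Each component is given as a tuple
    (weight, mean_x, mean_y, var_x, cov_xy)."""
    # Responsibility of each component for the query point x
    resp = [w * math.exp(-0.5 * (x - mx) ** 2 / vx) / math.sqrt(2 * math.pi * vx)
            for (w, mx, my, vx, cxy) in components]
    total = sum(resp)
    # Each component contributes its conditional mean E[y | x]
    return sum(r / total * (my + cxy / vx * (x - mx))
               for r, (w, mx, my, vx, cxy) in zip(resp, components))

# Two symmetric components along the line y ≈ 0.8 x
comps = [(0.5, -1.0, -1.0, 1.0, 0.8),
         (0.5,  1.0,  1.0, 1.0, 0.8)]
print(gmr_1d(0.0, comps))  # ≈ 0 by symmetry
```

In a real LfD pipeline the mixture parameters are first fitted to recorded demonstrations (e.g. by EM), and the regression then reproduces a smooth trajectory conditioned on time or task variables.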

The available and planned functions are listed below.

Note: ✅ Achieved 🔃 Reformatting ⛔ TODO

| Data | Learning | P&C | Tools | Simulator |
|---|---|---|---|---|
| xsens.record ✅ | DMP ⛔ | LQT ✅ | config ✅ | Franka ✅ |
| xsens.export ✅ | GMR ✅ | LQTBi ✅ | logger ✅ | CURI ✅ |
| xsens.visual ✅ | TPGMM ✅ | LQTFb ✅ | datalab ✅ | CURIMini 🔃 |
| opti.record ✅ | TPGMMBi ✅ | LQTCP ✅ | robolab.coord ✅ | CURISoftHand ✅ |
| opti.export ✅ | TPGMM_RPCtl ✅ | LQTCPDMP ✅ | robolab.fk ✅ | Walker ✅ |
| opti.visual ✅ | TPGMM_RPRepr ✅ | LQR ✅ | robolab.ik ✅ | Gluon 🔃 |
| zed.record ✅ | TPGMR ✅ | PoGLQRBi ✅ | robolab.fd ⛔ | Baxter 🔃 |
| zed.export ✅ | TPGMRBi ✅ | iLQR 🔃 | robolab.id ⛔ | Sawyer 🔃 |
| zed.visual ✅ | TPHSMM ✅ | iLQRBi 🔃 | visualab.dist ✅ | Humanoid ✅ |
| emg.record ✅ | RLBaseLine(SKRL) ✅ | iLQRFb 🔃 | visualab.ellip ✅ | Multi-Robot ✅ |
| emg.export ✅ | RLBaseLine(RLlib) ✅ | iLQRCP 🔃 | visualab.traj ✅ | |
| mmodal.record ⛔ | RLBaseLine(ElegRL) ✅ | iLQRDyna 🔃 | oslab.dir_proc ✅ | |
| mmodal.sync ✅ | BCO(RofuncIL) 🔃 | iLQRObs 🔃 | oslab.file_proc ✅ | |
| | BC-Z(RofuncIL) ⛔ | MPC ⛔ | oslab.internet ✅ | |
| | STrans(RofuncIL) ⛔ | RMP ⛔ | oslab.path ✅ | |
| | RT-1(RofuncIL) ⛔ | | | |
| | A2C(RofuncRL) ✅ | | | |
| | PPO(RofuncRL) ✅ | | | |
| | SAC(RofuncRL) ✅ | | | |
| | TD3(RofuncRL) ✅ | | | |
| | CQL(RofuncRL) ⛔ | | | |
| | TD3BC(RofuncRL) ⛔ | | | |
| | DTrans(RofuncRL) ✅ | | | |
| | EDAC(RofuncRL) ⛔ | | | |
| | AMP(RofuncRL) ✅ | | | |
| | ASE(RofuncRL) ✅ | | | |
| | ODTrans(RofuncRL) ⛔ | | | |

RofuncRL

RofuncRL is one of the most important sub-packages of Rofunc. It is a modular, easy-to-use reinforcement learning sub-package designed for robot learning tasks. It has been tested with simulators such as OpenAI Gym, IsaacGym, and OmniIsaacGym (see the example gallery), as well as with differentiable simulators like PlasticineLab and DiffCloth. Here is a list of robot tasks trained by RofuncRL:

Note
You can customize your own project based on RofuncRL by following the RofuncRL customization tutorial.
We also provide a RofuncRL-based repository template that generates your own repository following the RofuncRL structure with one click.
For more details, please check the documentation for RofuncRL.

The list of all supported tasks (the Animation and Performance columns of the original table contained GIFs and training curves, omitted here):

| Tasks | ModelZoo |
|---|---|
| Ant | ✅ |
| Cartpole | |
| Franka Cabinet | ✅ |
| Franka CubeStack | |
| CURI Cabinet | ✅ |
| CURI CabinetImage | |
| CURI CabinetBimanual | |
| CURIQbSoftHand SynergyGrasp | ✅ |
| Humanoid | ✅ |
| HumanoidAMP Backflip | ✅ |
| HumanoidAMP Walk | ✅ |
| HumanoidAMP Run | ✅ |
| HumanoidAMP Dance | ✅ |
| HumanoidAMP Hop | ✅ |
| HumanoidASE GetupSwordShield | ✅ |
| HumanoidASE PerturbSwordShield | ✅ |
| HumanoidASE HeadingSwordShield | ✅ |
| HumanoidASE LocationSwordShield | ✅ |
| HumanoidASE ReachSwordShield | ✅ |
| HumanoidASE StrikeSwordShield | ✅ |
| BiShadowHand BlockStack | ✅ |
| BiShadowHand BottleCap | ✅ |
| BiShadowHand CatchAbreast | ✅ |
| BiShadowHand CatchOver2Underarm | ✅ |
| BiShadowHand CatchUnderarm | ✅ |
| BiShadowHand DoorOpenInward | ✅ |
| BiShadowHand DoorOpenOutward | ✅ |
| BiShadowHand DoorCloseInward | ✅ |
| BiShadowHand DoorCloseOutward | ✅ |
| BiShadowHand GraspAndPlace | ✅ |
| BiShadowHand LiftUnderarm | ✅ |
| BiShadowHand HandOver | ✅ |
| BiShadowHand Pen | ✅ |
| BiShadowHand PointCloud | |
| BiShadowHand PushBlock | ✅ |
| BiShadowHand ReOrientation | ✅ |
| BiShadowHand Scissors | ✅ |
| BiShadowHand SwingCup | ✅ |
| BiShadowHand Switch | ✅ |
| BiShadowHand TwoCatchUnderarm | ✅ |
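On-policy algorithms such as the PPO baseline listed above typically rely on Generalized Advantage Estimation (GAE) to trade off bias and variance in the policy-gradient signal. Here is a minimal sketch of the standard GAE recursion; it is illustrative only and does not reproduce Rofunc's actual code:

```python
def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation for one trajectory.
    rewards: r_0..r_{T-1}; values: V(s_0)..V(s_T) (one extra bootstrap value).
    Returns the advantage estimate A_t for each step."""
    T = len(rewards)
    adv = [0.0] * T
    running = 0.0
    for t in reversed(range(T)):  # backward accumulation of lambda-returns
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD error
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

# With gamma = lam = 1, GAE reduces to Monte-Carlo returns minus the value baseline
print(gae_advantages([1.0, 1.0], [0.5, 0.5, 0.0], gamma=1.0, lam=1.0))  # [1.5, 0.5]
```

The advantages are then normalized and plugged into the clipped PPO surrogate objective during each update epoch.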


Citation

If you use rofunc in a scientific publication, we would appreciate citations to the following paper:

@software{liu2023rofunc,
          title = {Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation},
          author = {Liu, Junjia and Dong, Zhipeng and Li, Chenzui and Li, Zhihao and Yu, Minghao and Delehelle, Donatien and Chen, Fei},
          year = {2023},
          publisher = {Zenodo},
          doi = {10.5281/zenodo.10016946},
          url = {https://doi.org/10.5281/zenodo.10016946},
}

Related Papers

  1. Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects (IEEE RA-L 2022 | Code)
@article{liu2022robot,
         title={Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects},
         author={Liu, Junjia and Chen, Yiting and Dong, Zhipeng and Wang, Shixiong and Calinon, Sylvain and Li, Miao and Chen, Fei},
         journal={IEEE Robotics and Automation Letters},
         volume={7},
         number={2},
         pages={5159--5166},
         year={2022},
         publisher={IEEE}
}
  2. SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer (IROS 2023 | Code coming soon)
@inproceedings{liu2023softgpt,
               title={SoftGPT: Learn goal-oriented soft object manipulation skills by generative pre-trained heterogeneous graph transformer},
               author={Liu, Junjia and Li, Zhihao and Lin, Wanyu and Calinon, Sylvain and Tan, Kay Chen and Chen, Fei},
               booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
               pages={4920--4925},
               year={2023},
               organization={IEEE}
}
  3. BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration (IEEE CDC 2023 | Code)
@inproceedings{liu2023birp,
               title={BiRP: Learning robot generalized bimanual coordination using relative parameterization method on human demonstration},
               author={Liu, Junjia and Sim, Hengyi and Li, Chenzui and Tan, Kay Chen and Chen, Fei},
               booktitle={2023 62nd IEEE Conference on Decision and Control (CDC)},
               pages={8300--8305},
               year={2023},
               organization={IEEE}
}

The Team

Rofunc is developed and maintained by the CLOVER Lab (Collaborative and Versatile Robots Laboratory), CUHK.

Acknowledgements

We would like to acknowledge the following projects:

Learning from Demonstration

  1. pbdlib
  2. Ray RLlib
  3. ElegantRL
  4. SKRL
  5. DexterousHands

Planning and Control

  1. Robotics codes from scratch (RCFS)

Download files


Source distribution: rofunc-0.0.2.6.tar.gz (201.5 MB)

Built distribution: rofunc-0.0.2.6-py3-none-any.whl (202.9 MB)
