LIBERO: Benchmark and environments for robot learning

Project description

LIBERO is designed for studying knowledge transfer in multitask and lifelong robot learning problems. Successfully resolving these problems requires both declarative knowledge about objects/spatial relationships and procedural knowledge about motion/behaviors. This repository started as a fork of the official LIBERO benchmark. It is now maintained by the Hugging Face team, with modifications for compatibility with LeRobot, simplified installation, and large-scale robotics experiments. LIBERO provides:

  • a procedural generation pipeline that could in principle generate an infinite number of manipulation tasks.
  • 130 tasks grouped into four task suites: LIBERO-Spatial, LIBERO-Object, LIBERO-Goal, and LIBERO-100. The first three task suites have controlled distribution shifts, meaning that each requires the transfer of a specific type of knowledge. In contrast, LIBERO-100 consists of 100 manipulation tasks that require the transfer of entangled knowledge. LIBERO-100 is further split into LIBERO-90 for pretraining a policy and LIBERO-10 for testing the agent's downstream lifelong learning performance.
  • five research topics.
  • three visuomotor policy network architectures.
  • three lifelong learning algorithms, along with sequential finetuning and multitask learning baselines.

Installation

Please run the following commands in the given order to install the dependencies for LIBERO.

conda create -n libero python=3.8.13
conda activate libero
git clone https://github.com/Lifelong-Robot-Learning/LIBERO.git
cd LIBERO
pip install -r requirements.txt
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113

Then install the libero package:

pip install -e .
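
To verify the installation, a minimal sketch is shown below; it only uses calls that also appear in the Getting Started example further down.

# Quick sanity check that the package is importable.
from libero.libero import benchmark, get_libero_path

print("available task suites:", sorted(benchmark.get_benchmark_dict().keys()))
print("bddl files located at:", get_libero_path("bddl_files"))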

Datasets

We provide high-quality human teleoperation demonstrations for the four task suites in LIBERO. To download the demonstration dataset, run:

python benchmark_scripts/download_libero_datasets.py

By default, the dataset will be stored under the LIBERO folder and all four datasets will be downloaded. To download a specific dataset, use

python benchmark_scripts/download_libero_datasets.py --datasets DATASET

where DATASET is chosen from [libero_spatial, libero_object, libero_100, libero_goal].

NEW!!!

Alternatively, you can download the dataset from HuggingFace by using:

python benchmark_scripts/download_libero_datasets.py --use-huggingface

This option can also be combined with the specific dataset selection:

python benchmark_scripts/download_libero_datasets.py --datasets DATASET --use-huggingface

The datasets hosted on HuggingFace are available here.
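
If you want to peek inside a downloaded demonstration file, the sketch below uses h5py. The file path is hypothetical, and the assumed layout (a top-level data group with one sub-group per demonstration) may differ from your download, so adjust both accordingly.

# Sketch: inspect a downloaded demonstration file with h5py.
# The path below is hypothetical, and the assumed HDF5 layout (a top-level
# "data" group holding one sub-group per demonstration) may differ from the
# files you downloaded; adjust both to match.
import h5py

demo_path = "libero_object/pick_up_the_alphabet_soup_demo.hdf5"  # hypothetical path
with h5py.File(demo_path, "r") as f:
    demos = sorted(f["data"].keys())
    print(f"{len(demos)} demonstrations, first: {demos[0]}")
    first = f["data"][demos[0]]
    for name, item in first.items():
        shape = getattr(item, "shape", "(group)")
        print(f"  {name}: {shape}")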

Assets

IMPORTANT: Asset Loading from HuggingFace Hub

The simulation assets (3D models, textures, scene files, etc.) are now automatically loaded from HuggingFace Hub instead of being bundled with the package. When you first run LIBERO, the assets will be automatically downloaded from the Hub repository yifengzhu-hf/LIBERO-assets and cached locally.

This change:

  • Reduces the size of the installed package
  • Ensures you always have the latest assets
  • Allows for easy asset versioning and updates

The assets will be cached at ~/.cache/libero/assets/ and will only be downloaded once. If you have local assets installed from a previous version, those will be used instead.
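
To check whether the assets are already cached (for example, before running on a machine without network access), here is a small sketch that relies only on the cache path quoted above.

# Check whether the simulation assets have already been cached locally.
# Relies only on the cache location quoted above; no LIBERO-specific API.
from pathlib import Path

asset_cache = Path.home() / ".cache" / "libero" / "assets"
if asset_cache.is_dir():
    n_files = sum(1 for p in asset_cache.rglob("*") if p.is_file())
    print(f"assets cached at {asset_cache} ({n_files} files)")
else:
    print("no cached assets yet; they will be downloaded on first use")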

Getting Started

For a detailed walk-through, please refer to the documentation or the notebook examples provided under the notebooks folder. Below, we provide example scripts for retrieving a task, training, and evaluation.

Task

The following is a minimal example of retrieving a specific task from a specific task suite.

import os

from libero.libero import benchmark, get_libero_path
from libero.libero.envs import OffScreenRenderEnv


benchmark_dict = benchmark.get_benchmark_dict()
task_suite_name = "libero_10" # can also choose libero_spatial, libero_object, etc.
task_suite = benchmark_dict[task_suite_name]()

# retrieve a specific task
task_id = 0
task = task_suite.get_task(task_id)
task_name = task.name
task_description = task.language
task_bddl_file = os.path.join(get_libero_path("bddl_files"), task.problem_folder, task.bddl_file)
print(f"[info] retrieving task {task_id} from suite {task_suite_name}, the " + \
      f"language instruction is {task_description}, and the bddl file is {task_bddl_file}")

# create the environment and fix its initial state
env_args = {
    "bddl_file_name": task_bddl_file,
    "camera_heights": 128,
    "camera_widths": 128
}
env = OffScreenRenderEnv(**env_args)
env.seed(0)
env.reset()
init_states = task_suite.get_task_init_states(task_id) # for benchmarking purposes, we fix a set of initial states
init_state_id = 0
env.set_init_state(init_states[init_state_id])

dummy_action = [0.] * 7 # a 7-dimensional no-op action
for step in range(10):
    obs, reward, done, info = env.step(dummy_action)
env.close()

Currently, we only support a sparse reward function (i.e., the agent receives +1 when the task is finished). As sparse-reward RL is extremely hard to learn, we currently focus mainly on lifelong imitation learning.
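
Task success can therefore be read directly from the reward or done flag returned by env.step(). Below is a minimal sketch that reuses env_args, init_states, and OffScreenRenderEnv from the example above; the rollout horizon, the stand-in action, and the camera key in the final comment are assumptions.

# Sketch: with a sparse reward, task success is signalled by the reward /
# done flag returned by env.step().  Reuses env_args and init_states from
# the example above; the action is a stand-in for a trained policy, and the
# commented-out camera key is an assumed observation name.
env = OffScreenRenderEnv(**env_args)
env.seed(0)
env.reset()
env.set_init_state(init_states[0])

success = False
for step in range(600):                      # illustrative horizon
    action = [0.] * 7                        # replace with a trained policy's action
    obs, reward, done, info = env.step(action)
    if done or reward > 0:                   # +1 only when the task is finished
        success = True
        break
print("success:", success)
# frame = obs["agentview_image"]             # assumed key name for the workspace camera
env.close()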

Training

To start a lifelong learning experiment, please choose:

  • BENCHMARK from [LIBERO_SPATIAL, LIBERO_OBJECT, LIBERO_GOAL, LIBERO_90, LIBERO_10]
  • POLICY from [bc_rnn_policy, bc_transformer_policy, bc_vilt_policy]
  • ALGO from [base, er, ewc, packnet, multitask]

then run the following:

export CUDA_VISIBLE_DEVICES=GPU_ID && \
export MUJOCO_EGL_DEVICE_ID=GPU_ID && \
python libero/lifelong/main.py seed=SEED \
                               benchmark_name=BENCHMARK \
                               policy=POLICY \
                               lifelong=ALGO
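
For example, the sketch below fills in the placeholders with concrete values and launches a run from Python; the GPU id, seed, and the LIBERO_OBJECT / bc_transformer_policy / er choices are illustrative picks from the lists above, not the official launcher.

# Sketch: launch a single lifelong-learning run from Python, filling in the
# placeholders above with concrete (illustrative) values.
import os
import subprocess

gpu_id = "0"
run_env = dict(os.environ, CUDA_VISIBLE_DEVICES=gpu_id, MUJOCO_EGL_DEVICE_ID=gpu_id)
subprocess.run(
    [
        "python", "libero/lifelong/main.py",
        "seed=1",
        "benchmark_name=LIBERO_OBJECT",
        "policy=bc_transformer_policy",
        "lifelong=er",
    ],
    env=run_env,
    check=True,
)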

Please see the documentation for the details of reproducing the study results.

Evaluation

By default, policies are evaluated on the fly during training. If you have limited GPU computing resources, we offer an evaluation script so you can evaluate models separately.

python libero/lifelong/evaluate.py --benchmark BENCHMARK_NAME \
                                   --task_id TASK_ID \
                                   --algo ALGO_NAME \
                                   --policy POLICY_NAME \
                                   --seed SEED \
                                   --ep EPOCH \
                                   --load_task LOAD_TASK \
                                   --device_id CUDA_ID
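
Since evaluate.py takes a single --task_id, one way to evaluate a policy over a whole suite is to loop over its tasks. The sketch below uses only the flags shown above; the benchmark, policy, algorithm, seed, epoch, and task count are illustrative values.

# Sketch: evaluate a trained policy on every task of a suite by looping over
# task ids.  Only the flags shown above are used; all concrete values here
# are illustrative.
import subprocess

benchmark_name = "libero_object"
num_tasks = 10                          # number of tasks in the chosen suite
for task_id in range(num_tasks):
    subprocess.run(
        [
            "python", "libero/lifelong/evaluate.py",
            "--benchmark", benchmark_name,
            "--task_id", str(task_id),
            "--algo", "er",
            "--policy", "bc_transformer_policy",
            "--seed", "1",
            "--ep", "50",
            "--load_task", str(task_id),
            "--device_id", "0",
        ],
        check=True,
    )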

Citation

If you find LIBERO to be useful in your own research, please consider citing our paper:

@article{liu2023libero,
  title={LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning},
  author={Liu, Bo and Zhu, Yifeng and Gao, Chongkai and Feng, Yihao and Liu, Qiang and Zhu, Yuke and Stone, Peter},
  journal={arXiv preprint arXiv:2306.03310},
  year={2023}
}

License

  • Codebase: MIT License
  • Datasets: Creative Commons Attribution 4.0 International (CC BY 4.0)
