---
# Example metadata to be added to a dataset card.
# Full dataset card template at https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md
language:
- en
license: mit # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
tags:
- robotics
- manipulation
- rearrangement
- computer-vision
- reinforcement-learning
- imitation-learning
- rgbd
- rgb
- depth
- low-level-control
- whole-body-control
- home-assistant
- simulation
- maniskill
annotations_creators:
- machine-generated # Generated from RL policies with filtering
language_creators:
- machine-generated
language_details: en-US
pretty_name: ManiSkill-HAB SetTable Dataset
size_categories:
- 1M<n<10M
---

Whole-body, low-level control/manipulation demonstration dataset for ManiSkill-HAB SetTable.

## Dataset Details

### Dataset Description

Demonstration dataset for ManiSkill-HAB SetTable. Each subtask/object combination (e.g. pick 013_apple) has 1000 successful episodes (200 samples/demonstration) gathered using [RL policies](https://huggingface.co/arth-shukla/mshab_checkpoints) filtered for safe robot behavior with a rule-based event labeling system.

SetTable contains the Pick, Place, Open, and Close subtasks. Relative to the other MS-HAB long-horizon tasks (TidyHouse, PrepareGroceries), SetTable Pick, Place, Open, and Close are easy difficulty (on a scale of easy-medium-hard). The difficulty of SetTable primarily comes from skill chaining rather than from the individual subtasks.

### Related Datasets

Full information about the MS-HAB datasets (size, difficulty, links, etc.), including the other long-horizon tasks, is available [on the ManiSkill-HAB website](https://arth-shukla.github.io/mshab/#dataset-section).

- [ManiSkill-HAB TidyHouse Dataset](https://huggingface.co/datasets/arth-shukla/MS-HAB-TidyHouse)
- [ManiSkill-HAB PrepareGroceries Dataset](https://huggingface.co/datasets/arth-shukla/MS-HAB-PrepareGroceries)

## Uses

### Direct Use

This dataset can be used to train vision-based learning-from-demonstrations and imitation learning methods, which can be evaluated with the [MS-HAB environments](https://github.com/arth-shukla/mshab). This dataset may also be useful as synthetic data for computer vision tasks.

### Out-of-Scope Use

While blind, state-based policies can be trained on this dataset, it is recommended to train vision-based policies to handle collisions and obstructions.

## Dataset Structure

Each subtask/object combination has files `[SUBTASK]/[OBJECT].json` and `[SUBTASK]/[OBJECT].h5`. The JSON file contains episode metadata, event labels, etc., while the HDF5 file contains the demonstration data.

## Dataset Creation

The data is gathered using [RL policies](https://huggingface.co/arth-shukla/mshab_checkpoints) filtered for safe robot behavior with a rule-based event labeling system.

## Bias, Risks, and Limitations

The dataset is purely synthetic.

While MS-HAB supports high-quality ray-traced rendering, this dataset uses ManiSkill's default rendering for data generation due to efficiency. However, users can generate their own data with the [data generation code](https://github.com/arth-shukla/mshab/blob/main/mshab/utils/gen/gen_data.py).
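## Example: Loading a Demonstration File

The sketch below shows one way to read a single subtask/object pair using `h5py` and `json`. It is a minimal sketch, not an official loader: the `pick`/`013_apple` pair is just an illustrative combination, and the per-trajectory group names (`traj_0`, `traj_1`, ...) are an assumption based on the standard ManiSkill trajectory format — inspect the files to confirm the exact keys.

```python
import json

import h5py

# Hypothetical subtask/object combination, following the
# [SUBTASK]/[OBJECT].{json,h5} layout described above.
subtask, obj = "pick", "013_apple"

# Episode metadata and event labels live in the JSON file.
with open(f"{subtask}/{obj}.json") as f:
    meta = json.load(f)
print(type(meta), list(meta)[:5])  # inspect top-level structure

# Demonstration data lives in the HDF5 file. Group names such as
# "traj_0" are assumed from ManiSkill's trajectory format.
with h5py.File(f"{subtask}/{obj}.h5", "r") as data:
    for traj_name in list(data.keys())[:3]:
        traj = data[traj_name]
        print(traj_name, list(traj.keys()))
```

Reading the JSON first is the cheaper way to filter episodes (e.g. by event labels) before touching the larger HDF5 file.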
## Citation

```
@inproceedings{shukla2025maniskillhab,
  author    = {Arth Shukla and Stone Tao and Hao Su},
  title     = {ManiSkill-HAB: {A} Benchmark for Low-Level Manipulation in Home Rearrangement Tasks},
  booktitle = {The Thirteenth International Conference on Learning Representations, {ICLR} 2025, Singapore, April 24-28, 2025},
  publisher = {OpenReview.net},
  year      = {2025},
  url       = {https://openreview.net/forum?id=6bKEWevgSd},
  timestamp = {Thu, 15 May 2025 17:19:05 +0200},
  biburl    = {https://dblp.org/rec/conf/iclr/ShuklaTS25.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```