OmniRetarget: Interaction-Preserving Data Generation for Humanoid Whole-Body Loco-Manipulation and Scene Interaction
Abstract
OmniRetarget generates high-quality, interaction-preserving motion data for training RL policies, enabling complex skills like parkour and loco-manipulation on humanoid robots.
A dominant paradigm for teaching humanoid robots complex skills is to retarget human motions as kinematic references to train reinforcement learning (RL) policies. However, existing retargeting pipelines often struggle with the significant embodiment gap between humans and robots, producing physically implausible artifacts like foot-skating and penetration. More importantly, common retargeting methods neglect the rich human-object and human-environment interactions essential for expressive locomotion and loco-manipulation. To address this, we introduce OmniRetarget, an interaction-preserving data generation engine based on an interaction mesh that explicitly models and preserves the crucial spatial and contact relationships between an agent, the terrain, and manipulated objects. By minimizing the Laplacian deformation between the human and robot meshes while enforcing kinematic constraints, OmniRetarget generates kinematically feasible trajectories. Moreover, preserving task-relevant interactions enables efficient data augmentation from a single demonstration to different robot embodiments, terrains, and object configurations. We comprehensively evaluate OmniRetarget by retargeting motions from OMOMO, LAFAN1, and our in-house MoCap datasets, generating over 8 hours of trajectories that achieve better kinematic constraint satisfaction and contact preservation than widely used baselines. Such high-quality data enables proprioceptive RL policies to successfully execute long-horizon (up to 30 seconds) parkour and loco-manipulation skills on a Unitree G1 humanoid, trained with only 5 reward terms and simple domain randomization shared by all tasks, without any learning curriculum.
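The core objective is straightforward to prototype. Below is a minimal sketch (not the authors' released code) of the interaction-mesh idea the abstract describes: tetrahedralize the human keypoints together with object and terrain points, compute each vertex's Laplacian (local) coordinates, and penalize how much those coordinates change when the human points are replaced by the robot's. The uniform-weight Laplacian, the function names, and the use of SciPy's Delaunay tetrahedralization are assumptions; the paper additionally enforces kinematic constraints (e.g., joint limits, non-penetration) during the minimization, which this sketch omits.

```python
# Minimal sketch of an interaction-mesh Laplacian deformation energy.
# Assumed names and uniform Laplacian weights; not the OmniRetarget codebase.
import numpy as np
from scipy.spatial import Delaunay

def build_interaction_mesh(points):
    """Tetrahedralize agent, object, and terrain points; return each
    vertex's neighbor indices in the resulting volumetric mesh."""
    tets = Delaunay(points).simplices          # (num_tets, 4) in 3-D
    neighbors = [set() for _ in range(len(points))]
    for tet in tets:
        for i in tet:
            neighbors[i].update(j for j in tet if j != i)
    return [sorted(n) for n in neighbors]

def laplacian_coords(points, neighbors):
    """Local (Laplacian) coordinates: each vertex minus the mean of its
    mesh neighbors, using uniform weights (an assumption)."""
    return np.stack([points[i] - points[nbrs].mean(axis=0)
                     for i, nbrs in enumerate(neighbors)])

def deformation_energy(human_pts, robot_pts, neighbors):
    """Sum of squared differences between the source (human) and the
    retargeted (robot) Laplacian coordinates -- the quantity the abstract
    says is minimized, here only evaluated for one candidate frame."""
    return float(np.sum((laplacian_coords(robot_pts, neighbors)
                         - laplacian_coords(human_pts, neighbors)) ** 2))

# Toy usage: mesh built on one human frame, energy of a perturbed copy.
rng = np.random.default_rng(0)
human = rng.random((40, 3))                 # keypoints + object/terrain points
robot = human + 0.01 * rng.random((40, 3))  # candidate retargeted points
nbrs = build_interaction_mesh(human)
print(deformation_energy(human, robot, nbrs))
```

In the full pipeline this energy would serve as the objective of a constrained optimization over the robot's joint trajectory, with the kinematic feasibility conditions from the abstract imposed as hard constraints.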
Community
Dataset: https://huggingface.co/datasets/omniretarget/OmniRetarget_Dataset
Neat tweet about it: https://x.com/zhenkirito123/status/1973417906412331361
The following papers were recommended by the Semantic Scholar API:
- HDMI: Learning Interactive Humanoid Whole-Body Control from Human Videos (2025)
- ResMimic: From General Motion Tracking to Humanoid Whole-body Loco-Manipulation via Residual Learning (2025)
- Retargeting Matters: General Motion Retargeting for Humanoid Motion Tracking (2025)
- DreamControl: Human-Inspired Whole-Body Humanoid Control for Scene Interaction via Guided Diffusion (2025)
- DexMan: Learning Bimanual Dexterous Manipulation from Human and Generated Videos (2025)
- HERMES: Human-to-Robot Embodied Learning from Multi-Source Motion Data for Mobile Dexterous Manipulation (2025)
- VisualMimic: Visual Humanoid Loco-Manipulation via Motion Tracking and Generation (2025)