Manipulation as in Simulation: Enabling Accurate Geometry Perception in Robots

This repository contains the Camera Depth Models (CDMs) from the paper Manipulation as in Simulation: Enabling Accurate Geometry Perception in Robots.

CDMs are proposed as a simple plugin for daily-use depth cameras: they take an RGB image and the camera's raw depth signal as input and output denoised, accurate metric depth. This allows policies trained purely in simulation to transfer directly to real robots, effectively bridging the sim-to-real gap for manipulation tasks.
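
As a concrete illustration of the inputs described above, the sketch below captures one aligned RGB-D pair from a RealSense D435 with pyrealsense2 and saves it in a form that can be passed to the inference script. The resolution, file names, and use of OpenCV for saving are illustrative assumptions, not part of the repository.

```python
import numpy as np
import pyrealsense2 as rs
import cv2

# Stream aligned color + depth from a connected D435 (resolution/FPS are illustrative).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Align the depth frame to the color frame so both images share one pixel grid.
align = rs.align(rs.stream.color)

try:
    frames = align.process(pipeline.wait_for_frames())
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()

    # Raw depth is uint16 in device depth units; depth_scale converts units to meters.
    depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
    depth_raw = np.asanyarray(depth_frame.get_data())   # uint16, H x W
    color_bgr = np.asanyarray(color_frame.get_data())   # uint8, H x W x 3

    cv2.imwrite("rgb.jpg", color_bgr)
    cv2.imwrite("depth.png", depth_raw)  # 16-bit PNG preserves the raw depth units
    print(f"Saved rgb.jpg and depth.png (depth scale: {depth_scale} m/unit)")
finally:
    pipeline.stop()
```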

Usage

To run depth inference on RGB-D camera data, follow the example from the GitHub repository's CDM section:

cd cdm
python infer.py \
    --encoder vitl \
    --model-path /path/to/model.pth \
    --rgb-image /path/to/rgb.jpg \
    --depth-image /path/to/depth.png \
    --output result.png
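
When many frames need to be refined, the same command can be driven from a small batch script. The sketch below simply shells out to infer.py with the flags shown above; the directory layout, file-naming scheme, and model path are assumptions used for illustration.

```python
import subprocess
from pathlib import Path

MODEL_PATH = "/path/to/model.pth"   # assumed checkpoint location
DATA_DIR = Path("captures")         # assumed layout: rgb_XXXX.jpg / depth_XXXX.png pairs
OUT_DIR = Path("refined")
OUT_DIR.mkdir(exist_ok=True)

for rgb_path in sorted(DATA_DIR.glob("rgb_*.jpg")):
    # Match each RGB image to its raw depth image by shared frame index.
    depth_path = rgb_path.with_name(rgb_path.name.replace("rgb_", "depth_")).with_suffix(".png")
    out_path = OUT_DIR / f"refined_{rgb_path.stem.split('_', 1)[1]}.png"
    subprocess.run(
        [
            "python", "infer.py",
            "--encoder", "vitl",
            "--model-path", MODEL_PATH,
            "--rgb-image", str(rgb_path),
            "--depth-image", str(depth_path),
            "--output", str(out_path),
        ],
        check=True,
    )
```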