---
pipeline_tag: robotics
library_name: diffusers
license: mit
---

# Real-Time Iteration Scheme for Diffusion Policy (RTI-DP)

This repository contains the official model weights and code for the paper **"Real-Time Iteration Scheme for Diffusion Policy"**.

- 📚 [Paper](https://huggingface.co/papers/2508.05396)
- 🌐 [Project Page](https://rti-dp.github.io/)
- 💻 [Code](https://github.com/RTI-DP/rti-dp)

RTI-DP enables fast inference in diffusion-based robotic policies by initializing each denoising step with the previous prediction: no retraining, no distillation.
*Figure: RTI-DP teaser.*
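The sketch below illustrates one way to read that warm-start idea with a `diffusers` `DDPMScheduler`: instead of sampling a fresh Gaussian action trajectory at every control step, the previous plan is lightly re-noised and only the last few reverse-diffusion steps are run. This is a conceptual illustration only, not the official implementation; `dummy_policy_net`, the shapes, and the step counts are placeholders (see the repository for the exact scheme).

```python
# Conceptual sketch of warm-starting denoising from the previous prediction.
# NOT the official RTI-DP implementation; all names below are placeholders.
import torch
from diffusers import DDPMScheduler

horizon, action_dim = 16, 2                        # toy action-sequence shapes
scheduler = DDPMScheduler(num_train_timesteps=100)
scheduler.set_timesteps(100)                       # full DDPM schedule

def dummy_policy_net(x, t, obs_cond):
    # Stand-in for a trained noise-prediction network (e.g. a Diffusion Policy U-Net).
    return torch.zeros_like(x)

def rti_step(prev_actions, obs_cond, num_denoise_steps=2):
    """Warm-start denoising from the previous plan instead of pure noise."""
    # Re-noise the previous prediction up to the timestep we resume from ...
    t_start = scheduler.timesteps[-num_denoise_steps]
    x = scheduler.add_noise(prev_actions, torch.randn_like(prev_actions), t_start)
    # ... then run only the remaining few reverse-diffusion steps.
    for t in scheduler.timesteps[-num_denoise_steps:]:
        eps = dummy_policy_net(x, t, obs_cond)         # predicted noise
        x = scheduler.step(eps, t, x).prev_sample      # one reverse-diffusion step
    return x

prev_plan = torch.zeros(1, horizon, action_dim)    # e.g. last control step's output
obs = torch.zeros(1, 32)                           # placeholder observation features
new_plan = rti_step(prev_plan, obs, num_denoise_steps=2)
print(new_plan.shape)                              # torch.Size([1, 16, 2])
```

The intuition is that the previous plan is usually close to the next solution, so only a few denoising steps per control cycle are needed, which is what makes the scheme suitable for real-time control.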
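To pull the weights locally before running the evaluation script below, `huggingface_hub` can fetch the whole repository. A minimal sketch, assuming this card is hosted at `duandaxia/rti-dp` (an assumption; adjust the repo id if it differs):

```python
# Download all files of this model repository to a local cache directory.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="duandaxia/rti-dp")  # assumed repo id
print(f"Checkpoints downloaded to: {local_dir}")
```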
## Usage

This model is designed to be used with its official codebase. For detailed installation instructions, environment setup, and further information, please refer to the [official GitHub repository](https://github.com/RTI-DP/rti-dp), which is based on [Diffusion Policy](https://github.com/real-stanford/diffusion_policy).

### Evaluation

To evaluate RTI-DP policies with DDPM, you can use the provided script from the repository:

```shell
python ../eval_rti.py --config-name=eval_diffusion_rti_lowdim_workspace.yaml
```

For RTI-DP-scale checkpoints, refer to the [duandaxia/rti-dp-scale](https://huggingface.co/duandaxia/rti-dp-scale) repository on Hugging Face.

## Citation

If you find our work useful, please consider citing our paper:

```bibtex
@misc{duan2025realtimeiterationschemediffusion,
      title={Real-Time Iteration Scheme for Diffusion Policy},
      author={Yufei Duan and Hang Yin and Danica Kragic},
      year={2025},
      eprint={2508.05396},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2508.05396},
}
```

## Acknowledgements

We thank the authors of [Diffusion Policy](https://github.com/real-stanford/diffusion_policy), [Consistency Policy](https://github.com/Aaditya-Prasad/Consistency-Policy/), and [Streaming Diffusion Policy](https://github.com/Streaming-Diffusion-Policy/streaming_diffusion_policy/) for sharing their codebases.