---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---

# VGGT: Visual Geometry Grounded Transformer

**[Meta AI Research](https://ai.facebook.com/research/)**; **[University of Oxford, VGG](https://www.robots.ox.ac.uk/~vgg/)**

[Jianyuan Wang](https://jytime.github.io/), [Minghao Chen](https://silent-chen.github.io/), [Nikita Karaev](https://nikitakaraevv.github.io/), [Andrea Vedaldi](https://www.robots.ox.ac.uk/~vedaldi/), [Christian Rupprecht](https://chrirupp.github.io/), [David Novotny](https://d-novotny.github.io/)
## Overview

Visual Geometry Grounded Transformer (VGGT, CVPR 2025) is a feed-forward neural network that directly infers all key 3D attributes of a scene, including extrinsic and intrinsic camera parameters, point maps, depth maps, and 3D point tracks, **from one, a few, or hundreds of its views, within seconds**.

## Quick Start

Please refer to our [Github Repo](https://github.com/facebookresearch/vggt). A minimal inference sketch is included below the citation.

## Citation

If you find our repository useful, please consider giving it a star ⭐ and citing our paper in your work:

```bibtex
@inproceedings{wang2025vggt,
  title={VGGT: Visual Geometry Grounded Transformer},
  author={Wang, Jianyuan and Chen, Minghao and Karaev, Nikita and Vedaldi, Andrea and Rupprecht, Christian and Novotny, David},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```
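
The snippet below is a minimal inference sketch of the quick-start flow, not a substitute for the GitHub repo's instructions: it assumes the `vggt` package from that repo is installed, that the checkpoint is hosted as `facebook/VGGT-1B`, and that a CUDA GPU is available. The module paths follow the repo's layout and may change.

```python
import torch
from vggt.models.vggt import VGGT
from vggt.utils.load_fn import load_and_preprocess_images

device = "cuda"  # this sketch assumes a CUDA GPU
# bfloat16 is supported on Ampere (compute capability 8.0) and newer GPUs
dtype = torch.bfloat16 if torch.cuda.get_device_capability()[0] >= 8 else torch.float16

# from_pretrained is provided by PyTorchModelHubMixin (see the tags above);
# "facebook/VGGT-1B" is the assumed Hub checkpoint name
model = VGGT.from_pretrained("facebook/VGGT-1B").to(device)

# One, a few, or hundreds of views of the same scene
image_names = ["path/to/view1.png", "path/to/view2.png"]  # replace with your images
images = load_and_preprocess_images(image_names).to(device)

with torch.no_grad():
    with torch.cuda.amp.autocast(dtype=dtype):
        # A single forward pass predicts camera parameters, depth maps,
        # point maps, and 3D point tracks for all input views
        predictions = model(images)
```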