πŸš€ Galaxea Open-World Dataset and G0 Dual-System VLA Model

Project Page Paper Videos Visualizer Modelscope Twitter Linkedin

G0-VLA architecture and training pipeline: Stage 1 pre-trains a vision-language model autoregressively on cross-embodiment data. Stage 2 and post-training share the same model structure and are trained on Galaxea open-world data with embodiment-specific views and both high-level and subtask instructions, supervising the Action Transformer's action reconstruction with a flow-matching loss.
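The flow-matching supervision described above can be sketched in a few lines. This is a minimal illustration under common flow-matching conventions, not the G0 implementation; `pred_velocity_fn` is a hypothetical stand-in for the Action Transformer's velocity head.

```python
import numpy as np

def flow_matching_loss(actions, pred_velocity_fn, rng):
    """Minimal flow-matching loss sketch (illustrative, not the G0 code).

    actions: (batch, horizon, dim) ground-truth action chunk.
    pred_velocity_fn: stand-in for the Action Transformer, mapping
    (noisy_actions, t) -> predicted velocity.
    """
    noise = rng.standard_normal(actions.shape)       # x_0 ~ N(0, I)
    t = rng.uniform(size=(actions.shape[0], 1, 1))   # per-sample flow time
    noisy = (1.0 - t) * noise + t * actions          # linear interpolant x_t
    target_velocity = actions - noise                # d x_t / d t for this path
    pred = pred_velocity_fn(noisy, t)
    return np.mean((pred - target_velocity) ** 2)    # regress predicted velocity

# Toy usage: a "model" that predicts zero velocity everywhere.
rng = np.random.default_rng(0)
actions = rng.standard_normal((4, 8, 7))             # 4 chunks, 8 steps, 7-DoF
loss = flow_matching_loss(actions, lambda x, t: np.zeros_like(x), rng)
```

At inference, a model trained this way generates an action chunk by integrating the predicted velocity field from noise to data, e.g. with a few Euler steps.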


In this repo, you can find:

  • G0_3B_base.pt: model weights after Stage 2 pre-training
  • G0_3B_base_dataset_statistics: statistics for the dataset used in pre-training
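Dataset statistics files like the one above are typically used to normalize actions before training and to de-normalize model outputs back into the robot's native units at inference. A minimal sketch, assuming the statistics hold per-dimension `mean` and `std` (the exact file format here is an assumption, not the documented schema):

```python
import json

# Hypothetical statistics, mimicking a per-dimension stats file.
stats_json = '{"action": {"mean": [0.1, -0.2, 0.0], "std": [0.5, 0.3, 1.0]}}'
stats = json.loads(stats_json)

def normalize(action, stats):
    """Standardize each action dimension to zero mean, unit variance."""
    mean, std = stats["action"]["mean"], stats["action"]["std"]
    return [(a - m) / s for a, m, s in zip(action, mean, std)]

def denormalize(action, stats):
    """Invert normalization to recover actions in native units."""
    mean, std = stats["action"]["mean"], stats["action"]["std"]
    return [a * s + m for a, m, s in zip(action, mean, std)]

raw = [0.6, 0.1, -1.0]
normed = normalize(raw, stats)
roundtrip = denormalize(normed, stats)
```

Using the same statistics at training and inference is what keeps the model's action space consistent across embodiments.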

πŸ“œ Citation

All data and code in this repo are released under CC BY-NC-SA 4.0. If you use our dataset or models, please cite:

@article{galaxea2025,
  title={Galaxea G0: Open-World Dataset and Dual-System VLA Model},
  author={Galaxea Team},
  journal={arXiv preprint arXiv:2509.00576},
  year={2025}
}