cindyxl and mixerr committed
Commit da1a1f9 · verified · 1 Parent(s): 96d0bdc

Update README.md (#4)


- Update README.md (39be6f71608e87d1902a461a36656fca6f1f0082)


Co-authored-by: Chendi Lin <mixerr@users.noreply.huggingface.co>

Files changed (1):
  1. README.md +97 -1
README.md CHANGED
@@ -14,4 +14,100 @@ tags:
pretty_name: ObjaversePlusPlus
size_categories:
- 100K<n<1M
---

# Objaverse++: Curated 3D Object Dataset with Quality Annotations

<a href="https://arxiv.org/abs/2504.07334" class="btn btn-light" role="button" aria-pressed="true">
<svg class="btn-content" style="height: 1.5rem" aria-hidden="true" focusable="false" data-prefix="fas" data-icon="file-pdf" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 384 512" data-fa-i2svg=""><path fill="currentColor" d="M181.9 256.1c-5-16-4.9-46.9-2-46.9 8.4 0 7.6 36.9 2 46.9zm-1.7 47.2c-7.7 20.2-17.3 43.3-28.4 62.7 18.3-7 39-17.2 62.9-21.9-12.7-9.6-24.9-23.4-34.5-40.8zM86.1 428.1c0 .8 13.2-5.4 34.9-40.2-6.7 6.3-29.1 24.5-34.9 40.2zM248 160h136v328c0 13.3-10.7 24-24 24H24c-13.3 0-24-10.7-24-24V24C0 10.7 10.7 0 24 0h200v136c0 13.2 10.8 24 24 24zm-8 171.8c-20-12.2-33.3-29-42.7-53.8 4.5-18.5 11.6-46.6 6.2-64.2-4.7-29.4-42.4-26.5-47.8-6.8-5 18.3-.4 44.1 8.1 77-11.6 27.6-28.7 64.6-40.8 85.8-.1 0-.1.1-.2.1-27.1 13.9-73.6 44.5-54.5 68 5.6 6.9 16 10 21.5 10 17.9 0 35.7-18 61.1-61.8 25.8-8.5 54.1-19.1 79-23.2 21.7 11.8 47.1 19.5 64 19.5 29.2 0 31.2-32 19.7-43.4-13.9-13.6-54.3-9.7-73.6-7.2zM377 105L279 7c-4.5-4.5-10.6-7-17-7h-6v128h128v-6.1c0-6.3-2.5-12.4-7-16.9zm-74.1 255.3c4.1-2.7-2.5-11.9-42.8-9 37.1 15.8 42.8 9 42.8 9z"></path></svg>
<span class="btn-content">Paper</span>
</a>
<a href="https://github.com/TCXX/ObjaversePlusPlus/" class="btn btn-light" role="button" aria-pressed="true">
<svg class="btn-content" style="height: 1.5rem" aria-hidden="true" focusable="false" data-prefix="fab" data-icon="github" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 496 512" data-fa-i2svg=""><path fill="currentColor" d="M165.9 397.4c0 2-2.3 3.6-5.2 3.6-3.3.3-5.6-1.3-5.6-3.6 0-2 2.3-3.6 5.2-3.6 3-.3 5.6 1.3 5.6 3.6zm-31.1-4.5c-.7 2 1.3 4.3 4.3 4.9 2.6 1 5.6 0 6.2-2s-1.3-4.3-4.3-5.2c-2.6-.7-5.5.3-6.2 2.3zm44.2-1.7c-2.9.7-4.9 2.6-4.6 4.9.3 2 2.9 3.3 5.9 2.6 2.9-.7 4.9-2.6 4.6-4.6-.3-1.9-3-3.2-5.9-2.9zM244.8 8C106.1 8 0 113.3 0 252c0 110.9 69.8 205.8 169.5 239.2 12.8 2.3 17.3-5.6 17.3-12.1 0-6.2-.3-40.4-.3-61.4 0 0-70 15-84.7-29.8 0 0-11.4-29.1-27.8-36.6 0 0-22.9-15.7 1.6-15.4 0 0 24.9 2 38.6 25.8 21.9 38.6 58.6 27.5 72.9 20.9 2.3-16 8.8-27.1 16-33.7-55.9-6.2-112.3-14.3-112.3-110.5 0-27.5 7.6-41.3 23.6-58.9-2.6-6.5-11.1-33.3 2.6-67.9 20.9-6.5 69 27 69 27 20-5.6 41.5-8.5 62.8-8.5s42.8 2.9 62.8 8.5c0 0 48.1-33.6 69-27 13.7 34.7 5.2 61.4 2.6 67.9 16 17.7 25.8 31.5 25.8 58.9 0 96.5-58.9 104.2-114.8 110.5 9.2 7.9 17 22.9 17 46.4 0 33.7-.3 75.4-.3 83.6 0 6.5 4.6 14.4 17.3 12.1C428.2 457.8 496 362.9 496 252 496 113.3 383.5 8 244.8 8zM97.2 352.9c-1.3 1-1 3.3.7 5.2 1.6 1.6 3.9 2.3 5.2 1 1.3-1 1-3.3-.7-5.2-1.6-1.6-3.9-2.3-5.2-1zm-10.8-8.1c-.7 1.3.3 2.9 2.3 3.9 1.6 1 3.6.7 4.3-.7.7-1.3-.3-2.9-2.3-3.9-2-.6-3.6-.3-4.3.7zm32.4 35.6c-1.6 1.3-1 4.3 1.3 6.2 2.3 2.3 5.2 2.6 6.5 1 1.3-1.3.7-4.3-1.3-6.2-2.2-2.3-5.2-2.6-6.5-1zm-11.4-14.7c-1.6 1-1.6 3.6 0 5.9 1.6 2.3 4.3 3.3 5.6 2.3 1.6-1.3 1.6-3.9 0-6.2-1.4-2.3-4-3.3-5.6-2z"></path></svg>
<span class="btn-content">Code</span>
</a>

[Chendi Lin](https://chendilin.com/),
[Heshan Liu](),
[Qunshu Lin](),
[Zachary Bright](),
[Shitao Tang](),
[Yihui He](),
[Minghao Liu](),
[Ling Zhu](),
[Cindy Le]()

Objaverse++ is a dataset that labels 3D objects with quality scores and other traits that matter to machine learning researchers. We meticulously curated a collection of Objaverse objects and developed an effective classifier capable of scoring the entire Objaverse. Our annotation scheme considers both geometric structure and texture information, enabling researchers to filter training data according to their specific requirements.
<p align="center">
<img src="https://github.com/user-attachments/assets/cc886ae2-1a06-42d2-8db7-93d6353d2ff0" width="700">
</p>

Less is more: we show that training on only the high-quality objects in a 3D dataset yields better and faster results on generative AI tasks such as text-to-3D and image-to-3D.

## Overview

To address the prevalence of low-quality models in Objaverse, we:
1. Manually annotated 10,000 3D objects with quality and characteristic attributes;
2. Trained a neural network capable of predicting these tags for the rest of the Objaverse dataset;
3. Created a curated subset of approximately 500,000 high-quality 3D models.

Our experiments show that:
- Models trained on our quality-focused subset achieve better performance than those trained on the larger Objaverse dataset in image-to-3D generation tasks;
- Higher data quality leads to faster training loss convergence;
- Careful curation and rich annotation can compensate for raw dataset size.

## Quality and Attribute Annotations

### Quality Score
We define the quality score as a measure of how useful a 3D object is for machine learning training:

- **Low Quality**: No semantic meaning; objects that annotators cannot identify, or corrupted files.
- **Medium Quality**: Identifiable objects that lack basic material, texture, and color information.
- **High Quality**: Acceptable quality with a clear object identity, properly textured with material and color details.
- **Superior Quality**: Excellent quality with high semantic clarity, professional texturing, and strong aesthetic harmony.

### Binary Traits
In addition to the quality score, we annotate several binary tags (a sketch of how both kinds of labels can be used to filter the dataset follows this list):

- **Transparency**: Identifies models with see-through parts.
- **Scene**: Identifies whether the model represents a scene/environment rather than a standalone object.
- **Single Color**: Tags models that are unintentionally monochromatic.
- **Not a Single Object**: Marks models consisting of multiple separate components.
- **Figure**: Indicates whether the model represents a character, person, or figure.

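Below is a minimal Python sketch of that filtering step. The field names used here (`uid`, `quality`, `is_scene`, `is_single_color`) are hypothetical placeholders; check the actual schema of `annotated_500k.json` (or the GitHub instructions) before relying on them.

```python
import json

# Minimal filtering sketch. The key names ("uid", "quality", "is_scene",
# "is_single_color") are assumed placeholders; verify them against the
# actual schema of annotated_500k.json.
with open("annotated_500k.json") as f:
    records = json.load(f)

def keep(rec):
    """Keep standalone, textured objects rated High or Superior quality."""
    return (
        rec.get("quality") in {"high", "superior"}
        and not rec.get("is_scene", False)
        and not rec.get("is_single_color", False)
    )

curated = [rec["uid"] for rec in records if keep(rec)]
print(f"Kept {len(curated)} of {len(records)} objects")
```
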
## File Structure

After downloading and unzipping the dataset, you will find two files:

```
annotated_500k.json
annotated_network.pth
```

`annotated_500k.json` contains the annotation results for 500k objects from Objaverse; this will be expanded to cover the whole of Objaverse in the near future.
`annotated_network.pth` holds the pre-trained weights of the annotation network, which can be used to annotate your own 3D object datasets. Detailed instructions are provided [here](https://github.com/TCXX/ObjaversePlusPlus/blob/main/annotation_model/Readme.md).

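As a quick sanity check after unzipping, both files load with standard tooling. The sketch below assumes the checkpoint is a plain state dict; the matching network class and full usage are described in the instructions linked above.

```python
import json
import torch

# Load the 500k annotation records shipped in the JSON file.
with open("annotated_500k.json") as f:
    annotations = json.load(f)
print(f"Loaded annotations for {len(annotations)} objects")

# Load the pre-trained annotation network weights on CPU. This assumes the
# checkpoint is a plain state dict; instantiate the matching model class from
# the GitHub repo and call model.load_state_dict(state_dict) to use it.
state_dict = torch.load("annotated_network.pth", map_location="cpu")
print(f"Checkpoint contains {len(state_dict)} entries")
```
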
## Citation

If you find this work useful for your research, please cite our paper:

```bibtex
@misc{lin2025objaversecurated3dobject,
      title={Objaverse++: Curated 3D Object Dataset with Quality Annotations},
      author={Chendi Lin and Heshan Liu and Qunshu Lin and Zachary Bright and Shitao Tang and Yihui He and Minghao Liu and Ling Zhu and Cindy Le},
      year={2025},
      eprint={2504.07334},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.07334},
}
```

## Acknowledgments

We gratefully acknowledge Exascale Labs and Zillion Network for providing the computational resources and training infrastructure that made this research possible. We thank Abaka AI for their valuable assistance with data labeling. Special thanks to Ang Cao and Liam Fang for their technical and artistic insights, which significantly enhanced our understanding of 3D model quality assessment.