---
license: other
license_name: odc-by-v1.0
license_link: https://opendatacommons.org/licenses/by/1-0/
task_categories:
- text-to-3d
- image-to-3d
datasets:
- allenai/objaverse
language:
- en
tags:
- 3d
- 3d modeling
- quality
- art
- objaverse
- ai3d
pretty_name: ObjaversePlusPlus
size_categories:
- 100K<n<1M
---

# Objaverse++: Curated 3D Object Dataset with Quality Annotations

<a href="https://arxiv.org/abs/2504.07334" class="btn btn-light" role="button" aria-pressed="true">
    <svg class="btn-content" style="height: 1.5rem" aria-hidden="true" focusable="false" data-prefix="fas" data-icon="file-pdf" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 384 512" data-fa-i2svg=""><path fill="currentColor" d="M181.9 256.1c-5-16-4.9-46.9-2-46.9 8.4 0 7.6 36.9 2 46.9zm-1.7 47.2c-7.7 20.2-17.3 43.3-28.4 62.7 18.3-7 39-17.2 62.9-21.9-12.7-9.6-24.9-23.4-34.5-40.8zM86.1 428.1c0 .8 13.2-5.4 34.9-40.2-6.7 6.3-29.1 24.5-34.9 40.2zM248 160h136v328c0 13.3-10.7 24-24 24H24c-13.3 0-24-10.7-24-24V24C0 10.7 10.7 0 24 0h200v136c0 13.2 10.8 24 24 24zm-8 171.8c-20-12.2-33.3-29-42.7-53.8 4.5-18.5 11.6-46.6 6.2-64.2-4.7-29.4-42.4-26.5-47.8-6.8-5 18.3-.4 44.1 8.1 77-11.6 27.6-28.7 64.6-40.8 85.8-.1 0-.1.1-.2.1-27.1 13.9-73.6 44.5-54.5 68 5.6 6.9 16 10 21.5 10 17.9 0 35.7-18 61.1-61.8 25.8-8.5 54.1-19.1 79-23.2 21.7 11.8 47.1 19.5 64 19.5 29.2 0 31.2-32 19.7-43.4-13.9-13.6-54.3-9.7-73.6-7.2zM377 105L279 7c-4.5-4.5-10.6-7-17-7h-6v128h128v-6.1c0-6.3-2.5-12.4-7-16.9zm-74.1 255.3c4.1-2.7-2.5-11.9-42.8-9 37.1 15.8 42.8 9 42.8 9z"></path></svg>
    <span class="btn-content">Paper</span>
</a>
<a href="https://github.com/TCXX/ObjaversePlusPlus/" class="btn btn-light" role="button" aria-pressed="true">
    <svg class="btn-content" style="height: 1.5rem" aria-hidden="true" focusable="false" data-prefix="fab" data-icon="github" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 496 512" data-fa-i2svg=""><path fill="currentColor" d="M165.9 397.4c0 2-2.3 3.6-5.2 3.6-3.3.3-5.6-1.3-5.6-3.6 0-2 2.3-3.6 5.2-3.6 3-.3 5.6 1.3 5.6 3.6zm-31.1-4.5c-.7 2 1.3 4.3 4.3 4.9 2.6 1 5.6 0 6.2-2s-1.3-4.3-4.3-5.2c-2.6-.7-5.5.3-6.2 2.3zm44.2-1.7c-2.9.7-4.9 2.6-4.6 4.9.3 2 2.9 3.3 5.9 2.6 2.9-.7 4.9-2.6 4.6-4.6-.3-1.9-3-3.2-5.9-2.9zM244.8 8C106.1 8 0 113.3 0 252c0 110.9 69.8 205.8 169.5 239.2 12.8 2.3 17.3-5.6 17.3-12.1 0-6.2-.3-40.4-.3-61.4 0 0-70 15-84.7-29.8 0 0-11.4-29.1-27.8-36.6 0 0-22.9-15.7 1.6-15.4 0 0 24.9 2 38.6 25.8 21.9 38.6 58.6 27.5 72.9 20.9 2.3-16 8.8-27.1 16-33.7-55.9-6.2-112.3-14.3-112.3-110.5 0-27.5 7.6-41.3 23.6-58.9-2.6-6.5-11.1-33.3 2.6-67.9 20.9-6.5 69 27 69 27 20-5.6 41.5-8.5 62.8-8.5s42.8 2.9 62.8 8.5c0 0 48.1-33.6 69-27 13.7 34.7 5.2 61.4 2.6 67.9 16 17.7 25.8 31.5 25.8 58.9 0 96.5-58.9 104.2-114.8 110.5 9.2 7.9 17 22.9 17 46.4 0 33.7-.3 75.4-.3 83.6 0 6.5 4.6 14.4 17.3 12.1C428.2 457.8 496 362.9 496 252 496 113.3 383.5 8 244.8 8zM97.2 352.9c-1.3 1-1 3.3.7 5.2 1.6 1.6 3.9 2.3 5.2 1 1.3-1 1-3.3-.7-5.2-1.6-1.6-3.9-2.3-5.2-1zm-10.8-8.1c-.7 1.3.3 2.9 2.3 3.9 1.6 1 3.6.7 4.3-.7.7-1.3-.3-2.9-2.3-3.9-2-.6-3.6-.3-4.3.7zm32.4 35.6c-1.6 1.3-1 4.3 1.3 6.2 2.3 2.3 5.2 2.6 6.5 1 1.3-1.3.7-4.3-1.3-6.2-2.2-2.3-5.2-2.6-6.5-1zm-11.4-14.7c-1.6 1-1.6 3.6 0 5.9 1.6 2.3 4.3 3.3 5.6 2.3 1.6-1.3 1.6-3.9 0-6.2-1.4-2.3-4-3.3-5.6-2z"></path></svg>
    <span class="btn-content">Code</span>
</a>


[Chendi Lin](https://chendilin.com/), 
[Heshan Liu](https://www.linkedin.com/in/heshan-liu/), 
[Qunshu Lin](https://www.linkedin.com/in/jack-d-lin/), 
Zachary Bright,
[Shitao Tang](https://scholar.google.com/citations?user=JKVeJSwAAAAJ&hl=en),
[Yihui He](https://scholar.google.com/citations?user=2yAMJ1YAAAAJ&hl=en),
Minghao Liu,
Ling Zhu,
[Cindy Le](https://beacons.ai/tcxx)

Objaverse++ is a dataset that labels 3D objects with quality scores and other traits important to machine learning researchers. We meticulously curated a collection of Objaverse objects and developed an effective classifier capable of scoring the entire [Objaverse](https://huggingface.co/datasets/allenai/objaverse). Our annotation system covers geometric structure and texture information, enabling researchers to filter training data according to their specific requirements.
<p align="center">
<img src="https://github.com/user-attachments/assets/cc886ae2-1a06-42d2-8db7-93d6353d2ff0" width="700">
</p>


Less is more. We show that training on only the high-quality objects in a 3D dataset yields better and faster results on generative AI tasks such as text-to-3D and image-to-3D.



## Overview

To address the prevalence of low-quality models in Objaverse, we:
1. Manually annotated 10,000 3D objects with quality and characteristic attributes;
2. Trained a neural network capable of annotating tags for the rest of the Objaverse dataset;
3. Created a curated subset of approximately 500,000 high-quality 3D models.

Our experiments show that:
- Models trained on our quality-focused subset achieve better performance than those trained on the larger Objaverse dataset in image-to-3D generation tasks;
- Higher data quality leads to faster training loss convergence;
- Careful curation and rich annotation can compensate for raw dataset size.

## Quality and Attribute Annotations

### Quality Score
We define quality score as a metric to assess how useful a 3D object is for machine learning training:

- **Low Quality**: No semantic meaning. Objects that annotators cannot identify or are corrupted.
- **Medium Quality**: Identifiable objects missing basic material texture and color information.
- **High Quality**: Acceptable quality with clear object identity, properly textured with material and color details.
- **Superior Quality**: Excellent quality with high semantic clarity and professional texturing with strong aesthetic harmony.

### Binary Traits

- **Transparency**: Identifies models with see-through parts.
- **Scene**: Identifies whether the model represents a scenario/environment rather than a standalone object.
- **Single Color**: Tags models that are unintentionally monochromatic.
- **Not a Single Object**: Marks models consisting of multiple separate components.
- **Figure**: Indicates if the model represents a character, person, or figure.
  
<p align="center">
<img src="https://github.com/user-attachments/assets/fb802ae1-23d2-4040-be50-67bf2cdfb9d1" width="700">
</p>
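The quality tiers and binary traits above can be combined into a simple training-set filter. The sketch below is illustrative only: the field names (`uid`, `quality`, `is_scene`) and the string encoding of quality levels are assumptions for the example, not the dataset's actual schema, so check `annotated_500k.json` for the real keys.

```python
# Hypothetical annotation records -- the keys and values here are
# illustrative assumptions, not the dataset's guaranteed schema.
SAMPLE_ANNOTATIONS = [
    {"uid": "a1", "quality": "superior", "is_scene": False},
    {"uid": "b2", "quality": "low", "is_scene": False},
    {"uid": "c3", "quality": "high", "is_scene": True},
]

# Order the four quality tiers so they can be compared numerically.
QUALITY_RANK = {"low": 0, "medium": 1, "high": 2, "superior": 3}

def filter_annotations(records, min_quality="high", exclude_scenes=True):
    """Keep object uids at or above a quality tier, optionally dropping scenes."""
    threshold = QUALITY_RANK[min_quality]
    kept = []
    for rec in records:
        if QUALITY_RANK[rec["quality"]] < threshold:
            continue
        if exclude_scenes and rec["is_scene"]:
            continue
        kept.append(rec["uid"])
    return kept

print(filter_annotations(SAMPLE_ANNOTATIONS))  # only "a1" survives
```

The same pattern extends to the other binary traits (e.g. dropping `single_color` or multi-component objects) by adding further conditions in the loop.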

### Art style (Experimental)
- **Scanned**: Reconstructed from real-world camera captures, usually very high-poly.
- **Arcade**: Distinct patterns reminiscent of old video games, sometimes with repeating textures.
- **Sci-Fi**: Dark colors with emissive lighting, usually metallic.
- **Cartoon**: Colorful Western cartoon style.
- **Anime**: Colorful Japanese anime style.
- **Realistic**: Non-repeating textural details, but still handcrafted by artists.
- **Other**: Cannot be categorized, or of very poor quality.

<p align="center">
<img src="https://github.com/user-attachments/assets/e25f2761-538d-4351-a3d7-6abf31b92455" width="700">
</p>

### Density (Experimental)
Different polygon counts suit different use cases in the 3D modeling and gaming industries.

## File structure

After downloading and unzipping, two files are included:

```
annotated_500k.json
annotated_network.pth
```

`annotated_500k.json` contains the annotation results for 500k objects from Objaverse; coverage will be expanded to the whole Objaverse in the near future.

`annotated_network.pth` contains the pre-trained annotation network weights, which can be used to annotate a custom 3D object dataset. Detailed instructions are included [here](https://github.com/TCXX/ObjaversePlusPlus/blob/main/annotation_model/Readme.md).
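Loading the annotation file could look like the following sketch. To keep it self-contained it writes a tiny stand-in file first; in practice you would point the path at the downloaded `annotated_500k.json`. The record keys shown are assumptions for illustration, not the dataset's guaranteed schema.

```python
import json
import tempfile
from pathlib import Path

# A tiny stand-in for annotated_500k.json so this sketch runs on its own;
# the keys below are illustrative assumptions, not the dataset's schema.
sample = [
    {"uid": "000a", "quality": "high"},
    {"uid": "000b", "quality": "low"},
]

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "annotated_500k.json"
    path.write_text(json.dumps(sample))

    # Load and index annotations by object uid, as one might for training.
    records = json.loads(path.read_text())
    by_uid = {rec["uid"]: rec for rec in records}
    high_quality_uids = [u for u, r in by_uid.items() if r["quality"] == "high"]

print(high_quality_uids)
```

The resulting uid list can then be used to select the corresponding Objaverse assets for a quality-filtered training run.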

## Dataset Evaluation

We evaluated our dataset on an image-to-3D generation task using OpenLRM, comparing:

- A randomly sampled subset of 100,000 objects from Objaverse (Training Set A)
- A quality-filtered dataset of ~50,000 high-quality objects (Training Set B)

Our key findings:

- Better Generation Quality: A user study shows a significant preference for models trained on our curated dataset.
- Faster Convergence: Training converges faster on the carefully curated dataset.

For more details, please read our [paper](https://arxiv.org/abs/2504.07334), which was peer-reviewed at a CVPR 2025 workshop.

Note: Art style and density data are experimental and not included in the paper.

## Citation

If you find this work useful for your research, please cite our paper:

```
@misc{lin2025objaversecurated3dobject,
      title={Objaverse++: Curated 3D Object Dataset with Quality Annotations}, 
      author={Chendi Lin and Heshan Liu and Qunshu Lin and Zachary Bright and Shitao Tang and Yihui He and Minghao Liu and Ling Zhu and Cindy Le},
      year={2025},
      eprint={2504.07334},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.07334}, 
}
```

## Acknowledgments

We gratefully acknowledge [Exascale Labs](https://www.exascalelabs.ai/) and [Zillion Network](https://zillion.network/) for providing the computational resources and supporting our training infrastructure that made this research possible. We thank [Abaka AI](https://www.abaka.ai/) for their valuable assistance with data labeling. Special thanks to Ang Cao and Liam Fang for their technical and artistic insights that significantly enhanced our understanding of 3D model quality assessment.