---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license: gpl-3.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
task_ids:
- multiple-choice-qa
- visual-question-answering
- multi-class-classification
tags:
- multi-modal-qa
- figure-qa
- vqa
- scientific-figure
- geometry-diagram
- chart
- chemistry
---
# VisOnlyQA

<p align="center">
🌐 <a href="https://visonlyqa.github.io/">Project Website</a> | 📄 <a href="https://arxiv.org/abs/2412.00947">Paper</a> | 🤗 <a href="https://huggingface.co/collections/ryokamoi/visonlyqa-674e86c7ec384b629bb97bc3">Dataset</a> | 🔥 <a href="https://github.com/open-compass/VLMEvalKit">VLMEvalKit</a>
</p>

This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information".

VisOnlyQA is designed to evaluate the visual perception capabilities of large vision language models (LVLMs) on geometric information in scientific figures. The evaluation set includes 1,200 multiple-choice questions spanning 12 visual perception tasks on 4 categories of scientific figures. We also provide a training dataset of 70k instances.

* Datasets:
  * Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real_v1.1](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real_v1.1)
  * Eval-Synthetic: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic)
  * Train: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train)
* Code: [https://github.com/psunlpgroup/VisOnlyQA](https://github.com/psunlpgroup/VisOnlyQA)

<p align="center">
<img src="readme_figures/accuracy_radar_chart.png" width="500">
</p>

```bibtex
@misc{kamoi2024visonlyqa,
  title={VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information},
  author={Ryo Kamoi and Yusen Zhang and Sarkar Snigdha Sarathi Das and Ranran Haoran Zhang and Rui Zhang},
  year={2024},
}
```

## Update

* v1.1
  * Increased the number of instances in the Real split.

## Dataset

The dataset is distributed via Hugging Face Datasets.

* Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real_v1.1](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real_v1.1)
  * 900 instances with questions on figures from existing datasets (e.g., MathVista, MMMU, and CharXiv)
* Eval-Synthetic: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic)
  * 700 instances with questions on synthetic figures
* Train: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train)
  * 70,000 instances for training (synthetic figures)

The [dataset](https://github.com/psunlpgroup/VisOnlyQA/tree/main/dataset) folder of the GitHub repository contains the same datasets, except for the training data.

### Examples

<p align="center">
<img src="readme_figures/examples.png" width="800">
</p>

### Usage

```python
from datasets import load_dataset

real_eval = load_dataset("ryokamoi/VisOnlyQA_Eval_Real_v1.1")
synthetic_eval = load_dataset("ryokamoi/VisOnlyQA_Eval_Synthetic")

# Splits
print(real_eval.keys())
# dict_keys(['geometry__triangle', 'geometry__quadrilateral', 'geometry__length', 'geometry__angle', 'geometry__area', 'geometry__diameter_radius', 'chemistry__shape_single', 'chemistry__shape_multi', 'charts__extraction', 'charts__intersection'])

print(synthetic_eval.keys())
# dict_keys(['syntheticgeometry__triangle', 'syntheticgeometry__quadrilateral', 'syntheticgeometry__length', 'syntheticgeometry__angle', 'syntheticgeometry__area', '3d__size', '3d__angle'])

# Prompt
print(real_eval['geometry__triangle'][0]['prompt_no_reasoning'])
# There is no triangle ADP in the figure. True or False?
#
# A triangle is a polygon with three edges and three vertices, which are explicitly connected in the figure.
#
# Your response should only include the final answer (True, False). Do not include any reasoning or explanation in your response.

# Image
print(real_eval['geometry__triangle'][0]['decoded_image'])
# <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=103x165 at 0x7FB4F83236A0>

# Answer
print(real_eval['geometry__triangle'][0]['answer'])
# False
```
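
Once a split is loaded, scoring reduces to comparing model outputs against the gold `answer` field. Below is a minimal accuracy helper as a sketch; the `predict` callable and the mock records are hypothetical stand-ins (they mimic the dataset schema but are not real VisOnlyQA rows), so the snippet runs without downloading the dataset.

```python
from typing import Callable, Iterable


def accuracy(predict: Callable[[dict], str], examples: Iterable[dict]) -> float:
    """Fraction of examples where predict(example) matches the gold `answer`."""
    examples = list(examples)
    correct = sum(predict(ex) == ex["answer"] for ex in examples)
    return correct / len(examples)


# Mock records mimicking the VisOnlyQA schema (hypothetical, for illustration only).
mock_split = [
    {"prompt_no_reasoning": "... True or False?", "answer": "False",
     "response_options": ["True", "False"]},
    {"prompt_no_reasoning": "... True or False?", "answer": "True",
     "response_options": ["True", "False"]},
]

# A trivial baseline that always answers "False"; half the mock answers match.
always_false = lambda ex: "False"
print(accuracy(always_false, mock_split))  # 0.5
```

To score a real split, replace `mock_split` with, e.g., `real_eval['geometry__triangle']` and `always_false` with your model's prediction function.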

### Data Format

Each instance of the VisOnlyQA dataset has the following attributes:

#### Features

* `decoded_image`: [PIL.Image] Input image
* `question`: [string] Question (without instruction)
* `prompt_reasoning`: [string] Prompt with an instruction to use chain-of-thought reasoning
* `prompt_no_reasoning`: [string] Prompt with an instruction **not** to use chain-of-thought reasoning
* `answer`: [string] Correct answer (e.g., `True`, `a`)

#### Metadata

* `image_path`: [string] Path to the image file
* `image_category`: [string] Category of the image (e.g., `geometry`, `chemistry`)
* `question_type`: [string] `single_answer` or `multiple answers`
* `task_category`: [string] Category of the task (e.g., `triangle`)
* `response_options`: [List[string]] Multiple-choice options (e.g., `['True', 'False']`, `['a', 'b', 'c', 'd', 'e']`)
* `source`: [string] Source dataset
* `id`: [string] Unique ID

### Statistics

<p align="center">
<img src="readme_figures/stats.png" width="800">
</p>

## License

Please refer to [LICENSE.md](./LICENSE.md).

## Contact

If you have any questions, feel free to open an issue or reach out directly to [Ryo Kamoi](https://ryokamoi.github.io/) (ryokamoi@psu.edu).