---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: image
      dtype: image
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: capability
      sequence: string
    - name: imagesource
      dtype: string
  splits:
    - name: Assamese
      num_bytes: 70756682
      num_examples: 218
    - name: Bengali
      num_bytes: 70758785
      num_examples: 218
    - name: English
      num_bytes: 70625304
      num_examples: 218
    - name: Gujarati
      num_bytes: 70751428
      num_examples: 218
    - name: Hindi
      num_bytes: 70753807
      num_examples: 218
    - name: Kannada
      num_bytes: 70782185
      num_examples: 218
    - name: Malayalam
      num_bytes: 70798447
      num_examples: 218
    - name: Marathi
      num_bytes: 70761412
      num_examples: 218
    - name: Odia
      num_bytes: 70770809
      num_examples: 218
    - name: Sanskrit
      num_bytes: 70783403
      num_examples: 218
    - name: Tamil
      num_bytes: 70810556
      num_examples: 218
    - name: Telugu
      num_bytes: 70775865
      num_examples: 218
  download_size: 759913301
  dataset_size: 849128683
configs:
  - config_name: default
    data_files:
      - split: Assamese
        path: data/Assamese-*
      - split: Bengali
        path: data/Bengali-*
      - split: English
        path: data/English-*
      - split: Gujarati
        path: data/Gujarati-*
      - split: Hindi
        path: data/Hindi-*
      - split: Kannada
        path: data/Kannada-*
      - split: Malayalam
        path: data/Malayalam-*
      - split: Marathi
        path: data/Marathi-*
      - split: Odia
        path: data/Odia-*
      - split: Sanskrit
        path: data/Sanskrit-*
      - split: Tamil
        path: data/Tamil-*
      - split: Telugu
        path: data/Telugu-*
license: other
license_name: krutrim-community-license-agreement-version-1.0
license_link: LICENSE.md
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_button_content: Acknowledge license
language:
  - as
  - hi
  - gu
  - ml
  - te
  - ta
  - kn
  - or
  - bn
  - en
  - mr
  - sa
---

IndicMMVet: Indian Multilingual Translated Dataset For Evaluating Large Vision Language Models for Integrated Capabilities

  • You can find the performance of Chitrarth on IndicMMVet here: Paper | Github | HuggingFace
  • Evaluation scripts for BharatBench are available here: Github

1. Introduction

IndicMMVet is a dataset for evaluating Large Vision-Language Models (LVLMs) on Visual Question Answering (VQA) tasks, with a focus on the integration of multiple core vision-language capabilities: recognition, OCR, knowledge, language generation, spatial awareness, and math.

This dataset is built upon MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (GitHub).


2. Dataset Details

IndicMMVet consists of 218 samples per language, covering 11 Indic languages along with English (12 language splits in total). Each sample includes (see the sketch after this list):

  • Image: The input image.
  • Question: The question about the image.
  • Answer: The ground-truth answer.
  • Capability: The capability tags for the question (one or more of recognition, OCR, knowledge, language generation, spatial awareness, and math).

Supported Languages

  • Assamese
  • Bengali
  • English
  • Gujarati
  • Hindi
  • Kannada
  • Malayalam
  • Marathi
  • Odia
  • Sanskrit
  • Tamil
  • Telugu

3. How to Use and Run

You can load the dataset using the datasets library:

from datasets import load_dataset

# Downloads and loads all 12 language splits as a DatasetDict keyed by language name
dataset = load_dataset("krutrim-ai-labs/IndicMMVet")
print(dataset)
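Because access to the repository is gated behind the license acknowledgement, load_dataset may require you to be logged in to Hugging Face with an account that has accepted the license. To evaluate a model on every language, you can iterate over the splits; the sketch below is a minimal outline in which run_model is a hypothetical stand-in for your own LVLM inference call:

from datasets import load_dataset

def run_model(image, question):
    # Hypothetical placeholder: replace with your own LVLM inference call.
    return ""

# Load all language splits as a DatasetDict keyed by language name
dataset = load_dataset("krutrim-ai-labs/IndicMMVet")

for language, split in dataset.items():
    predictions = [run_model(sample["image"], sample["question"]) for sample in split]
    # Score `predictions` against the "answer" field, e.g. with the
    # BharatBench / MM-Vet evaluation scripts linked above.
    print(language, len(predictions))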

4. License

This repository and the IndicMMVet dataset are licensed under the Krutrim Community License.

5. Citation

@article{khan2025chitrarth,
  title={Chitrarth: Bridging Vision and Language for a Billion People},
  author={Khan, Shaharukh and Tarun, Ayush and Ravi, Abhinav and Faraz, Ali and Patidar, Akshat and Pokala, Praveen Kumar and Bhangare, Anagha and Kolla, Raja and Khatri, Chandra and Agarwal, Shubham},
  journal={arXiv preprint arXiv:2502.15392},
  year={2025}
}

@article{liu2023improvedllava,
  title={Improved Baselines with Visual Instruction Tuning},
  author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Lee, Yong Jae},
  journal={arXiv preprint arXiv:2310.03744},
  year={2023}
}

@article{yu2023mm,
  title={MM-Vet: Evaluating large multimodal models for integrated capabilities},
  author={Yu, Weihao and Yang, Zhengyuan and Li, Linjie and Wang, Jianfeng and Lin, Kevin and Liu, Zicheng and Wang, Xinchao and Wang, Lijuan},
  journal={arXiv preprint arXiv:2308.02490},
  year={2023}
}

@article{gala2023indictrans2,
  title={IndicTrans2: Towards high-quality and accessible machine translation models for all 22 scheduled Indian languages},
  author={Gala, Jay and Chitale, Pranjal A and AK, Raghavan and Gumma, Varun and Doddapaneni, Sumanth and Kumar, Aswanth and Nawale, Janki and Sujatha, Anupama and Puduppully, Ratish and Raghavan, Vivek and others},
  journal={arXiv preprint arXiv:2305.16307},
  year={2023}
}

6. Contact

Contributions are welcome! If you have any improvements or suggestions, feel free to submit a pull request on GitHub.

7. Acknowledgement

IndicMMVet is built with reference to the code of the following projects: MM-Vet and LLaVA-1.5. Thanks for their awesome work!