Samoed committed · verified · commit 24df91a · 1 parent: f2899a0

Upload README.md with huggingface_hub

Files changed (1): README.md (+43 −1)
README.md CHANGED
@@ -34,7 +34,7 @@ citation_bibtex: "@inproceedings{upadhyay2023xnli,\n title={XNLI 2.0:
  \ IEEE 8th International Conference for Convergence in Technology (I2CT)},\n \
  \ pages={1--6},\n year={2023},\n organization={IEEE}\n\
  \ }\n "
- dataset_summary: 'This is subset of ''XNLI 2.0: Improving XNLI dataset and performance
+ dataset_description: 'This is subset of ''XNLI 2.0: Improving XNLI dataset and performance
  on Cross Lingual Understanding'' with languages that were not part of the original
  XNLI plus three (verified) languages that are not strongly covered in MTEB'
  dataset_reference: https://arxiv.org/pdf/2301.06527
@@ -184,16 +184,38 @@ descritptive_stats: "{\n \"test\": {\n \"num_samples\": 17745,\n
  \ \"0\": {\n \"count\": 683\n \
  \ },\n \"1\": {\n \"count\": 682\n\
  \ }\n }\n }\n }\n }\n}"
+ dataset_task_name: XNLIV2
+ category: t2t
  ---
  <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
+ # XNLIV2

  This is subset of 'XNLI 2.0: Improving XNLI dataset and performance on Cross Lingual Understanding' with languages that were not part of the original XNLI plus three (verified) languages that are not strongly covered in MTEB

+ > This dataset is included as a task in [`mteb`](https://github.com/embeddings-benchmark/mteb).
+
+ - Task category: t2t
+ - Domains: ['Non-fiction', 'Fiction', 'Government', 'Written']
+
+ ## How to evaluate on this task
+
+ ```python
+ import mteb
+
+ task = mteb.get_tasks(["XNLIV2"])
+ evaluator = mteb.MTEB(task)
+
+ model = mteb.get_model(YOUR_MODEL)
+ evaluator.run(model)
+ ```
+
  <!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
  Reference: https://arxiv.org/pdf/2301.06527

  ## Citation

+ If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
+
  ```bibtex
  @inproceedings{upadhyay2023xnli,
  title={XNLI 2.0: Improving XNLI dataset and performance on Cross Lingual Understanding (XLU)},
@@ -204,6 +226,26 @@ Reference: https://arxiv.org/pdf/2301.06527
  organization={IEEE}
  }

+
+ @article{enevoldsen2025mmtebmassivemultilingualtext,
+ title={MMTEB: Massive Multilingual Text Embedding Benchmark},
+ author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
+ publisher = {arXiv},
+ journal={arXiv preprint arXiv:2502.13595},
+ year={2025},
+ url={https://arxiv.org/abs/2502.13595},
+ doi = {10.48550/arXiv.2502.13595},
+ }
+
+ @article{muennighoff2022mteb,
+ author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
+ title = {MTEB: Massive Text Embedding Benchmark},
+ publisher = {arXiv},
+ journal={arXiv preprint arXiv:2210.07316},
+ year = {2022}
+ url = {https://arxiv.org/abs/2210.07316},
+ doi = {10.48550/ARXIV.2210.07316},
+ }
  ```

  # Dataset Statistics
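
The "How to evaluate" snippet added in this commit leaves the model as a `YOUR_MODEL` placeholder. A minimal end-to-end sketch of running it is shown below, assuming a recent `mteb` release; `sentence-transformers/all-MiniLM-L6-v2` and the `results` output folder are purely illustrative choices, not part of the committed card.

```python
import mteb

# Resolve the task registered by this dataset card (keyword form of the
# same get_tasks call used in the committed snippet).
tasks = mteb.get_tasks(tasks=["XNLIV2"])
evaluator = mteb.MTEB(tasks=tasks)

# Illustrative model id, not part of the commit; any model supported by
# mteb.get_model should work here.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

# Run the evaluation; per-task result files are written to output_folder.
results = evaluator.run(model, output_folder="results")
```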