---
configs:
- config_name: default
data_files:
- split: all
path: data.csv
language:
- en
size_categories:
- n<1K
license: cc
---
This dataset is a curated subset of 631 speeches selected from the [ibm-research/debate_speeches](https://huggingface.co/datasets/ibm-research/debate_speeches) corpus. In our work, *[Debatable Intelligence: Benchmarking LLM Judges via Debate Speech Evaluation](https://arxiv.org/pdf/2506.05062)*, we use this subset to benchmark LLM judges on the task of **debate speech evaluation**.

**Data fields**
* `id`: The unique identifier for the speech.
* `topic_id`: The unique identifier for the topic.
* `topic`: The topic of the debate speech (e.g., "Community service should be mandatory").
* `source`: The speech source (e.g., "Human-expert" for human-authored speeches).
* `text`: The text of the speech.
* `goodopeningspeech`: A list of human ratings for the speech (a score between 1 and 5 from each annotator).
* `#labelers`: The number of human annotators who rated the speech.
* `labeler_ids`: A list of the unique identifiers for the human annotators who rated the speech.
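
Below is a minimal sketch of loading the data and working with these fields via the 🤗 `datasets` library. The repo id is a placeholder (substitute this dataset's Hub id), and the `ast.literal_eval` step is a defensive assumption, since CSV-backed datasets may serialize list columns such as `goodopeningspeech` as strings:

```python
import ast

from datasets import load_dataset

# Load the "all" split defined in the YAML config above.
# NOTE: "<dataset-repo-id>" is a placeholder; substitute this dataset's Hub id.
ds = load_dataset("<dataset-repo-id>", split="all")

example = ds[0]

# `goodopeningspeech` holds per-annotator ratings (1-5); parse defensively in
# case the CSV stored the list as a string.
ratings = example["goodopeningspeech"]
if isinstance(ratings, str):
    ratings = ast.literal_eval(ratings)

mean_rating = sum(ratings) / len(ratings)
print(f"{example['topic']!r} ({example['source']}): "
      f"mean rating {mean_rating:.2f} from {example['#labelers']} annotators")
```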

**BibTeX**
```bibtex
@misc{sternlicht2025debatableintelligencebenchmarkingllm,
title={Debatable Intelligence: Benchmarking LLM Judges via Debate Speech Evaluation},
author={Noy Sternlicht and Ariel Gera and Roy Bar-Haim and Tom Hope and Noam Slonim},
year={2025},
eprint={2506.05062},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.05062},
}
```
**Quick links**
- 🌐 [Project](https://noy-sternlicht.github.io/Debatable-Intelligence-Web)
- 📃 [Paper](https://arxiv.org/pdf/2506.05062)
- 🛠️ [Code](https://github.com/noy-sternlicht/Debatable-Intelligence)