---
configs:
  - config_name: default
    data_files:
      - split: all
        path: data.csv
language:
  - en
size_categories:
  - n<1K
license: cc
---

This dataset is a curated subset of 631 speeches selected from the [ibm-research/debate_speeches](https://huggingface.co/datasets/ibm-research/debate_speeches) corpus. In our work, *[Debatable Intelligence: Benchmarking LLM Judges via Debate Speech Evaluation](https://arxiv.org/pdf/2506.05062)*, we use this subset to benchmark LLM judges on the task of **debate speech evaluation**.

**Data fields**

* `id`: The unique identifier for the speech.
* `topic_id`: The unique identifier for the topic.
* `topic`: The topic of the debate speech (e.g., "Community service should be mandatory").
* `source`: The source of the speech (e.g., "Human-expert" for human-authored speeches).
* `text`: The text of the speech.
* `goodopeningspeech`: A list of human ratings for the speech (a score between 1 and 5 from each annotator).
* `#labelers`: The number of human annotators who rated the speech.
* `labeler_ids`: A list of unique identifiers for the human annotators who rated the speech.

**Bibtex**

```bibtex
@misc{sternlicht2025debatableintelligencebenchmarkingllm,
      title={Debatable Intelligence: Benchmarking LLM Judges via Debate Speech Evaluation},
      author={Noy Sternlicht and Ariel Gera and Roy Bar-Haim and Tom Hope and Noam Slonim},
      year={2025},
      eprint={2506.05062},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.05062},
}
```

**Quick links**

- 🌐 [Project](https://noy-sternlicht.github.io/Debatable-Intelligence-Web)
- 📃 [Paper](https://arxiv.org/pdf/2506.05062)
- 🛠️ [Code](https://github.com/noy-sternlicht/Debatable-Intelligence)
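**Loading the dataset**

A minimal loading sketch using the 🤗 `datasets` library; it reads the `data.csv` file declared in the config above. Note two assumptions: when loading a raw CSV, `datasets` names the resulting split `train` by default, and list-valued columns such as `goodopeningspeech` may be serialized as strings:

```python
import ast

from datasets import load_dataset

# Load the CSV shipped with this dataset card.
# Replace "data.csv" with a local path if the file lives elsewhere.
ds = load_dataset("csv", data_files="data.csv", split="train")

example = ds[0]
print(example["topic"], "|", example["source"])

# CSV serialization may store list fields as strings; parse them if needed.
ratings = example["goodopeningspeech"]
if isinstance(ratings, str):
    ratings = ast.literal_eval(ratings)
print(sum(ratings) / len(ratings))  # mean human rating (1-5 scale)
```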