---
license: cc-by-nc-sa-4.0
dataset_info:
  features:
  - name: id
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  splits:
  - name: train
    num_bytes: 6731807325
    num_examples: 820
  download_size: 6611613572
  dataset_size: 6731807325
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- multilingual
- en
task_categories:
- audio-to-audio
---
The WikiTongues speech corpus is a collection of conversational audio across 700+ languages.
It can be used for spoken language modelling or speech representation learning.
This dataset includes the raw, unsegmented audio in a 16 kHz, single-channel format.
Each clip is typically 2-10 minutes long and contains one or more speakers conversing in their language(s).
A speaker may also switch languages within a single clip.
The total dataset size is around 70 hours of audio.
**The current version of the dataset does not include labels for the language(s) being spoken in each clip. This information will be included in an update in the near future.**
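
Because the corpus is stored as parquet with a 16 kHz `Audio` feature, it can be loaded directly with the `datasets` library. Below is a minimal sketch; the repository id `espnet/wikitongues` is an assumption (replace it with this dataset's actual Hub path), and audio decoding requires the audio extras (`pip install datasets[audio]`).

```python
from datasets import load_dataset

# Stream the corpus so the ~6.7 GB of audio is not downloaded up front.
# NOTE: the repository id below is an assumption; use the dataset's real Hub path.
ds = load_dataset("espnet/wikitongues", split="train", streaming=True)

for example in ds:
    audio = example["audio"]        # decoded by the Audio feature
    waveform = audio["array"]       # 1-D float array, mono
    sr = audio["sampling_rate"]     # 16000 Hz per the dataset card
    print(example["id"], f"{len(waveform) / sr:.1f} seconds")
    break                           # inspect just the first clip
```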
This dataset was crawled from the [WikiTongues project](https://wikitongues.org/), which collected the original recordings.
We use this corpus to train [XEUS](https://huggingface.co/espnet/xeus), a multilingual speech encoder for 4000+ languages. For more details about the dataset and its usage, please refer to our [paper](https://wanchichen.github.io/pdf/xeus.pdf) or [project page](https://www.wavlab.org/activities/2024/xeus/).
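
Since the audio is unsegmented and clips run several minutes, a common preprocessing step for representation learning is to split each waveform into fixed-length windows before feeding it to a speech encoder. The sketch below shows one way to do this; the 30-second window and hop are illustrative values, not something prescribed by the dataset.

```python
import numpy as np

def chunk_waveform(waveform: np.ndarray, sampling_rate: int = 16000,
                   window_s: float = 30.0, hop_s: float = 30.0) -> list[np.ndarray]:
    """Split a long mono waveform into fixed-length segments.

    window_s and hop_s are arbitrary example values; clips shorter than one
    window are returned as a single (short) segment.
    """
    win = int(window_s * sampling_rate)
    hop = int(hop_s * sampling_rate)
    return [waveform[start:start + win]
            for start in range(0, max(len(waveform) - win + 1, 1), hop)]
```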
## License and Acknowledgement
WikiTongues is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license.
If you use this dataset, we ask that you cite our paper:
```
@misc{chen2024robustspeechrepresentationlearning,
      title={Towards Robust Speech Representation Learning for Thousands of Languages},
      author={William Chen and Wangyou Zhang and Yifan Peng and Xinjian Li and Jinchuan Tian and Jiatong Shi and Xuankai Chang and Soumi Maiti and Karen Livescu and Shinji Watanabe},
      year={2024},
      eprint={2407.00837},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.00837},
}
```
Please also credit the original creators of the audio.