---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- ca
tags:
- central
size_categories:
- 10K<n<100K
---

# Dataset Card for parlament_parla_v3

## Dataset Structure

### Data Instances

- Identifier

```python
>>data['clean_train_short'][0]['identifier']
2753976_90753a8d81888d998484_405.96_411.15999999999997
```

- Audio

```python
>>data['clean_train_short'][0]['audio']
{'path': '/Users/sarahsolito/.cache/huggingface/datasets/downloads/extracted/9f760c175adf0af8127242f9468e48120f7682b20cf5c5813bfe481a108524bf/parlament_parla_v3/corpus/speech/2753976/2753976_90753a8d81888d998484_405.96_411.15999999999997.wav', 'array': array([-1.07421875e-02, -1.33972168e-02, -1.62353516e-02, ...,  1.64794922e-03,  3.05175781e-05, -4.02832031e-03]), 'sampling_rate': 16000}
```

- Relative Path

```python
>>data['clean_train_short'][0]['segment_path']
data/parlament_parla_v3/output_segment/2753976/2753976_90753a8d81888d998484_405.96_411.15999999999997.wav
```

- Transcription

```python
>>data['clean_train_short'][0]['text']
idò jo em tragaré el salmó oh uh no hi pensava
```

### Data Fields

- "identifier" (string) → the unique audio identifier
- "segment_path" (string) → the path to the audio file
- "audio" (datasets.Audio, sampling_rate=16000) → the decoded audio array and its sampling rate
- "text" (string) → the clean version of the transcription

### Data Splits

The dataset consists of train, dev and test splits. The statistics are as follows:

| Subcorpus           | Duration (hh:mm:ss) |
|---------------------|---------------------|
| other_test_short    | 13:42:44            |
| other_dev_short     | 13:13:45            |
| other_train_short   | 507:27:34           |
| *other total_short* | 534:24:03           |
| clean_test_short    | 10:44:19            |
| clean_dev_short     | 10:23:30            |
| clean_train_short   | 390:19:12           |
| *clean total_short* | 411:27:03           |
| *Total*             | 945:51:06           |

| Subcorpus           | Duration (hh:mm:ss) |
|---------------------|---------------------|
| other_test_long     | 01:41:29            |
| other_dev_long      | 01:51:30            |
| other_train_long    | 72:35:10            |
| *other total_long*  | 76:08:10            |
| clean_test_long     | 00:50:15            |
| clean_dev_long      | 00:46:44            |
| clean_train_long    | 36:11:46            |
| *clean total_long*  | 37:48:47            |
| *Total*             | 113:56:58           |

### Example Usage

To load a specific split, for example the clean short training split, do:

```python
from datasets import load_dataset
data = load_dataset("projecte-aina/parlament_parla_v3", split="clean_train_short")
```

## Dataset Creation

### Curation Rationale

The directory called "speech" contains all the speech files of the corpus. The files in the speech directory are divided into the "clean" and the "other" directories.

### Source Data

The content belongs to the Catalan Parliament, and the data is released in accordance with its [terms of use](https://www.parlament.cat/pcat/serveis-parlament/avis-legal/).

### Data Collection and Processing

The dataset's transcriptions are released in a clean version. The clean version has been orthographically normalized to lower case: punctuation marks and characters that are not present in the Catalan alphabet were removed, and numbers were expanded into words. In order to obtain a corpus of the highest possible quality, automatic language detection was also applied to each segment to prevent code-switching, and the quality of the transcriptions was evaluated to eliminate both low-quality segments and those that are not in Catalan. A minimal illustrative sketch of this normalization is shown below, after the Annotations subsection.

### Who are the source data producers?

The content belongs to the Catalan Parliament, and the data is released in accordance with its [terms of use](https://www.parlament.cat/pcat/serveis-parlament/avis-legal/).

### Annotations

The dataset doesn't contain any additional annotations.
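The orthographic normalization described under Data Collection and Processing can be approximated in a few lines of Python. The snippet below is a minimal, hypothetical sketch under stated assumptions (the kept character inventory and the whitespace handling are choices made for illustration), not the pipeline actually used to build the corpus; number expansion is omitted, as it would require a separate Catalan number-to-words step.

```python
import re
import unicodedata

# Characters kept by this illustrative normalization: the Catalan alphabet
# (including accented vowels and ç), the apostrophe, the hyphen, the interpunct
# used in "l·l", and the space. This set is an assumption made for the sketch,
# not the exact inventory used to produce the corpus.
CATALAN_CHARS = set("abcdefghijklmnopqrstuvwxyzàèéíïòóúüç·'- ")

def normalize_transcription(text: str) -> str:
    """Lower-case a transcription and drop characters outside the Catalan alphabet."""
    text = unicodedata.normalize("NFC", text).lower()
    # Replace anything outside the kept character set with a space.
    text = "".join(ch if ch in CATALAN_CHARS else " " for ch in text)
    # Collapse the whitespace left behind by removed punctuation.
    return re.sub(r"\s+", " ", text).strip()

print(normalize_transcription("Idò, jo em tragaré el salmó!"))
# idò jo em tragaré el salmó
```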
### Personal and Sensitive Information

The dataset consists of Catalan parliamentary speeches and their transcriptions. The dataset contains no personal information except for speech, which is considered personal data. Consequently, the speakers' voices in this corpus have been subjected to anonymization treatment in compliance with applicable regulations, such as the General Data Protection Regulation (GDPR) in the European Union. You agree not to attempt to determine the identity of speakers in this dataset.

### Citation

```
@misc{bscib32024,
      title={ParlamentParla v3 - Speech Corpus of Catalan Parliamentary Sessions},
      author={Baybars, Kulebi},
      publisher={Barcelona Supercomputing Center},
      year={2024},
      url={},
}
```

## Considerations for Using the Data

### Social Impact of Dataset

ParlamentParla_v3 is a source of speech data that will be valuable for the development of speech technologies for the Catalan language and its varieties.

### Discussion of Biases

No specific bias mitigation strategies were applied to this dataset. Inherent biases may exist within the data.

### Other Known Limitations

Speakers are not identified, nor are their gender or age, and more than one speaker may be speaking in the same recording. For these reasons, the total number of speakers in the corpus and their gender/age distribution are unknown.