This model was developed specifically for a submission to the [BBS-S2TC (Bilingual Basque Spanish Speech to Text Challenge)](https://www.isca-archive.org/iberspeech_2024/herranz24_iberspeech.html), part of the IBERSPEECH 2024 Albayzin evaluation challenges. Training was tuned for good performance on the challenge's evaluation splits, so performance on other splits is worse.
This model transcribes speech into lowercase Spanish-alphabet text, including spaces. It was trained on a composite dataset comprising 1462 hours of Spanish and Basque speech, and was fine-tuned from the pre-trained Basque [stt_eu_conformer_transducer_large](https://huggingface.co/HiTZ/stt_eu_conformer_transducer_large) model using the [NVIDIA NeMo](https://github.com/NVIDIA/NeMo) toolkit. It is an autoregressive "large" variant of Conformer, with around 119 million parameters.
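As a minimal sketch, loading and running a NeMo Conformer-Transducer model might look like the following. This assumes NeMo is installed (`pip install nemo_toolkit[asr]`); the model id shown is the Basque base model linked above, and `audio.wav` is a placeholder for a 16 kHz mono WAV file:

```python
# Sketch: loading a Conformer-Transducer checkpoint with NVIDIA NeMo.
# EncDecRNNTBPEModel is NeMo's class for BPE-based transducer (RNN-T) models.
import nemo.collections.asr as nemo_asr

# from_pretrained() downloads and restores the checkpoint; the model id
# here is illustrative (the Basque base model this model was fine-tuned from).
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained(
    model_name="HiTZ/stt_eu_conformer_transducer_large"
)

# transcribe() takes a list of audio file paths and returns transcriptions.
transcriptions = asr_model.transcribe(["audio.wav"])
print(transcriptions[0])
```

Note this is a sketch, not the authors' published inference recipe; depending on the NeMo version, restoring from a locally downloaded `.nemo` file via `restore_from()` may be needed instead of `from_pretrained()`.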
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details.