""" | |
This file contains the text content for the leaderboard client. | |
""" | |
HEADER_MARKDOWN = """ | |
# EMMA JSALT25 Benchmark – Multi-Talker ASR Evaluation
Welcome to the official leaderboard for benchmarking **multi-talker ASR systems**, hosted by the **EMMA JSALT25 team**.
"""
LEADERBOARD_TAB_TITLE_MARKDOWN = """
## Leaderboard
Below you'll find the latest results submitted to the benchmark. Models are evaluated using **`meeteval`** with **TCP-WER [%] (collar=5s)**.
For AISHELL-4 and AliMeeting, transcripts are converted to Simplified Chinese and tcpCER [%] is used instead.
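The snippet below is a minimal sketch of how a score can be reproduced locally; it assumes `meeteval`'s Python interface (`meeteval.wer.tcpwer`) and uses placeholder file names, so check the MeetEval documentation for the exact signature and return value.
```python
import meeteval

# Placeholder SegLST reference/hypothesis paths (replace with your own files).
result = meeteval.wer.tcpwer(
    reference="ref_seglst.json",
    hypothesis="hyp_seglst.json",
    collar=5,  # seconds, matching the leaderboard setting
)
print(result)  # per-session error-rate statistics
```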
""" | |
SUBMISSION_TAB_TITLE_MARKDOWN = """ | |
## Submit Your Model | |
To submit your MT-ASR hypothesis to the benchmark, complete the form below: | |
- **Submitted by**: Your name or team identifier. | |
- **Model ID**: A unique identifier for your submission (used to track models on the leaderboard). | |
- **Hypothesis File**: Upload a **SegLST `.json` file** that includes **all segments across datasets** in a single list. | |
- **Task**: Choose the evaluation task (e.g., single-channel ground-truth diarization). | |
- **Datasets**: Select one or more datasets you wish to evaluate on. | |
To enable submission, please [email the EMMA team](mailto:ipoloka@fit.vut.cz) to receive a **submission token**.
After clicking **Submit**, your model will be evaluated and the results will be displayed on the leaderboard.
""" | |
ADDITIONAL_NOTES_MARKDOWN = """ | |
### Reference/Hypothesis File Format | |
Reference annotations were constructed via the `prepare_gt.sh` script. To add a new dataset, please create a pull request modifying `prepare_gt.sh`.
For details about the SegLST format, please see the [SegLST documentation in MeetEval](https://github.com/fgnt/meeteval?tab=readme-ov-file#segment-wise-long-form-speech-transcription-annotation-seglst); a minimal example is sketched at the end of this section.
By default, **CHiME-8 normalization** is applied during evaluation to both references and hypotheses.
You can disable it using the checkbox above.
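The sketch below shows one way to assemble a combined hypothesis file in Python; the field names follow the SegLST documentation linked above, while the session IDs, speaker labels, times, and words are made-up placeholders.
```python
import json

# Hypothetical segments; in practice include every segment from every dataset
# you selected, all in one flat list.
segments = [
    {
        "session_id": "session_0",   # recording / meeting identifier
        "speaker": "spk_0",          # hypothesised speaker label
        "start_time": 0.0,           # segment start in seconds
        "end_time": 4.2,             # segment end in seconds
        "words": "hello everyone",   # transcript of this segment
    },
    # ... one entry per hypothesised segment ...
]

with open("hypothesis_seglst.json", "w") as f:
    json.dump(segments, f, indent=2)
```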
""" | |
LEADERBOARD_CSS = """ | |
#leaderboard-table th .header-content { | |
white-space: nowrap; | |
} | |
""" |