"""
This file contains the text content for the leaderboard client.
"""
HEADER_MARKDOWN = """
# EMMA JSALT25 Benchmark – Multi-Talker ASR Evaluation

Welcome to the official leaderboard for benchmarking **multi-talker ASR systems**, hosted by the **EMMA JSALT25 team**.
"""

LEADERBOARD_TAB_TITLE_MARKDOWN = """
## Leaderboard

Below you'll find the latest results submitted to the benchmark. Models are evaluated with **`meeteval`** using **tcpWER [%] (collar=5s)**.

For AISHELL-4 and AliMeeting, the text is first converted to Simplified Chinese and **tcpCER [%]** is used instead.
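
The headline numbers can be reproduced locally. Below is a minimal sketch, assuming `meeteval` is installed (`pip install meeteval`) and its Python API behaves as shown in the MeetEval README; the file paths are placeholders:

```python
import meeteval

# Load reference and hypothesis files in SegLST format (placeholder paths).
ref = meeteval.io.load("ref.seglst.json")
hyp = meeteval.io.load("hyp.seglst.json")

# Time-constrained cpWER (tcpWER) with the leaderboard's 5-second collar.
per_session = meeteval.wer.tcpwer(reference=ref, hypothesis=hyp, collar=5)

# Combine the per-session counts into a single overall error rate.
average = meeteval.wer.combine_error_rates(per_session)
print(f"tcpWER: {average.error_rate:.2%}")
```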
"""

SUBMISSION_TAB_TITLE_MARKDOWN = """
## Submit Your Model

To submit your MT-ASR hypothesis to the benchmark, complete the form below:

- **Submitted by**: Your name or team identifier.
- **Model ID**: A unique identifier for your submission (used to track models on the leaderboard).
- **Hypothesis File**: Upload a **SegLST `.json` file** that includes **all segments across datasets** in a single list (see the sketch below).
- **Task**: Choose the evaluation task (e.g., single-channel ground-truth diarization).
- **Datasets**: Select one or more datasets you wish to evaluate on.
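
For illustration, a minimal hypothesis file could be built as sketched below. The field names follow the SegLST convention; the session ID, speaker label, times, and text are placeholders:

```python
import json

# Hypothetical example: ALL segments from ALL selected datasets go into
# one flat list, which is then written as a single .json file.
segments = [
    {
        "session_id": "session_1",  # recording / meeting identifier
        "speaker": "spk_0",         # speaker label assigned by your system
        "start_time": 0.0,          # segment start in seconds
        "end_time": 4.2,            # segment end in seconds
        "words": "hello everyone",  # transcribed text of the segment
    },
    # ... one dict per segment, for every dataset you selected
]

with open("hypothesis.seglst.json", "w") as f:
    json.dump(segments, f, indent=2)
```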

πŸ“© To enable submission, please [email the EMMA team](mailto:ipoloka@fit.vut.cz) to receive a **submission token**.

After clicking **Submit**, your model will be evaluated and the results will appear on the leaderboard.
"""

ADDITIONAL_NOTES_MARKDOWN = """


### Reference/Hypothesis File Format

πŸ› οΈ Reference annotations were constructed via the `prepare_gt.sh` script. To add a new dataset, please create a pull request modifying `prepare_gt.sh`.  
πŸ“š For details about SegLST format, please see the [SegLST documentation in MeetEval](https://github.com/fgnt/meeteval?tab=readme-ov-file#segment-wise-long-form-speech-transcription-annotation-seglst).

πŸ”„ By default, **CHiME-8 text normalization** is applied to both references and hypotheses during evaluation.  
You can disable it using the checkbox above.
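
The exact normalizer is part of the evaluation backend, so the snippet below is only a toy illustration of the kind of mapping such text normalization performs (case folding, punctuation removal); the real CHiME-8 rules are more extensive:

```python
import re

def toy_normalize(text: str) -> str:
    # Illustrative stand-in only; NOT the actual CHiME-8 normalizer.
    text = text.lower()                       # case folding
    text = re.sub(r"[^\w\s']", " ", text)     # strip punctuation
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace

print(toy_normalize("Hello, World!"))  # -> "hello world"
```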

"""


LEADERBOARD_CSS = """
#leaderboard-table th .header-content {
    white-space: nowrap;
}
"""