readme: add initial version
README.md CHANGED
@@ -1,10 +1,57 @@
 ---
 title: README
-emoji:
-colorFrom:
-colorTo:
+emoji: 📚
+colorFrom: indigo
+colorTo: purple
 sdk: static
 pinned: false
 ---

# hmBERT

hmBERT provides Historical Multilingual Language Models for Named Entity Recognition. The following languages are covered:

* English (British Library Corpus - Books)
* German (Europeana Newspaper)
* French (Europeana Newspaper)
* Finnish (Europeana Newspaper)
* Swedish (Europeana Newspaper)

More details can be found in [our GitHub repository](https://github.com/dbmdz/clef-hipe) and in our
[hmBERT paper](https://ceur-ws.org/Vol-3180/paper-87.pdf).
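
For illustration, an hmBERT checkpoint can be loaded with the Hugging Face Transformers library roughly as follows; the checkpoint name used here is only an assumed example placeholder, so substitute the hmBERT model you actually want to use:

```python
# Minimal sketch: load an hmBERT checkpoint as a masked language model.
# The model ID below is an assumed example, not taken from this README.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "dbmdz/bert-base-historic-multilingual-cased"  # example hmBERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Tokenize a (historical) sentence containing a masked token and run a forward pass.
inputs = tokenizer("Der Kaiser reiste nach [MASK].", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```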

# Leaderboard

We test our pretrained language models on various datasets from HIPE-2020, HIPE-2022 and Europeana.
The following table shows an overview of the datasets used:

| Language | Datasets                                                           |
|----------|--------------------------------------------------------------------|
| English  | [AjMC] - [TopRes19th]                                              |
| German   | [AjMC] - [NewsEye] - [HIPE-2020]                                   |
| French   | [AjMC] - [ICDAR-Europeana] - [LeTemps] - [NewsEye] - [HIPE-2020]   |
| Finnish  | [NewsEye]                                                          |
| Swedish  | [NewsEye]                                                          |
| Dutch    | [ICDAR-Europeana]                                                  |

[AjMC]: https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md
[NewsEye]: https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md
[TopRes19th]: https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md
[ICDAR-Europeana]: https://github.com/stefan-it/historic-domain-adaptation-icdar
[LeTemps]: https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md
[HIPE-2020]: https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md
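
The scores below come from fine-tuning the pretrained language models on these NER datasets. Purely as an illustration of such a fine-tuning run, here is a sketch using the Flair sequence-labeling library; Flair is only one possible choice, and the data paths, column format, checkpoint name and hyperparameters are assumptions rather than the official training configuration:

```python
# Minimal sketch (assumed setup, not the official training configuration):
# fine-tune an hmBERT checkpoint for NER on a CoNLL-style corpus with Flair.
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Hypothetical CoNLL-style files: token in column 0, NER tag in column 1.
corpus = ColumnCorpus(
    "data/ajmc-en",                     # hypothetical data folder
    {0: "text", 1: "ner"},
    train_file="train.txt",
    dev_file="dev.txt",
    test_file="test.txt",
)
label_dict = corpus.make_label_dictionary(label_type="ner")

# Use an hmBERT checkpoint as fine-tunable transformer embeddings.
embeddings = TransformerWordEmbeddings(
    model="dbmdz/bert-base-historic-multilingual-cased",  # example checkpoint
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
    use_context=True,
)

# Linear tag head on top of the transformer (no CRF, no RNN).
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "resources/taggers/hmbert-ner",     # hypothetical output directory
    learning_rate=5e-5,
    mini_batch_size=16,
    max_epochs=10,
)
```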

Results:

| Model                                                                     | English AjMC | German AjMC  | French AjMC  | German NewsEye | French NewsEye | Finnish NewsEye | Swedish NewsEye | Dutch ICDAR  | French ICDAR | French LeTemps | English TopRes19th | German HIPE-2020 | French HIPE-2020 | Avg.      |
|---------------------------------------------------------------------------|--------------|--------------|--------------|----------------|----------------|-----------------|-----------------|--------------|--------------|----------------|--------------------|------------------|------------------|-----------|
| hmBERT (32k) [Schweter et al.](https://ceur-ws.org/Vol-3180/paper-87.pdf) | 85.36 ± 0.94 | 89.08 ± 0.09 | 85.10 ± 0.60 | 39.65 ± 1.01   | 81.47 ± 0.36   | 77.28 ± 0.37    | 82.85 ± 0.83    | 82.11 ± 0.61 | 77.21 ± 0.16 | 65.73 ± 0.56   | 80.94 ± 0.86       | 79.18 ± 0.38     | 83.47 ± 0.80     | 77.65     |
| [hmTEAMS](https://huggingface.co/hmteams)                                 | 86.41 ± 0.36 | 88.64 ± 0.42 | 85.41 ± 0.67 | 41.51 ± 2.82   | 83.20 ± 0.79   | 79.27 ± 1.88    | 82.78 ± 0.60    | 88.21 ± 0.39 | 78.03 ± 0.39 | 66.71 ± 0.46   | 81.36 ± 0.59       | 80.15 ± 0.60     | 86.07 ± 0.49     | **79.06** |

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️