---
license: apache-2.0
datasets:
- allenai/c4
- wikimedia/wikipedia
- Skywork/SkyPile-150B
- uonlp/CulturaX
language:
- af
- ar
- az
- be
- bg
- bn
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- gl
- gu
- he
- hi
- hr
- ht
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ky
- lo
- lt
- lv
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- pa
- pl
- pt
- qu
- ro
- ru
- si
- sk
- sl
- so
- sq
- sr
- sv
- sw
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- yo
- zh
pipeline_tag: fill-mask
---

## gte-multilingual-mlm-base

We introduce the `mGTE` series: new generalized text encoder, embedding, and reranking models that support 75 languages and a context length of up to 8192 tokens.
The models are built on a transformer++ encoder backbone (BERT + RoPE + GLU; code available at [Alibaba-NLP/new-impl](https://huggingface.co/Alibaba-NLP/new-impl))
and reuse the vocabulary of `XLM-R`.

This text encoder (`mGTE-MLM-8192` in our [paper](https://arxiv.org/pdf/2407.19669)) outperforms the previous same-sized state of the art, [XLM-R-base](https://huggingface.co/FacebookAI/xlm-roberta-base),
on both [GLUE](https://aclanthology.org/W18-5446.pdf) and [XTREME-R](https://aclanthology.org/2021.emnlp-main.802.pdf).
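
The backbone pairs a BERT-style encoder with rotary position embeddings (RoPE) and a gated-linear-unit (GLU) feed-forward layer. Below is a purely illustrative sketch of such a gated feed-forward block; it is not the code from [Alibaba-NLP/new-impl](https://huggingface.co/Alibaba-NLP/new-impl), and the activation choice and hidden sizes are assumptions.

```python
import torch
from torch import nn


class GatedFFN(nn.Module):
    """Illustrative GLU-style feed-forward block (gated GELU variant), not the actual mGTE code."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.gate = nn.Linear(d_model, d_ff, bias=False)  # produces the gating branch
        self.up = nn.Linear(d_model, d_ff, bias=False)    # produces the value branch
        self.down = nn.Linear(d_ff, d_model, bias=False)  # projects back to model width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # GLU: elementwise product of an activated gate and a linear value projection.
        return self.down(torch.nn.functional.gelu(self.gate(x)) * self.up(x))
```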

- **Developed by**: Institute for Intelligent Computing, Alibaba Group
- **Model type**: Text Encoder
- **Paper**: [mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval](https://arxiv.org/pdf/2407.19669).

### Model list
| Models | Language | Model Size | Max Seq. Length | GLUE | XTREME-R |
|:-----: | :-----: |:-----: |:-----: |:-----: | :-----: |
|[`gte-multilingual-mlm-base`](https://huggingface.co/Alibaba-NLP/gte-multilingual-mlm-base)| Multiple | 306M | 8192 | 83.47 | 64.44 |
|[`gte-en-mlm-base`](https://huggingface.co/Alibaba-NLP/gte-en-mlm-base) | English | - | 8192 | 85.61 | - |
|[`gte-en-mlm-large`](https://huggingface.co/Alibaba-NLP/gte-en-mlm-large) | English | - | 8192 | 87.58 | - |
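
Since the card is tagged `fill-mask` and ships custom modeling code, the encoder should be loadable through `transformers` as a masked language model. The snippet below is an unofficial minimal sketch, not the repository's official example: the test sentence and decoding logic are ours, and `trust_remote_code=True` is assumed to be required for the custom backbone.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "Alibaba-NLP/gte-multilingual-mlm-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True assumed, since the backbone uses a custom implementation.
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)
model.eval()

# The XLM-R vocabulary uses <mask> as the mask token.
text = "Paris is the <mask> of France."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Pick the most likely token at the masked position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```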

## Training Details

### Training Data

- Masked language modeling (MLM): `c4-en`, `mc4`, `skypile`, `Wikipedia`, `CulturaX`, etc. (refer to [paper appendix A.1](https://arxiv.org/pdf/2407.19669))

### Training Procedure

To enable the backbone model to support a context length of 8192, we adopt a multi-stage training strategy.
The model first undergoes preliminary MLM pre-training on shorter sequences.
We then resample the data, reducing the proportion of short texts, and continue MLM pre-training.

The entire training process is as follows (a sketch of the corresponding masking setup appears after the list):
- MLM-2048: lr 2e-4, mlm_probability 0.3, batch_size 8192, num_steps 250k, rope_base 10000
- MLM-8192: lr 5e-5, mlm_probability 0.3, batch_size 2048, num_steps 30k, rope_base 160000
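
The procedure above is described only at the hyperparameter level; the actual pre-training code is not part of this repository. As a hedged illustration of what the per-stage masking setup corresponds to in the Hugging Face API, one might configure it roughly as follows (the tokenizer choice and everything not listed above are assumptions):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# mGTE reuses the XLM-R vocabulary, so the XLM-R tokenizer is used here as a stand-in.
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")

# Both stages mask 30% of input tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.3)

# Stage-specific settings taken from the list above (optimizer/schedule details omitted):
stages = {
    "MLM-2048": {"max_length": 2048, "lr": 2e-4, "batch_size": 8192, "num_steps": 250_000, "rope_base": 10_000},
    "MLM-8192": {"max_length": 8192, "lr": 5e-5, "batch_size": 2048, "num_steps": 30_000, "rope_base": 160_000},
}
# The RoPE base is raised from 10000 to 160000 for the long-context stage; the exact
# config field that carries it depends on the model implementation (not specified here).
```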


## Evaluation

| Models | Language | Model Size | Max Seq. Length | GLUE | XTREME-R |
|:-----: | :-----: |:-----: |:-----: |:-----: | :-----: |
|**[`gte-multilingual-mlm-base`](https://huggingface.co/Alibaba-NLP/gte-multilingual-mlm-base)**| Multiple | 306M | 8192 | 83.47 | 64.44 |
|**[`gte-en-mlm-base`](https://huggingface.co/Alibaba-NLP/gte-en-mlm-base)** | English | - | 8192 | 85.61 | - |
|**[`gte-en-mlm-large`](https://huggingface.co/Alibaba-NLP/gte-en-mlm-large)** | English | - | 8192 | 87.58 | - |
|[`MosaicBERT-base`](https://huggingface.co/mosaicml/mosaic-bert-base)| English | 137M | 128 | 85.4 | - |
|[`MosaicBERT-base-2048`](https://huggingface.co/mosaicml/mosaic-bert-base-seqlen-2048)| English | 137M | 2048 | 85 | - |
|`JinaBERT-base`| English | 137M | 512 | 85 | - |
|[`nomic-bert-2048`](https://huggingface.co/nomic-ai/nomic-bert-2048)| English | 137M | 2048 | 85 | 84 |
|`MosaicBERT-large`| English | 434M | 128 | 86.1 | - |
|`JinaBERT-large`| English | 434M | 512 | 83.7 | - |
|[`XLM-R-base`](https://huggingface.co/FacebookAI/xlm-roberta-base) | Multiple | 279M | 512 | 80.44 | 62.02 |
|[`RoBERTa-base`](https://huggingface.co/FacebookAI/roberta-base)| English | 125M | 512 | 86.4 | - |
|[`RoBERTa-large`](https://huggingface.co/FacebookAI/roberta-large)| English | 355M | 512 | 88.9 | - |


## Citation

If you find our paper or models helpful, please consider citing them as follows:

```bibtex
@misc{zhang2024mgtegeneralizedlongcontexttext,
      title={mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval},
      author={Xin Zhang and Yanzhao Zhang and Dingkun Long and Wen Xie and Ziqi Dai and Jialong Tang and Huan Lin and Baosong Yang and Pengjun Xie and Fei Huang and Meishan Zhang and Wenjie Li and Min Zhang},
      year={2024},
      eprint={2407.19669},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.19669},
}
```