End of training

- README.md +30 -17
- config.json +1 -2
- model.safetensors +1 -1
- special_tokens_map.json +5 -35
- tokenizer.json +66 -66
- tokenizer_config.json +4 -4
- training_args.bin +1 -1
- vocab.txt +63 -63
README.md CHANGED
@@ -1,8 +1,10 @@
 ---
 library_name: transformers
-base_model:
+base_model: mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es
 tags:
 - generated_from_trainer
+metrics:
+- f1
 model-index:
 - name: fge-robos-qa-model
   results: []
@@ -13,9 +15,20 @@ should probably proofread and complete it, then remove this comment. -->
 
 # fge-robos-qa-model
 
-This model is a fine-tuned version of [
+This model is a fine-tuned version of [mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es](https://huggingface.co/mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.
+- Loss: 1.0088
+- Model Preparation Time: 0.0077
+- Exact: 55.7377
+- F1: 82.8805
+- Total: 915
+- Hasans Exact: 55.7377
+- Hasans F1: 82.8805
+- Hasans Total: 915
+- Best Exact: 55.7377
+- Best Exact Thresh: 0.0
+- Best F1: 82.8805
+- Best F1 Thresh: 0.0
 
 ## Model description
 
@@ -35,8 +48,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size:
-- eval_batch_size:
+- train_batch_size: 64
+- eval_batch_size: 64
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
@@ -45,18 +58,18 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|
-| No log | 1.0 |
-| No log | 2.0 |
-|
-|
-|
-|
-|
-|
-| 0.
-| 0.
+| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Exact | F1 | Total | Hasans Exact | Hasans F1 | Hasans Total | Best Exact | Best Exact Thresh | Best F1 | Best F1 Thresh |
+|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:-------:|:-------:|:-----:|:------------:|:---------:|:------------:|:----------:|:-----------------:|:-------:|:--------------:|
+| No log | 1.0 | 58 | 0.8963 | 0.0077 | 54.2077 | 81.4281 | 915 | 54.2077 | 81.4281 | 915 | 54.2077 | 0.0 | 81.4281 | 0.0 |
+| No log | 2.0 | 116 | 0.9578 | 0.0077 | 54.9727 | 82.3694 | 915 | 54.9727 | 82.3694 | 915 | 54.9727 | 0.0 | 82.3694 | 0.0 |
+| No log | 3.0 | 174 | 1.0088 | 0.0077 | 55.7377 | 82.8805 | 915 | 55.7377 | 82.8805 | 915 | 55.7377 | 0.0 | 82.8805 | 0.0 |
+| No log | 4.0 | 232 | 1.0865 | 0.0077 | 54.4262 | 81.7459 | 915 | 54.4262 | 81.7459 | 915 | 54.4262 | 0.0 | 81.7459 | 0.0 |
+| No log | 5.0 | 290 | 1.2034 | 0.0077 | 53.7705 | 81.5328 | 915 | 53.7705 | 81.5328 | 915 | 53.7705 | 0.0 | 81.5328 | 0.0 |
+| No log | 6.0 | 348 | 1.2822 | 0.0077 | 54.2077 | 81.9985 | 915 | 54.2077 | 81.9985 | 915 | 54.2077 | 0.0 | 81.9985 | 0.0 |
+| No log | 7.0 | 406 | 1.3357 | 0.0077 | 54.2077 | 81.7294 | 915 | 54.2077 | 81.7294 | 915 | 54.2077 | 0.0 | 81.7294 | 0.0 |
+| No log | 8.0 | 464 | 1.3738 | 0.0077 | 54.6448 | 81.9526 | 915 | 54.6448 | 81.9526 | 915 | 54.6448 | 0.0 | 81.9526 | 0.0 |
+| 0.4292 | 9.0 | 522 | 1.4215 | 0.0077 | 54.7541 | 81.7385 | 915 | 54.7541 | 81.7385 | 915 | 54.7541 | 0.0 | 81.7385 | 0.0 |
+| 0.4292 | 10.0 | 580 | 1.4342 | 0.0077 | 53.4426 | 81.3729 | 915 | 53.4426 | 81.3729 | 915 | 53.4426 | 0.0 | 81.3729 | 0.0 |
 
 
 ### Framework versions
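The hyperparameter list in the updated card maps one-to-one onto `transformers.TrainingArguments`. A minimal sketch, assuming per-epoch evaluation and the 10 epochs implied by the results table; `output_dir` is a placeholder:

```python
# Sketch of TrainingArguments matching the hyperparameters in the card;
# output_dir is a placeholder, and num_train_epochs / the per-epoch
# evaluation cadence are read off the results table, not the commit itself.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="fge-robos-qa-model",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",              # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # betas=(0.9,0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=10,              # 10 epochs per the results table
    eval_strategy="epoch",            # named evaluation_strategy in older releases
)
```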
config.json CHANGED
@@ -1,11 +1,10 @@
 {
-  "_name_or_path": "
+  "_name_or_path": "mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es",
   "architectures": [
     "BertForQuestionAnswering"
   ],
   "attention_probs_dropout_prob": 0.1,
   "classifier_dropout": null,
-  "gradient_checkpointing": false,
   "hidden_act": "gelu",
   "hidden_dropout_prob": 0.1,
   "hidden_size": 768,
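Since the config declares `BertForQuestionAnswering`, the checkpoint loads through the standard question-answering pipeline. A loading sketch; the repo id is a placeholder, since the commit does not show where the model is hosted:

```python
# Loading sketch for the architecture declared in config.json; the repo id
# below is a placeholder, not taken from this commit.
from transformers import pipeline

qa = pipeline("question-answering", model="<user>/fge-robos-qa-model")
print(qa(
    question="¿Quién firmó el contrato?",
    context="El contrato fue firmado por la directora general en marzo.",
))
# expected shape: {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```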
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:cc95f206a6a7e8fdaed4433af6fb5de672810c648563c455792dfa0832b0698e
 size 437070648
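The two lines above are a Git LFS pointer: the actual weights live out-of-band and are identified by size plus SHA-256. A verification sketch for a locally downloaded copy (the local file path is assumed):

```python
# Sketch: check a downloaded model.safetensors against the size and
# sha256 oid recorded in the LFS pointer above.
import hashlib
from pathlib import Path

path = Path("model.safetensors")  # assumed local path
assert path.stat().st_size == 437070648

h = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == "cc95f206a6a7e8fdaed4433af6fb5de672810c648563c455792dfa0832b0698e"
```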
special_tokens_map.json CHANGED
@@ -1,37 +1,7 @@
 {
-  "cls_token": {
-    "content": "[CLS]",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  },
-  "mask_token": {
-    "content": "[MASK]",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  },
-  "pad_token": {
-    "content": "[PAD]",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  },
-  "sep_token": {
-    "content": "[SEP]",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  },
-  "unk_token": {
-    "content": "[UNK]",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  }
+  "cls_token": "[CLS]",
+  "mask_token": "[MASK]",
+  "pad_token": "[PAD]",
+  "sep_token": "[SEP]",
+  "unk_token": "[UNK]"
 }
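The change collapses each expanded `AddedToken` object to its plain string form; both serializations resolve to the same special tokens once loaded. A quick check sketch, with a placeholder repo id:

```python
# Quick check (sketch): the compact string form still yields the same
# special-token strings after loading; repo id is a placeholder.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("<user>/fge-robos-qa-model")
print(tok.special_tokens_map)
# {'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]',
#  'cls_token': '[CLS]', 'mask_token': '[MASK]'}
```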
tokenizer.json CHANGED
@@ -3,7 +3,7 @@
   "truncation": {
     "direction": "Right",
     "max_length": 384,
-    "strategy": "
+    "strategy": "OnlySecond",
     "stride": 128
   },
   "padding": {
@@ -67,8 +67,8 @@
     "type": "BertNormalizer",
     "clean_text": true,
     "handle_chinese_chars": true,
-    "strip_accents":
-    "lowercase":
+    "strip_accents": null,
+    "lowercase": true
   },
   "pre_tokenizer": {
     "type": "BertPreTokenizer"
@@ -1100,69 +1100,69 @@
       "[unused932]": 938,
       "[unused933]": 939,
       "[unused934]": 940,
-      [63 removed vocab entries for ids 941–1003; truncated in this view apart from "]": 958 and "[": 960]
+      "[unused935]": 941,
+      "[unused936]": 942,
+      "[unused937]": 943,
+      "[unused938]": 944,
+      "[unused939]": 945,
+      "[unused940]": 946,
+      "[unused941]": 947,
+      "[unused942]": 948,
+      "[unused943]": 949,
+      "[unused944]": 950,
+      "[unused945]": 951,
+      "[unused946]": 952,
+      "[unused947]": 953,
+      "[unused948]": 954,
+      "[unused949]": 955,
+      "[unused950]": 956,
+      "[unused951]": 957,
+      "[unused952]": 958,
+      "[unused953]": 959,
+      "[unused954]": 960,
+      "[unused955]": 961,
+      "[unused956]": 962,
+      "[unused957]": 963,
+      "[unused958]": 964,
+      "[unused959]": 965,
+      "[unused960]": 966,
+      "[unused961]": 967,
+      "[unused962]": 968,
+      "[unused963]": 969,
+      "[unused964]": 970,
+      "[unused965]": 971,
+      "[unused966]": 972,
+      "[unused967]": 973,
+      "[unused968]": 974,
+      "[unused969]": 975,
+      "[unused970]": 976,
+      "[unused971]": 977,
+      "[unused972]": 978,
+      "[unused973]": 979,
+      "[unused974]": 980,
+      "[unused975]": 981,
+      "[unused976]": 982,
+      "[unused977]": 983,
+      "[unused978]": 984,
+      "[unused979]": 985,
+      "[unused980]": 986,
+      "[unused981]": 987,
+      "[unused982]": 988,
+      "[unused983]": 989,
+      "[unused984]": 990,
+      "[unused985]": 991,
+      "[unused986]": 992,
+      "[unused987]": 993,
+      "[unused988]": 994,
+      "[unused989]": 995,
+      "[unused990]": 996,
+      "[unused991]": 997,
+      "[unused992]": 998,
+      "[unused993]": 999,
+      "[unused994]": 1000,
+      "[unused995]": 1001,
+      "[unused996]": 1002,
+      "[unused997]": 1003,
       "w": 1004,
       "W": 1005,
       "##de": 1006,
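The truncation block is the serialized form of the usual QA sliding-window setup: only the context (the second sequence) is truncated, into 384-token windows overlapping by a 128-token stride. A preprocessing sketch, with illustrative strings and a placeholder repo id:

```python
# Sketch of the tokenizer call these settings serialize; question/context
# strings are illustrative and the repo id is a placeholder.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("<user>/fge-robos-qa-model")
enc = tok(
    "¿Dónde ocurrió el robo?",                 # first sequence: never truncated
    "El robo ocurrió en la sucursal. " * 100,  # long context (second sequence)
    truncation="only_second",                  # "strategy": "OnlySecond"
    max_length=384,                            # "max_length": 384
    stride=128,                                # "stride": 128
    return_overflowing_tokens=True,            # emit one window per overflow
)
print(len(enc["input_ids"]))                   # number of 384-token windows
```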
tokenizer_config.json CHANGED
@@ -41,17 +41,17 @@
       "special": true
     }
   },
-  "clean_up_tokenization_spaces":
+  "clean_up_tokenization_spaces": true,
   "cls_token": "[CLS]",
   "do_basic_tokenize": true,
-  "do_lower_case":
+  "do_lower_case": true,
   "extra_special_tokens": {},
   "mask_token": "[MASK]",
-  "model_max_length":
+  "model_max_length": 1000000000000000019884624838656,
   "never_split": null,
   "pad_token": "[PAD]",
   "sep_token": "[SEP]",
-  "strip_accents":
+  "strip_accents": null,
   "tokenize_chinese_chars": true,
   "tokenizer_class": "BertTokenizer",
   "unk_token": "[UNK]"
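The enormous `model_max_length` is not a real limit: it is `int(1e30)`, the sentinel transformers writes when no maximum was recorded, so the effective window here comes from the `max_length: 384` truncation in tokenizer.json. A small guard (sketch; the repo id is a placeholder and the 512 cap is the typical BERT positional limit, assumed rather than read from this commit):

```python
# Sketch: treat the int(1e30) sentinel as "unset" and clamp to BERT's
# usual 512-position limit; repo id is a placeholder.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("<user>/fge-robos-qa-model")
if tok.model_max_length > 1_000_000:  # sentinel, not a real limit
    tok.model_max_length = 512        # assumed BERT positional-embedding cap
```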
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4561a5c8ee9d15a87632b0372ae2f9221a0bb7c44395affa595398b178fb37cc
 size 5304
vocab.txt CHANGED
@@ -939,69 +939,69 @@
 [unused932]
 [unused933]
 [unused934]
-[63 removed lines at 942–1004; truncated in this view apart from "]" at line 959 and "[" at line 961]
+[unused935]
+[unused936]
+[unused937]
+[unused938]
+[unused939]
+[unused940]
+[unused941]
+[unused942]
+[unused943]
+[unused944]
+[unused945]
+[unused946]
+[unused947]
+[unused948]
+[unused949]
+[unused950]
+[unused951]
+[unused952]
+[unused953]
+[unused954]
+[unused955]
+[unused956]
+[unused957]
+[unused958]
+[unused959]
+[unused960]
+[unused961]
+[unused962]
+[unused963]
+[unused964]
+[unused965]
+[unused966]
+[unused967]
+[unused968]
+[unused969]
+[unused970]
+[unused971]
+[unused972]
+[unused973]
+[unused974]
+[unused975]
+[unused976]
+[unused977]
+[unused978]
+[unused979]
+[unused980]
+[unused981]
+[unused982]
+[unused983]
+[unused984]
+[unused985]
+[unused986]
+[unused987]
+[unused988]
+[unused989]
+[unused990]
+[unused991]
+[unused992]
+[unused993]
+[unused994]
+[unused995]
+[unused996]
+[unused997]
 w
 W
 ##de