dantedgp committed
Commit 9ece9a1 · verified · 1 Parent(s): 2a68710

End of training

Files changed (2)
  1. README.md +70 -0
  2. generation_config.json +6 -0
README.md ADDED
@@ -0,0 +1,70 @@
+ ---
+ license: apache-2.0
+ base_model: google/flan-t5-small
+ tags:
+ - generated_from_trainer
+ metrics:
+ - rouge
+ model-index:
+ - name: flan-t5-small-finetuned-question-generation
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # flan-t5-small-finetuned-question-generation
+
+ This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.5103
+ - Rouge1: 0.4692
+ - Rouge2: 0.2472
+ - Rougel: 0.4300
+ - Rougelsum: 0.4314
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5.6e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 8
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
+ | 1.9748 | 1.0 | 561 | 1.6071 | 0.4531 | 0.2304 | 0.4114 | 0.4114 |
+ | 1.7823 | 2.0 | 1122 | 1.5643 | 0.4561 | 0.2361 | 0.4197 | 0.4202 |
+ | 1.692 | 3.0 | 1683 | 1.5422 | 0.4582 | 0.2342 | 0.4210 | 0.4212 |
+ | 1.6226 | 4.0 | 2244 | 1.5243 | 0.4655 | 0.2447 | 0.4288 | 0.4301 |
+ | 1.5668 | 5.0 | 2805 | 1.5146 | 0.4625 | 0.2402 | 0.4257 | 0.4261 |
+ | 1.5281 | 6.0 | 3366 | 1.5083 | 0.4651 | 0.2423 | 0.4293 | 0.4304 |
+ | 1.5058 | 7.0 | 3927 | 1.5100 | 0.4670 | 0.2456 | 0.4290 | 0.4302 |
+ | 1.4834 | 8.0 | 4488 | 1.5103 | 0.4692 | 0.2472 | 0.4300 | 0.4314 |
+
+
+ ### Framework versions
+
+ - Transformers 4.42.4
+ - Pytorch 2.3.1
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
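
The hyperparameters listed in the card map directly onto `Seq2SeqTrainingArguments` for the Hugging Face `Trainer`. Below is a minimal sketch of that configuration; the actual training script is not part of this commit, so the output directory is a placeholder and the per-device batch-size interpretation is an assumption:

```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the hyperparameters listed in the model card.
# output_dir is a placeholder; the real training script is not included here.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-small-finetuned-question-generation",
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=8,
    lr_scheduler_type="linear",
    eval_strategy="epoch",        # the results table reports one evaluation per epoch
    predict_with_generate=True,   # needed so ROUGE can be computed on generated text
)
```

The Adam betas and epsilon shown in the card are the library defaults, so they need no explicit arguments.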
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "pad_token_id": 0,
+   "transformers_version": "4.42.4"
+ }
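
For inference, the checkpoint and the generation defaults above can be loaded together. A minimal sketch follows, assuming the model is published under the committer's namespace (`dantedgp/flan-t5-small-finetuned-question-generation`) and that it expects a context passage as input; the exact prompt format is unknown because the training dataset is not documented:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repo id; adjust if the model lives under a different namespace.
model_id = "dantedgp/flan-t5-small-finetuned-question-generation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input only; the real prompt format depends on the undocumented training data.
context = "The Eiffel Tower was completed in 1889 and stands in Paris."
inputs = tokenizer(context, return_tensors="pt")

# generate() applies decoder_start_token_id, eos_token_id, and pad_token_id
# from the generation_config.json added in this commit.
output_ids = model.generate(**inputs, max_new_tokens=48, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```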