KasuleTrevor committed
Commit 78ab3ae · verified · 1 Parent(s): 39f05c9

End of training

Files changed (2)
  1. README.md +81 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,81 @@
+ ---
+ library_name: transformers
+ base_model: KasuleTrevor/wav2vec2-xls-r-300m-nyn_filtered-yogera-v3
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - precision
+ - recall
+ - f1
+ model-index:
+ - name: Luganda_speech_to_intent_nyn_xlsr
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Luganda_speech_to_intent_nyn_xlsr
+
+ This model is a fine-tuned version of [KasuleTrevor/wav2vec2-xls-r-300m-nyn_filtered-yogera-v3](https://huggingface.co/KasuleTrevor/wav2vec2-xls-r-300m-nyn_filtered-yogera-v3) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1401
+ - Accuracy: 0.9757
+ - Precision: 0.9761
+ - Recall: 0.9757
+ - F1: 0.9755
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
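+ The card does not include inference code. Below is a minimal usage sketch, assuming the checkpoint exposes a standard audio-classification head and that the repo id matches the card name; both are assumptions, not confirmed by this card:
+
+ ```python
+ # Minimal inference sketch (assumed API, not the author's script).
+ from transformers import pipeline
+
+ # Repo id inferred from the card name; it may differ in practice.
+ classifier = pipeline(
+     "audio-classification",
+     model="KasuleTrevor/Luganda_speech_to_intent_nyn_xlsr",
+ )
+
+ # "utterance.wav" is a placeholder path; wav2vec2-style models typically
+ # expect 16 kHz mono audio.
+ predictions = classifier("utterance.wav")
+ print(predictions)  # [{"label": "<intent>", "score": ...}, ...]
+ ```
+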
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 32
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 64
+ - optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08); no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 500
+ - num_epochs: 30
+ - mixed_precision_training: Native AMP
+
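+ For reference, the hyperparameters above map onto `transformers.TrainingArguments` roughly as sketched below; the `output_dir` is a placeholder and the author's actual training script is not shown in this card:
+
+ ```python
+ # Hypothetical reconstruction of the training arguments listed above.
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="Luganda_speech_to_intent_nyn_xlsr",  # placeholder
+     learning_rate=1e-4,
+     per_device_train_batch_size=32,
+     per_device_eval_batch_size=8,
+     seed=42,
+     gradient_accumulation_steps=2,  # effective train batch size: 64
+     optim="adamw_torch",
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="cosine",
+     warmup_steps=500,
+     num_train_epochs=30,
+     fp16=True,  # "Native AMP" mixed-precision training
+ )
+ ```
+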
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
+ | 2.9405        | 1.0   | 131  | 2.3617          | 0.5163   | 0.4655    | 0.5163 | 0.4450 |
+ | 1.9336        | 2.0   | 262  | 0.1540          | 0.9859   | 0.9865    | 0.9859 | 0.9858 |
+ | 0.3581        | 3.0   | 393  | 0.0748          | 0.9902   | 0.9904    | 0.9902 | 0.9902 |
+ | 0.1253        | 4.0   | 524  | 0.0730          | 0.9881   | 0.9884    | 0.9881 | 0.9881 |
+ | 0.1166        | 5.0   | 655  | 0.0609          | 0.9913   | 0.9915    | 0.9913 | 0.9913 |
+ | 0.1071        | 6.0   | 786  | 0.0667          | 0.9913   | 0.9915    | 0.9913 | 0.9913 |
+ | 0.0836        | 7.0   | 917  | 0.0601          | 0.9902   | 0.9904    | 0.9902 | 0.9902 |
+ | 0.0736        | 8.0   | 1048 | 0.0611          | 0.9913   | 0.9915    | 0.9913 | 0.9913 |
+ | 0.0612        | 9.0   | 1179 | 0.0633          | 0.9902   | 0.9904    | 0.9902 | 0.9902 |
+ | 0.0553        | 10.0  | 1310 | 0.0657          | 0.9902   | 0.9904    | 0.9902 | 0.9902 |
+ | 0.0478        | 11.0  | 1441 | 0.0650          | 0.9913   | 0.9915    | 0.9913 | 0.9913 |
+ | 0.0392        | 12.0  | 1572 | 0.0681          | 0.9902   | 0.9904    | 0.9902 | 0.9902 |
+
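+ The card does not state how accuracy, precision, recall, and F1 were aggregated across intent classes; a plausible `compute_metrics` for the Trainer (weighted averaging is an assumption) would look like:
+
+ ```python
+ # Hypothetical compute_metrics; weighted averaging across classes is
+ # assumed, not confirmed by this card.
+ import numpy as np
+ from sklearn.metrics import accuracy_score, precision_recall_fscore_support
+
+ def compute_metrics(eval_pred):
+     logits, labels = eval_pred
+     preds = np.argmax(logits, axis=-1)
+     precision, recall, f1, _ = precision_recall_fscore_support(
+         labels, preds, average="weighted", zero_division=0
+     )
+     return {
+         "accuracy": accuracy_score(labels, preds),
+         "precision": precision,
+         "recall": recall,
+         "f1": f1,
+     }
+ ```
+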
+ ### Framework versions
+
+ - Transformers 4.51.3
+ - Pytorch 2.1.0+cu118
+ - Datasets 3.6.0
+ - Tokenizers 0.21.2
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:bf7f1fcbb48bfe511c82d765bdbe11bacf7c6252065c7073f30615773b3dbc6f
+ oid sha256:e701d05e6c319dccc9f52cc99b0800df86e3694be01d334c968acd25e34f53bd
 size 1267137848