wav2vec2-pretraining-demo

This model is a fine-tuned version of facebook/wav2vec2-base-960h on an unknown dataset. It achieves the following results on the evaluation set (a sketch of how these pretraining losses are computed appears after the list):

  • Loss: 902.7379
  • Contrastive Loss: 891.6586
  • Diversity Loss: 110.7930
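
The contrastive and diversity losses above are the two terms of the wav2vec 2.0 pretraining objective. Below is a minimal sketch of reproducing them with `Wav2Vec2ForPreTraining`; the dummy audio, the masking parameters, and borrowing the feature extractor from the base checkpoint (in case this repo ships no preprocessor config) are all assumptions for illustration, not this card's evaluation setup.

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining
from transformers.models.wav2vec2.modeling_wav2vec2 import (
    _compute_mask_indices,
    _sample_negative_indices,
)

# Feature extractor borrowed from the base checkpoint (assumption).
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForPreTraining.from_pretrained("nguyencaong/wav2vec2-pretraining-demo")

# One second of dummy 16 kHz audio stands in for a real evaluation sample.
raw_audio = np.random.randn(16000).astype(np.float32)
input_values = feature_extractor(raw_audio, sampling_rate=16000, return_tensors="pt").input_values

batch_size, raw_sequence_length = input_values.shape
sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()

# Sample masked time steps and distractor (negative) indices, as in
# wav2vec 2.0 pretraining; mask_prob/mask_length here are illustrative.
mask_time_indices = _compute_mask_indices(
    shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2
)
sampled_negative_indices = _sample_negative_indices(
    features_shape=(batch_size, sequence_length),
    num_negatives=model.config.num_negatives,
    mask_time_indices=mask_time_indices,
)
mask_time_indices = torch.tensor(mask_time_indices, dtype=torch.long)
sampled_negative_indices = torch.tensor(sampled_negative_indices, dtype=torch.long)

with torch.no_grad():
    outputs = model(
        input_values,
        mask_time_indices=mask_time_indices,
        sampled_negative_indices=sampled_negative_indices,
    )

print(outputs.contrastive_loss, outputs.diversity_loss)
```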

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` reconstruction is sketched after this list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
  • mixed_precision_training: Native AMP
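
As a rough reconstruction, the list above maps onto a `transformers.TrainingArguments` configuration along these lines; this is a hedged sketch, and `output_dir` plus anything not listed are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-pretraining-demo",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",        # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,                  # "Native AMP" mixed-precision training
)
```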

Training results

| Training Loss | Epoch  | Step | Validation Loss | Contrastive Loss | Diversity Loss |
|:-------------:|:------:|:----:|:---------------:|:----------------:|:--------------:|
| 845.359       | 0.6289 | 100  | 955.3924        | 941.3030         | 140.8944       |
| 774.0615      | 1.2579 | 200  | 921.1964        | 908.8786         | 123.1788       |
| 839.9893      | 1.8868 | 300  | 932.0772        | 920.1297         | 119.4754       |
| 994.4874      | 2.5157 | 400  | 885.8154        | 874.5105         | 113.0493       |
| 772.1645      | 3.1447 | 500  | 933.2662        | 921.5520         | 117.1420       |
| 1298.7873     | 3.7736 | 600  | 920.5379        | 909.0538         | 114.8405       |
| 775.2367      | 4.4025 | 700  | 904.8465        | 893.6350         | 112.1154       |
| 1198.9295     | 5.0314 | 800  | 870.4927        | 859.6562         | 108.3643       |
| 732.0786      | 5.6604 | 900  | 911.9991        | 900.8566         | 111.4251       |
| 1380.1144     | 6.2893 | 1000 | 917.1681        | 905.9143         | 112.5378       |
| 776.9523      | 6.9182 | 1100 | 867.9667        | 857.2078         | 107.5892       |
| 1467.2303     | 7.5472 | 1200 | 897.4872        | 886.5748         | 109.1243       |
| 715.7438      | 8.1761 | 1300 | 885.5976        | 874.7515         | 108.4605       |
| 1226.158      | 8.8050 | 1400 | 927.9101        | 916.7936         | 111.1654       |
| 624.3603      | 9.4340 | 1500 | 902.7379        | 891.6586         | 110.7930       |
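
The validation rows are consistent with the wav2vec 2.0 objective loss = contrastive_loss + 0.1 × diversity_loss (0.1 being the default `diversity_loss_weight` in `Wav2Vec2Config`); for the final row, 891.6586 + 0.1 × 110.7930 = 902.7379, which also matches the evaluation results reported at the top of this card.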

Framework versions

  • Transformers 4.53.1
  • PyTorch 2.7.1+cu126
  • Datasets 4.0.0
  • Tokenizers 0.21.1
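
To approximate this environment: `pip install transformers==4.53.1 datasets==4.0.0 tokenizers==0.21.1`, together with a CUDA 12.6 build of PyTorch 2.7.1.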