indobert-post-training-fin-sa

This model is a post-trained and fine-tuned version of indobert-base-p1 (see Model description below). It achieves the following results on the evaluation set:

  • Loss: 0.3027
  • Accuracy: 0.9505

Model description

This model attempts to reproduce the results of the paper arXiv:2310.09736 [cs.CL] by post-training indobert-base-p1 on the (unprocessed) Financial News Articles dataset and then fine-tuning it on the Indonesian Financial Phrasebank dataset (80/20 train/test split). It achieves the following results on the testing set (a usage sketch follows the metrics):

  • Loss: 0.2315
  • Accuracy: 0.9470
  • Epoch: 2.7451
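For quick experimentation, the checkpoint can be loaded with the standard `transformers` text-classification pipeline. A minimal sketch follows; the repo id and the three-way Financial Phrasebank label set (positive/neutral/negative) are assumptions, not confirmed by this card:

```python
# Minimal usage sketch. The repo id below is hypothetical, and the label names
# depend on this model's config (Financial Phrasebank is typically
# positive/neutral/negative).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-username/indobert-post-training-fin-sa",  # hypothetical repo id
)

# "The company's profit rose 20% this quarter."
print(classifier("Laba perseroan naik 20% pada kuartal ini."))
# -> [{'label': 'positive', 'score': ...}]
```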

Intended uses & limitations

The dataset used for post-training this model has not yet been cleaned. Specifically, the major problems I identified are:

  • The text column contains entire article bodies as entries. When tokenizing the dataset, each entry is truncated to 512 tokens to fit BERT's context window, so most of the data in each entry is lost (see the tokenization sketch after this list).
  • The text entries are not properly cleaned. Specifically, article header/location info, recommendation modal texts (appearing as "Baca Juga", i.e. "Read also"), and the standard Google News footer are still included.
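The truncation described in the first item corresponds to the standard tokenizer call; a minimal sketch, assuming the article bodies live in a dataset column named `text` (the column name is an assumption):

```python
# Sketch of the truncating tokenization described above; the dataset column
# name "text" is an assumption. Only the first 512 tokens of each article
# survive, so most of a full article body is discarded.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobert-base-p1")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

# tokenized = dataset.map(tokenize, batched=True)  # drops everything past 512 tokens
```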

The follow-up model is post-trained after addressing these problems in the dataset.
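As an illustration only, a cleanup along these lines might look like the sketch below; the regular expressions are hypothetical stand-ins, not the preprocessing actually used for the follow-up model:

```python
# Hypothetical cleanup sketch for the problems listed above; the patterns are
# illustrative guesses, not the actual preprocessing code.
import re

def clean_article(text: str) -> str:
    # Drop "Baca Juga" ("Read also") recommendation lines.
    text = re.sub(r"Baca Juga:.*", "", text)
    # Drop a Google News-style footer line (exact wording is an assumption).
    text = re.sub(r".*di Google News.*", "", text)
    # Collapse leftover whitespace.
    return re.sub(r"\s+", " ", text).strip()
```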

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 8
  • mixed_precision_training: Native AMP
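The list above maps directly onto a `transformers` `TrainingArguments` object; a minimal sketch, where the output directory and the every-10-steps evaluation cadence (inferred from the results table below) are assumptions:

```python
# Sketch of TrainingArguments matching the hyperparameters above. The output
# directory is hypothetical; eval_steps=10 is inferred from the results table.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="indobert-post-training-fin-sa",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",          # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=8,
    fp16=True,                    # Native AMP mixed precision
    eval_strategy="steps",        # assumption: evaluate every 10 steps
    eval_steps=10,
    logging_steps=10,
)
```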

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5935        | 0.1961 | 10   | 0.5789          | 0.7363   |
| 0.4291        | 0.3922 | 20   | 0.2914          | 0.9121   |
| 0.3427        | 0.5882 | 30   | 0.2236          | 0.9451   |
| 0.2135        | 0.7843 | 40   | 0.1849          | 0.9451   |
| 0.1754        | 0.9804 | 50   | 0.1987          | 0.9286   |
| 0.1782        | 1.1765 | 60   | 0.1769          | 0.9451   |
| 0.1243        | 1.3725 | 70   | 0.1814          | 0.9505   |
| 0.0647        | 1.5686 | 80   | 0.1863          | 0.9396   |
| 0.142         | 1.7647 | 90   | 0.1948          | 0.9396   |
| 0.0937        | 1.9608 | 100  | 0.1896          | 0.9396   |
| 0.042         | 2.1569 | 110  | 0.2223          | 0.9286   |
| 0.0339        | 2.3529 | 120  | 0.2156          | 0.9505   |
| 0.0277        | 2.5490 | 130  | 0.2604          | 0.9451   |
| 0.0942        | 2.7451 | 140  | 0.3027          | 0.9505   |

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1

Testing results

  • Loss: 0.2315
  • Accuracy: 0.9470
  • Runtime: 1.4549 s (311.35 samples/s, 10.31 steps/s)
  • Epoch: 2.7451
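These numbers come from the dictionary returned by `Trainer.evaluate` on the held-out test split. A minimal sketch of producing them, assuming accuracy is computed with the `evaluate` library and wired into the trainer (variable names are assumptions):

```python
# Sketch of reproducing the test metrics above; `tokenized_test` and the
# Trainer construction are assumed to exist elsewhere.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=preds, references=labels)

# trainer = Trainer(..., compute_metrics=compute_metrics)
# metrics = trainer.evaluate(tokenized_test)
# -> {'eval_loss': 0.2315, 'eval_accuracy': 0.9470, ...}
```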
