Pushing fine-tuned model to Hugging Face Hub
[2025-07-04 03:56:18,799][__main__][INFO] - cache_dir: /tmp/
dataset:
  name: kamel-usp/aes_enem_dataset
  split: JBCS2025
training_params:
  seed: 42
  num_train_epochs: 20
  logging_steps: 100
  metric_for_best_model: QWK
  bf16: true
bootstrap:
  enabled: true
  n_bootstrap: 10000
  bootstrap_seed: 42
  metrics:
    - QWK
    - Macro_F1
    - Weighted_F1
post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
experiments:
  model:
    name: meta-llama/Llama-3.1-8B
    type: llama31_classification_lora
    num_labels: 6
    output_dir: ./results/
    logging_dir: ./logs/
    best_model_dir: ./results/best_model
    lora_r: 8
    lora_dropout: 0.05
    lora_alpha: 16
    lora_target_modules: all-linear
  tokenizer:
    name: meta-llama/Llama-3.1-8B
  dataset:
    grade_index: 0
    use_full_context: true
  training_params:
    weight_decay: 0.01
    warmup_ratio: 0.1
    learning_rate: 5.0e-05
    train_batch_size: 8
    eval_batch_size: 4
    gradient_accumulation_steps: 2
    gradient_checkpointing: true
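For readers reproducing this run, the experiments block above maps directly onto the peft and transformers APIs. A minimal sketch of the model setup, assuming the standard peft interface (illustrative, not the repository's actual code):

from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# Illustrative setup matching the config above (assumed, not the jbcs2025 source).
model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.1-8B",
    num_labels=6,                     # experiments.model.num_labels
    torch_dtype="bfloat16",           # training_params.bf16: true
)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,       # keeps the new classification head trainable
    r=8,                              # lora_r
    lora_alpha=16,                    # lora_alpha
    lora_dropout=0.05,                # lora_dropout
    target_modules="all-linear",      # lora_target_modules
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()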
[2025-07-04 03:56:22,701][__main__][INFO] - GPU 0: NVIDIA H200 | TDP 700 W
[2025-07-04 03:56:22,702][__main__][INFO] - Starting the Fine Tuning training process.
[2025-07-04 03:56:26,836][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/tokenizer.json
[2025-07-04 03:56:26,836][transformers.tokenization_utils_base][INFO] - loading file tokenizer.model from cache at None
[2025-07-04 03:56:26,836][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at None
[2025-07-04 03:56:26,836][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/special_tokens_map.json
[2025-07-04 03:56:26,837][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/tokenizer_config.json
[2025-07-04 03:56:26,837][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
[2025-07-04 03:56:27,105][transformers.tokenization_utils_base][INFO] - Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[2025-07-04 03:56:27,110][__main__][INFO] - Tokenizer function parameters - Padding: longest; Truncation: False; Use Full Context: True
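These settings (dynamic padding to the longest sequence in the batch, no truncation) correspond to a batched map such as the sketch below; the essay_text column name comes from the dataset columns listed later in the log, everything else is an assumption:

def tokenize_fn(batch):
    # padding="longest" pads each batch dynamically; truncation=False keeps
    # full essays, matching use_full_context: true in the config.
    return tokenizer(batch["essay_text"], padding="longest", truncation=False)

tokenized = dataset.map(tokenize_fn, batched=True)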
[2025-07-04 03:56:28,599][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 03:56:28,601][transformers.configuration_utils][INFO] - Model config LlamaConfig {
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2",
    "3": "LABEL_3",
    "4": "LABEL_4",
    "5": "LABEL_5"
  },
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2,
    "LABEL_3": 3,
    "LABEL_4": 4,
    "LABEL_5": 5
  },
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.53.0",
  "use_cache": true,
  "vocab_size": 128256
}
[2025-07-04 03:56:28,738][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/model.safetensors.index.json
[2025-07-04 03:56:28,739][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.bfloat16 as defined in model's config object
[2025-07-04 03:56:28,739][transformers.modeling_utils][INFO] - Instantiating LlamaForSequenceClassification model under default dtype torch.bfloat16.
[2025-07-04 03:56:41,477][transformers.modeling_utils][INFO] - Some weights of the model checkpoint at meta-llama/Llama-3.1-8B were not used when initializing LlamaForSequenceClassification: ['lm_head.weight']
- This IS expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2025-07-04 03:56:41,478][transformers.modeling_utils][WARNING] - Some weights of LlamaForSequenceClassification were not initialized from the model checkpoint at meta-llama/Llama-3.1-8B and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[2025-07-04 03:56:42,555][__main__][INFO] - Initialized new PEFT model for ce loss
[2025-07-04 03:56:42,558][__main__][INFO] - None
[2025-07-04 03:56:42,559][transformers.training_args][INFO] - PyTorch: setting up devices
[2025-07-04 03:56:42,594][__main__][INFO] - Total steps: 620. Number of warmup steps: 62
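The 620/62 figures are consistent with warmup_ratio: 0.1 applied to a floor-rounded step count, while the Trainer itself (below) rounds up and reports 640; a quick check of the assumed arithmetic:

import math

# Assumed reproduction of the step arithmetic; example and batch counts from the log.
examples, per_device_bs, grad_accum, epochs = 500, 8, 2, 20
effective_batch = per_device_bs * grad_accum                    # 16
script_total = (examples // effective_batch) * epochs           # 31 * 20 = 620
warmup_steps = int(0.1 * script_total)                          # 62
trainer_total = math.ceil(examples / effective_batch) * epochs  # 32 * 20 = 640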
[2025-07-04 03:56:42,602][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
[2025-07-04 03:56:42,641][transformers.trainer][INFO] - Using auto half precision backend
[2025-07-04 03:56:42,642][transformers.trainer][WARNING] - No label_names provided for model class `PeftModelForSequenceClassification`. Since `PeftModel` hides base models input arguments, if label_names is not given, label_names can't be set automatically within `Trainer`. Note that empty label_names list will be used instead.
[2025-07-04 03:56:42,644][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 03:56:42,663][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 03:56:42,663][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 03:56:42,663][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 03:57:04,119][transformers.trainer][INFO] - The following columns in the Training set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 03:57:04,173][transformers.trainer][INFO] - ***** Running training *****
[2025-07-04 03:57:04,173][transformers.trainer][INFO] - Num examples = 500
[2025-07-04 03:57:04,173][transformers.trainer][INFO] - Num Epochs = 20
[2025-07-04 03:57:04,173][transformers.trainer][INFO] - Instantaneous batch size per device = 8
[2025-07-04 03:57:04,173][transformers.trainer][INFO] - Total train batch size (w. parallel, distributed & accumulation) = 16
[2025-07-04 03:57:04,173][transformers.trainer][INFO] - Gradient Accumulation steps = 2
[2025-07-04 03:57:04,173][transformers.trainer][INFO] - Total optimization steps = 640
[2025-07-04 03:57:04,176][transformers.trainer][INFO] - Number of trainable parameters = 20,996,096
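The trainable-parameter count is consistent with rank-8 adapters on every linear projection of all 32 decoder layers plus the newly initialized score head; a back-of-the-envelope check using the dimensions from the LlamaConfig above (the decomposition itself is an assumption):

r, layers, num_labels = 8, 32, 6
hidden, inter = 4096, 14336
kv_dim = 8 * 128                                   # num_key_value_heads * head_dim
proj_shapes = [
    (hidden, hidden),  # q_proj
    (hidden, kv_dim),  # k_proj
    (hidden, kv_dim),  # v_proj
    (hidden, hidden),  # o_proj
    (hidden, inter),   # gate_proj
    (hidden, inter),   # up_proj
    (inter, hidden),   # down_proj
]
# LoRA adds r * (d_in + d_out) parameters per adapted linear layer.
lora_params = layers * sum(r * (d_in + d_out) for d_in, d_out in proj_shapes)
score_head = hidden * num_labels                   # score.weight, trained from scratch
print(lora_params + score_head)                    # 20,971,520 + 24,576 = 20,996,096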
[2025-07-04 03:57:04,254][transformers.models.llama.modeling_llama][WARNING] - `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`.
[2025-07-04 04:01:30,816][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:01:30,820][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:01:30,821][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:01:30,821][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:01:51,881][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-32
[2025-07-04 04:01:52,541][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:01:52,542][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:06:19,104][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:06:19,108][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:06:19,108][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:06:19,108][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:06:40,142][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-64
[2025-07-04 04:06:40,479][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:06:40,479][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:11:06,869][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:11:06,873][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:11:06,873][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:11:06,873][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:11:27,904][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-96
[2025-07-04 04:11:28,235][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:11:28,236][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:11:28,652][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-32] due to args.save_total_limit
[2025-07-04 04:11:28,666][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-64] due to args.save_total_limit
[2025-07-04 04:15:54,552][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:15:54,558][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:15:54,558][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:15:54,558][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:16:15,604][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-128
[2025-07-04 04:16:16,005][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:16:16,006][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:16:16,353][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-96] due to args.save_total_limit
[2025-07-04 04:20:42,438][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:20:42,442][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:20:42,442][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:20:42,442][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:21:03,469][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-160
[2025-07-04 04:21:03,810][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:21:03,811][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:21:04,238][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-128] due to args.save_total_limit
[2025-07-04 04:25:30,205][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:25:30,209][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:25:30,209][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:25:30,209][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:25:51,251][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-192
[2025-07-04 04:25:51,594][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:25:51,594][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:25:51,943][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-160] due to args.save_total_limit
[2025-07-04 04:30:17,840][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:30:17,844][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:30:17,844][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:30:17,844][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:30:38,874][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-224
[2025-07-04 04:30:39,258][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:30:39,258][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:30:39,637][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-192] due to args.save_total_limit
[2025-07-04 04:35:05,727][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:35:05,731][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:35:05,731][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:35:05,731][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:35:26,777][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-256
[2025-07-04 04:35:27,107][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:35:27,107][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:39:53,505][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:39:53,509][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:39:53,509][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:39:53,509][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:40:14,520][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-288
[2025-07-04 04:40:14,857][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:40:14,857][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:40:15,234][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-256] due to args.save_total_limit
[2025-07-04 04:44:41,198][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:44:41,202][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:44:41,203][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:44:41,203][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:45:02,223][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-320
[2025-07-04 04:45:02,702][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:45:02,703][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:45:03,101][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-288] due to args.save_total_limit
[2025-07-04 04:49:29,181][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:49:29,185][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:49:29,185][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:49:29,185][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:49:50,225][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-352
[2025-07-04 04:49:50,588][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:49:50,589][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:49:50,937][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-320] due to args.save_total_limit
[2025-07-04 04:54:16,952][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:54:16,956][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:54:16,956][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:54:16,956][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:54:37,987][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-384
[2025-07-04 04:54:38,297][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:54:38,298][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:54:38,628][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-352] due to args.save_total_limit
[2025-07-04 04:54:38,640][transformers.trainer][INFO] -
Training completed. Do not forget to share your model on huggingface.co/models =)
[2025-07-04 04:54:38,640][transformers.trainer][INFO] - Loading best model from /workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-224 (score: 0.5911708253358925).
[2025-07-04 04:54:38,769][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/03-56-18/results/checkpoint-384] due to args.save_total_limit
[2025-07-04 04:54:38,786][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:54:38,789][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:54:38,790][transformers.trainer][INFO] - Num examples = 132
[2025-07-04 04:54:38,790][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:54:59,824][__main__][INFO] - Training completed successfully.
[2025-07-04 04:54:59,824][__main__][INFO] - Running on Test
[2025-07-04 04:54:59,824][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades. If prompt, essay_year, essay_text, id_prompt, supporting_text, id, reference, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:54:59,828][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-04 04:54:59,828][transformers.trainer][INFO] - Num examples = 138
[2025-07-04 04:54:59,828][transformers.trainer][INFO] - Batch size = 4
[2025-07-04 04:55:22,006][__main__][INFO] - Test metrics: {'eval_loss': 1.1187604665756226, 'eval_model_preparation_time': 0.0134, 'eval_accuracy': 0.5579710144927537, 'eval_RMSE': 30.072376462244492, 'eval_QWK': 0.5690952762209768, 'eval_HDIV': 0.007246376811594235, 'eval_Macro_F1': 0.3562924218759459, 'eval_Micro_F1': 0.5579710144927537, 'eval_Weighted_F1': 0.5532273763594029, 'eval_TP_0': 0, 'eval_TN_0': 137, 'eval_FP_0': 0, 'eval_FN_0': 1, 'eval_TP_1': 0, 'eval_TN_1': 138, 'eval_FP_1': 0, 'eval_FN_1': 0, 'eval_TP_2': 8, 'eval_TN_2': 108, 'eval_FP_2': 20, 'eval_FN_2': 2, 'eval_TP_3': 31, 'eval_TN_3': 61, 'eval_FP_3': 11, 'eval_FN_3': 35, 'eval_TP_4': 37, 'eval_TN_4': 60, 'eval_FP_4': 27, 'eval_FN_4': 14, 'eval_TP_5': 1, 'eval_TN_5': 125, 'eval_FP_5': 3, 'eval_FN_5': 9, 'eval_runtime': 22.165, 'eval_samples_per_second': 6.226, 'eval_steps_per_second': 1.579, 'epoch': 12.0}
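The headline numbers can be sanity-checked against the per-class confusion counts in the same line: accuracy is the sum of the eval_TP_* entries over the 138 test essays, and micro-F1 coincides with accuracy in single-label multiclass classification, which is why the two values match. A quick check (not part of the training script):

tp = [0, 0, 8, 31, 37, 1]   # eval_TP_0 .. eval_TP_5 from the log line above
accuracy = sum(tp) / 138
print(accuracy)             # 0.5579710144927537, matching eval_accuracy and eval_Micro_F1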
[2025-07-04 04:55:22,007][transformers.trainer][INFO] - Saving model checkpoint to ./results/best_model
[2025-07-04 04:55:22,315][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:55:22,315][transformers.configuration_utils][INFO] - Model config LlamaConfig { … }
[2025-07-04 04:55:22,470][transformers.tokenization_utils_base][INFO] - tokenizer config file saved in ./results/best_model/tokenizer_config.json
[2025-07-04 04:55:22,470][transformers.tokenization_utils_base][INFO] - Special tokens file saved in ./results/best_model/special_tokens_map.json
[2025-07-04 04:55:22,598][__main__][INFO] - Model and tokenizer saved to ./results/best_model
[2025-07-04 04:55:22,630][__main__][INFO] - Fine Tuning Finished.
[2025-07-04 04:55:23,139][__main__][INFO] - Total emissions: 0.0452 kg CO2eq
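Per the bootstrap block in the configuration, the reported metrics are presumably accompanied by confidence intervals computed over 10,000 resamples with seed 42. A sketch of a percentile bootstrap for QWK, assuming sklearn's quadratically weighted kappa (an illustrative implementation, not the repository's code):

import numpy as np
from sklearn.metrics import cohen_kappa_score

def bootstrap_qwk_ci(y_true, y_pred, n_bootstrap=10_000, seed=42, alpha=0.05):
    # y_true, y_pred: numpy arrays of gold and predicted grade indices.
    rng = np.random.default_rng(seed)
    n = len(y_true)
    scores = np.empty(n_bootstrap)
    for i in range(n_bootstrap):
        idx = rng.integers(0, n, size=n)  # resample essay indices with replacement
        scores[i] = cohen_kappa_score(y_true[idx], y_pred[idx], weights="quadratic")
    return np.quantile(scores, [alpha / 2, 1 - alpha / 2])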