LS-W4-T5-SM-Emotions
Model Description
This is a fine-tuned version of the google/flan-t5-small model, trained for the specific task of emotion classification from text.
The model takes a text input and generates a single word indicating the primary emotion.
It was fine-tuned on the dair-ai/emotion dataset.
- Developer: Linkspreed x Web4 AI
- Base Model: google/flan-t5-small
- Model Type: Encoder-Decoder (Text-to-Text)
Intended Use
This model is intended for research and educational purposes.
It can be used to classify the emotion expressed in short texts, such as social media posts, comments, or short sentences, into one of six categories:
- joy
- sadness
- anger
- love
- fear
- surprise
Training Data
The model was fine-tuned on the dair-ai/emotion dataset, which contains 20,000 English social media messages.
- Training set: 16,000 examples
- Validation set: 2,000 examples
- Test set: 2,000 examples
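As a minimal sketch of how a raw dataset row can be turned into a text2text training pair: the `"sentiment: "` prefix below matches the inference example in this card, and the integer-to-name label order is assumed to follow the dataset's `ClassLabel` definition (worth verifying against the dataset card). The sample row is hypothetical.

```python
# Assumed label order of dair-ai/emotion's ClassLabel feature.
ID2LABEL = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def to_text2text(example):
    """Convert a raw {text, label} row into an input/target text pair."""
    return {
        "input_text": "sentiment: " + example["text"],
        "target_text": ID2LABEL[example["label"]],
    }

row = {"text": "i feel utterly alone", "label": 0}  # hypothetical row
pair = to_text2text(row)
```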
⚠️ Note:
The training data is highly imbalanced, with joy and sadness being the most frequent emotions.
This may lead to a bias where the model over-predicts these two classes and performs poorly on the less frequent ones.
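One common mitigation for such imbalance is inverse-frequency class weighting. The helper below is a generic sketch (not part of this model's training recipe); the toy label list is hypothetical, standing in for the real training-split labels.

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights, normalized so a uniform class gets weight 1."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {label: n / (k * count) for label, count in counts.items()}

# Hypothetical toy labels; real counts would come from the training split.
weights = class_weights(["joy", "joy", "joy", "sadness", "sadness", "anger"])
```

Rare classes receive weights above 1 and frequent classes below 1, so a weighted loss pays more attention to under-represented emotions.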
Training Details
Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
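For intuition, the hyperparameters above imply 16,000 / 8 = 2,000 optimizer steps per epoch, or 6,000 steps over 3 epochs. A minimal sketch of the `linear` learning-rate schedule (assuming no warmup, the Transformers default):

```python
def linear_lr(step, total_steps, base_lr=5e-5):
    """Linear decay from base_lr at step 0 to 0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# With train_batch_size=8 over 16,000 examples and 3 epochs: 6,000 total steps.
lr_at_start = linear_lr(0, 6000)
lr_halfway = linear_lr(3000, 6000)
```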
Training Setup
- Framework: PyTorch & Hugging Face Transformers
- Hardware: NVIDIA T4 GPU
How to Use
You can use this model directly with the Hugging Face pipeline for quick inference:

```python
from transformers import pipeline

model_id = "Web4/LS-W4-T5-SM-Emotions"
analyst = pipeline("text2text-generation", model=model_id)

# Prefix the input with "sentiment: ", as in the fine-tuning format.
text_to_analyze = "sentiment: I am so happy about my new job!"
result = analyst(text_to_analyze)
print(result)
# Example output:
# [{'generated_text': 'joy'}]
```
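Because the model generates free text rather than picking from a fixed label set, it can help to validate its output downstream. A small post-processing sketch (the fallback-to-`None` behavior is a suggestion, not part of the model):

```python
# The six emotion labels this model was fine-tuned to produce.
EMOTIONS = {"joy", "sadness", "anger", "love", "fear", "surprise"}

def parse_emotion(generated_text):
    """Normalize model output; return None if it is not one of the six labels."""
    label = generated_text.strip().lower()
    return label if label in EMOTIONS else None
```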