mervp committed on
Commit 08ff42f · verified · 1 Parent(s): cd0f567

Update README.md

Files changed (1)
1. README.md (+60 -3)
README.md CHANGED
@@ -1,3 +1,60 @@
- ---
- license: apache-2.0
- ---
+ ---
+ language: en
+ license: apache-2.0
+ tags:
+ - sentiment-analysis
+ - text-classification
+ - bert
+ - transformers
+ - news
+ - reviews
+ ---
+
+ # Sentify-BERT: Fine-tuned BERT for Sentiment Classification (Positive, Neutral, Negative)
+
+ **Sentify-BERT** is a BERT-based model fine-tuned for **sentiment classification of sentences** into three categories: **Positive**, **Negative**, and **Neutral**.
+
+ The model was trained on a **large and diverse dataset of news articles** spanning a wide range of categories. It achieves **over 86% accuracy** and handles sentence-level sentiment well, even in nuanced or mixed-context cases.
+
+ ---
+
+ ## 🔍 Model Highlights
+
+ - **Base model**: `bert-base-uncased`
+ - **Fine-tuned for**: Sentiment classification (3-class)
+ - **Accuracy**: > 86%
+ - **Classes**: Positive, Neutral, Negative
+ - **Language**: English
+ - **Format**: `safetensors`
+ - **Tokenizer**: Compatible with `bert-base-uncased`
+
+ ---
+
+ ## 💼 Applications
+
+ This model is well suited for:
+
+ - **News article sentiment analysis**
+ - **Amazon product review analysis**
+ - **Customer support or service feedback systems**
+ - **General-purpose opinion mining**
+
+ ---
+
+ ## 🚀 Usage Example
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ import torch
+
+ model = AutoModelForSequenceClassification.from_pretrained("your-username/Sentify-BERT")
+ tokenizer = AutoTokenizer.from_pretrained("your-username/Sentify-BERT")
+
+ text = "The government’s response to the crisis was surprisingly effective."
+ inputs = tokenizer(text, return_tensors="pt")
+
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ predicted_class = torch.argmax(logits, dim=1).item()
+ print(["Negative", "Neutral", "Positive"][predicted_class])
+ ```
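
The committed example hardcodes the `["Negative", "Neutral", "Positive"]` order. A minimal follow-on sketch, not taken from the committed README and reusing its placeholder repo id `your-username/Sentify-BERT`, reads the label mapping from `model.config.id2label` instead and reports softmax confidence scores; if the uploaded config does not define the mapping, `id2label` falls back to generic `LABEL_0`-style names.

```python
# Sketch only: reuses the README's placeholder repo id; replace it with the
# actual model id before running.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

repo_id = "your-username/Sentify-BERT"  # placeholder from the README above
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Label order as stored in the model config (falls back to LABEL_0/1/2
# if id2label was not set when the model was uploaded).
print(model.config.id2label)

text = "The new phone's battery life is disappointing."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Report a confidence score per class instead of only the argmax.
for idx, p in enumerate(probs):
    print(f"{model.config.id2label[idx]}: {p.item():.3f}")
```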
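
For the batch-style use cases listed under "Applications" (news feeds, review streams), the generic `text-classification` pipeline from `transformers` is a reasonable fit. This is likewise a hedged sketch with the placeholder repo id; the label strings it prints depend on how `id2label` was set in the uploaded config.

```python
# Sketch of batch scoring with the generic text-classification pipeline;
# the repo id is the README's placeholder and label names depend on the config.
from transformers import pipeline

classifier = pipeline("text-classification", model="your-username/Sentify-BERT")

headlines = [
    "Stocks rally as inflation cools faster than expected.",
    "The product arrived broken and support never replied.",
    "The committee will meet again next Tuesday.",
]

# Each result is a dict such as {"label": "...", "score": 0.97}.
for result in classifier(headlines):
    print(result)
```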