---
license: mit
---

# Model
Multi-language sentiment classification model built on Microsoft's [DeBERTa-v3 base model](https://huggingface.co/microsoft/deberta-v3-base). The following datasets were used to train the model:
- [tyqiangz/multilingual-sentiments](https://huggingface.co/datasets/tyqiangz/multilingual-sentiments)
- [cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual)
- ABSC amazon review
- SST2

# Evaluation and comparison with GPT-4o

| Dataset         | Model | F1     | Accuracy |
|-----------------|-------|--------|----------|
| **sst2**        | Our   | 0.6161 | 0.9231   |
|                 | GPT-4 | 0.6113 | 0.8605   |
| **sent-eng**    | Our   | 0.6289 | 0.6470   |
|                 | GPT-4 | 0.4611 | 0.5870   |
| **sent-twi**    | Our   | 0.3368 | 0.3488   |
|                 | GPT-4 | 0.5049 | 0.5385   |
| **mixed**       | Our   | 0.5644 | 0.7786   |
|                 | GPT-4 | 0.5336 | 0.6863   |
| **absc-laptop** | Our   | 0.5513 | 0.6682   |
|                 | GPT-4 | 0.6679 | 0.7642   |
| **absc-rest**   | Our   | 0.6149 | 0.7726   |
|                 | GPT-4 | 0.7057 | 0.8385   |
| **stanford**    | Our   | 0.8352 | 0.8353   |
|                 | GPT-4 | 0.8045 | 0.8032   |
| **amazon-var**  | Our   | 0.6432 | 0.9647   |
|                 | GPT-4 | 0.0000 | 0.9450   |
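
The scores above are standard multi-class classification metrics. A minimal pure-Python sketch of how accuracy and F1 can be computed — assuming macro-averaged F1, which the card does not state, and using illustrative labels rather than any real evaluation data:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the gold label.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    # Unweighted mean of per-class F1 scores over all observed classes.
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy 3-class example (e.g. 0=negative, 1=neutral, 2=positive)
y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2]
print(round(accuracy(y_true, y_pred), 4))  # 0.8
print(round(macro_f1(y_true, y_pred), 4))  # 0.8222
```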

# Reference
TBA