Update README.md
README.md CHANGED

````diff
@@ -28,7 +28,8 @@ Unlike the first generation of superthoughts lite, this model is a MoE (Mixture
 ```
 You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process\n</think>\n(Your final output).
 ```
-⚠️ Due to the nature of an experimental model, it may fall into reasoning loops
+⚠️ Due to the nature of an experimental model, it may fall into reasoning loops, it was trained on SFT only and GRPO/RL was not yet done, so we list it as experimental.
+users are responsible for all outputs from this model.
 This experimental model is more of a proof-of-concept for now. It fully works and it has some pretty nice performance, for having less than 2 billion parameters activated per token.
 
 **If you have any questions, feel free to open a "New Discussion".**
````
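The system prompt in the README asks the model to wrap its reasoning in `<think>...</think>` before the final answer. A minimal sketch of parsing that format is shown below; the helper name and regex are illustrative assumptions, not part of the model card or any official API:

```python
import re

def split_response(text: str):
    """Split a Superthoughts-style response into (reasoning, final answer).

    The <think>...</think> convention comes from the system prompt in the
    README above; this parser is a hypothetical sketch, not an official API.
    """
    match = re.search(r"<think>\s*(.*?)\s*</think>\s*(.*)", text, re.DOTALL)
    if not match:
        # No think block found (e.g. a reasoning loop was cut off):
        # treat the whole text as the final answer.
        return "", text.strip()
    return match.group(1), match.group(2).strip()

reasoning, answer = split_response(
    "<think>\nThe user asked 2+2; that is 4.\n</think>\n4"
)
```

Since the card warns the experimental model may fall into reasoning loops, a fallback branch for responses with no closing `</think>` tag is worth keeping in any real parser.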