---
library_name: transformers
license: llama3.2
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
tags:
- code
- math
- cot
- conversational
- moe
- Superthoughts
---
## ⚠️THIS MODEL IS EXPERIMENTAL!! Full release soon!

More than two months after the release of Superthoughts Lite v1, we are finally releasing the new version: **v2**.
Unlike the first generation of Superthoughts Lite, this model is an MoE (Mixture of Experts) built from 4 specially fine-tuned experts based on Llama-3.2-1B models.
# Information
- In GGUF Q8_0, the model runs at ~8 tokens per second on a Snapdragon 8 Gen 2 with 12 GB of RAM, which is faster than Pinkstack/PARM-V2-QwQ-Qwen-2.5-o1-3B-GGUF (which runs at ~5 tokens per second); a GGUF usage sketch is included further down.
- The chat expert was fine-tuned on 23 different languages for 2 epochs, but the model should still only be used for English-to-English generation.
- This model has a total of 3.91B parameters, with 2 experts active for each token. There is an expert for math, code, general conversations, and medical situations.
- Long context: the model supports up to **131,072** input tokens, and can generate up to **16,384** tokens.
- Unhinged at times: as this is an experimental version, it is extremely sensitive to prompts and can be incredibly unhinged at times. Please use a temperature of around 0.85.
- To enable proper reasoning, set this as the system prompt (a usage sketch follows below):
```
You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output).
```
And/or start the model output with a `<think>` XML tag; ideally, do both.
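Below is a minimal usage sketch with the `transformers` library that puts the points above together: the reasoning system prompt, a temperature of about 0.85, and pre-filling the assistant turn with `<think>`. The repo id `Pinkstack/Superthoughts-lite-v2` is a placeholder assumption; swap in the actual model id once the model is released.
```python
# Minimal sketch, assuming a placeholder repo id; adjust to the real model id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pinkstack/Superthoughts-lite-v2"  # placeholder, not confirmed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

system_prompt = (
    "You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. "
    "always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output)."
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is 17 * 24?"},
]

# Build the prompt and pre-fill the assistant turn with <think> so reasoning starts immediately.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
) + "<think>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=1024,   # the model can generate up to 16,384 tokens
    do_sample=True,
    temperature=0.85,      # recommended temperature for this experimental release
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```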
⚠️ Due to its experimental nature, the model may fall into reasoning loops. It was trained with SFT only; GRPO/RL has not yet been done, which is why we list it as experimental.
Users are responsible for all outputs from this model.
This experimental model is more of a proof-of-concept for now. It fully works and has some pretty nice performance for having fewer than 2 billion parameters active per token.
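For the GGUF Q8_0 build mentioned above, here is a hedged sketch using `llama-cpp-python`. The repo id and filename pattern are placeholder assumptions; point them at the actual GGUF upload.
```python
# Sketch only: repo_id and filename are placeholders, not confirmed names.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Pinkstack/Superthoughts-lite-v2-GGUF",  # placeholder
    filename="*Q8_0.gguf",                           # placeholder pattern
    n_ctx=8192,  # raise toward 131072 if you have the memory for it
)

out = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": (
                "You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. "
                "always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output)."
            ),
        },
        {"role": "user", "content": "Explain what a mixture-of-experts model is in two sentences."},
    ],
    temperature=0.85,
    max_tokens=1024,
)
print(out["choices"][0]["message"]["content"])
```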
# Examples
Note: in our local tests, it runs at about 55 tokens per second.



**If you have any questions, feel free to open a "New Discussion".**
Fine-tuning was done using Unsloth; the MoE was created using MergeKit.