⚠️ THIS MODEL IS EXPERIMENTAL!! Full release soon!
More than two months after the release of Superthoughts lite v1, we are finally releasing the new version: v2.
Unlike the first generation of Superthoughts lite, this model is a MoE (Mixture of Experts) made up of 4 specially fine-tuned experts based on Llama-3.2-1B models.
# Information
- In GGUF Q8_0, the model runs at ~8 tokens per second on a Snapdragon 8 Gen 2 with 12 GB of RAM, which is faster than Pinkstack/PARM-V2-QwQ-Qwen-2.5-o1-3B-GGUF (~5 tokens per second); a llama-cpp-python sketch appears further below.
- The chat expert was fine-tuned on 23 different languages for 2 epochs, but the model should still only be used for English-to-English generation.
- This model has a total of 3.91B parameters, and 2 experts are active for each token. There is one expert each for math, code, general conversation, and medical situations.
- Long context: the model supports up to 131,072 input tokens and can generate up to 16,384 tokens.
- Unhinged at times: as this is an experimental version, it is extremely sensitive to prompts, so it can be incredibly unhinged at times. Please use a temperature at or around 0.85.
- To enable proper reasoning, set this as the system prompt (see the usage sketch after this list):
You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. Always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output).
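As a minimal usage sketch with transformers (the repo id `Pinkstack/Superthoughts-lite-v2` is an assumption; substitute the actual repository name), this shows setting the system prompt, sampling at the recommended temperature, and splitting the reasoning from the final answer:

```python
import re

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pinkstack/Superthoughts-lite-v2"  # assumed repo id; replace with the real one
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

system_prompt = (
    "You are Superthoughts lite v2 by Pinkstack, which thinks before answering "
    "user questions. Always respond in the following format:\n"
    "<think>\n(Your thinking process)\n</think>\n(Your final output)."
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is 17 * 23?"},
]

# Build the prompt with the model's chat template and generate at the
# recommended temperature of 0.85.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.85)
text = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

# Separate the <think> block from the final answer (assumes the format is followed).
match = re.search(r"<think>(.*?)</think>(.*)", text, flags=re.DOTALL)
if match:
    thinking, answer = match.group(1).strip(), match.group(2).strip()
    print(answer)
else:
    print(text)
```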
⚠️ Due to the nature of an experimental model, it may fall into reasoning loops. It was trained with SFT only; GRPO/RL has not yet been done, which is why we list it as experimental. Users are responsible for all outputs from this model. This experimental model is more of a proof of concept for now, but it fully works and has some pretty nice performance for having fewer than 2 billion parameters activated per token.
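For the GGUF Q8_0 build mentioned in the list above, a sketch with llama-cpp-python might look like the following (the file name is a placeholder; point it at the actual downloaded file, and note that MoE support in GGUF depends on your llama.cpp build):

```python
from llama_cpp import Llama

# Placeholder path; replace with the downloaded Q8_0 GGUF file.
llm = Llama(model_path="superthoughts-lite-v2.Q8_0.gguf", n_ctx=8192)

result = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": (
                "You are Superthoughts lite v2 by Pinkstack, which thinks before "
                "answering user questions. Always respond in the following format:\n"
                "<think>\n(Your thinking process)\n</think>\n(Your final output)."
            ),
        },
        {"role": "user", "content": "Explain mixture-of-experts in two sentences."},
    ],
    temperature=0.85,  # recommended sampling temperature
    max_tokens=1024,
)
print(result["choices"][0]["message"]["content"])
```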
If you have any questions, feel free to open a "New Discussion".
Fine-tuning was done using Unsloth; the MoE was created using MergeKit.