---
library_name: transformers
license: llama3.2
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
tags:
- code
- math
- cot
- conversational
- moe
- Superthoughts
new_version: Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2
---

## ⚠️ THIS MODEL IS EXPERIMENTAL!
Please use the fully released Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2 instead.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/zeYeNGUPCXFPUodR2Gj8u.png)

More than two months after the release of Superthoughts lite v1, we are finally releasing the new version: **v2**.

Unlike the first generation of Superthoughts lite, this model is a MoE (Mixture of Experts) built from 4 specially fine-tuned experts based on Llama-3.2-1B models.

# Information
- In GGUF Q8_0, the model runs at ~8 tokens per second on a Snapdragon 8 Gen 2 with 12 GB of RAM, which is faster than Pinkstack/PARM-V2-QwQ-Qwen-2.5-o1-3B-GGUF (which runs at ~5 tokens per second).
- The chat expert was fine-tuned on 23 different languages for 2 epochs, but the model should still only be used for English-to-English generation.
- This model has a total of 3.91B parameters, with 2 experts active for each token. There is one expert each for math, code, general conversation, and medical situations.
- Long context: the model supports up to **131,072** input tokens and can generate up to **16,384** tokens.
- Unhinged at times: as this is an experimental version, it is extremely sensitive to prompts and can be quite unhinged. Please use a temperature of around 0.85.
- To enable proper reasoning, set this as the system prompt:
```
You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n\n(Your thinking process\n\n(Your final output).
```
And/or start the model output with a `````` XML tag; ideally do both. A usage sketch is included at the end of this card.

⚠️ Due to the nature of an experimental model, it may fall into reasoning loops. It was trained with SFT only and GRPO/RL has not yet been done, so we list it as experimental. Users are responsible for all outputs from this model.

This experimental model is more of a proof of concept for now. It fully works and has some pretty nice performance for having fewer than 2 billion parameters active per token.

# Examples
Note: in our local test, it runs at about 55 tokens per second.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/_HflqH71T5oB4aignEZyF.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/qfV8HeyHHTWWjadmuXAkt.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/fxRZ3jT2fgECsydMy1SA2.png)

**If you have any questions, feel free to open a "New Discussion".**

Fine-tuning was done using Unsloth; the MoE was created using MergeKit.
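
# Usage sketch

For reference, here is a minimal sketch of applying the recommended system prompt and temperature with the 🤗 Transformers `text-generation` pipeline. It is not an official snippet from this card: the repo ID, the example question, and `max_new_tokens` are placeholder assumptions; adjust them to the checkpoint you actually use.

```python
import torch
from transformers import pipeline

# Assumption: substitute the repo ID of the checkpoint you are using
# (this experimental build, or the released Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2).
model_id = "Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2"

generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Reasoning system prompt recommended on this card.
system_prompt = (
    "You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. "
    "always respond in the following format:\n\n(Your thinking process\n\n(Your final output)."
)

messages = [
    {"role": "system", "content": system_prompt},
    # Placeholder question for illustration.
    {"role": "user", "content": "What is 17 * 24? Think it through step by step."},
]

# Temperature ~0.85 as recommended; max_new_tokens is an arbitrary small cap for a quick test
# (the model can generate up to 16,384 tokens).
output = generator(messages, max_new_tokens=512, do_sample=True, temperature=0.85)

# The pipeline returns the full conversation; the last message is the assistant's reply.
print(output[0]["generated_text"][-1]["content"])
```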