Mintaka-Qwen3-1.6B-V3.1-GGUF
Mintaka-Qwen3-1.6B-V3.1 is a high-efficiency, science-focused reasoning model based on Qwen3-1.6B and trained on 10,000 synthetic reasoning traces from DeepSeek v3.1. It is optimized for random-event simulation, logical-problem analysis, and structured scientific reasoning, balancing symbolic precision with lightweight deployment for researchers, educators, and developers who need efficient reasoning under constrained compute.
Model Files
File Name | Quant Type | File Size |
---|---|---|
Mintaka-Qwen3-1.6B-V3.1.BF16.gguf | BF16 | 3.45 GB |
Mintaka-Qwen3-1.6B-V3.1.F16.gguf | F16 | 3.45 GB |
Mintaka-Qwen3-1.6B-V3.1.F32.gguf | F32 | 6.89 GB |
Mintaka-Qwen3-1.6B-V3.1.Q2_K.gguf | Q2_K | 778 MB |
Mintaka-Qwen3-1.6B-V3.1.Q3_K_L.gguf | Q3_K_L | 1 GB |
Mintaka-Qwen3-1.6B-V3.1.Q3_K_M.gguf | Q3_K_M | 940 MB |
Mintaka-Qwen3-1.6B-V3.1.Q3_K_S.gguf | Q3_K_S | 867 MB |
Mintaka-Qwen3-1.6B-V3.1.Q4_0.gguf | Q4_0 | 1.05 GB |
Mintaka-Qwen3-1.6B-V3.1.Q4_1.gguf | Q4_1 | 1.14 GB |
Mintaka-Qwen3-1.6B-V3.1.Q4_K.gguf | Q4_K | 1.11 GB |
Mintaka-Qwen3-1.6B-V3.1.Q4_K_M.gguf | Q4_K_M | 1.11 GB |
Mintaka-Qwen3-1.6B-V3.1.Q4_K_S.gguf | Q4_K_S | 1.06 GB |
Mintaka-Qwen3-1.6B-V3.1.Q5_0.gguf | Q5_0 | 1.23 GB |
Mintaka-Qwen3-1.6B-V3.1.Q5_1.gguf | Q5_1 | 1.32 GB |
Mintaka-Qwen3-1.6B-V3.1.Q5_K.gguf | Q5_K | 1.26 GB |
Mintaka-Qwen3-1.6B-V3.1.Q5_K_M.gguf | Q5_K_M | 1.26 GB |
Mintaka-Qwen3-1.6B-V3.1.Q5_K_S.gguf | Q5_K_S | 1.23 GB |
Mintaka-Qwen3-1.6B-V3.1.Q6_K.gguf | Q6_K | 1.42 GB |
Mintaka-Qwen3-1.6B-V3.1.Q8_0.gguf | Q8_0 | 1.83 GB |
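As a rough way to compare the quants in the table above, you can estimate bits per weight from the decimal-GB file size. The sketch below assumes roughly 1.6e9 parameters (per the model name); real figures differ somewhat because GGUF files also carry metadata and keep some tensors at higher precision.

```python
# Approximate bits-per-weight for selected quants from the table above.
# The 1.6e9 parameter count is an assumption taken from the model name;
# GGUF overhead and mixed-precision tensors make these estimates rough.
SIZES_GB = {
    "Q2_K": 0.778,
    "Q4_K_M": 1.11,
    "Q8_0": 1.83,
    "F16": 3.45,
}

def bits_per_weight(size_gb: float, n_params: float = 1.6e9) -> float:
    """Approximate bits per weight from a decimal-GB file size."""
    return size_gb * 1e9 * 8 / n_params

for name, size in SIZES_GB.items():
    print(f"{name}: ~{bits_per_weight(size):.2f} bits/weight")
```

This is only a sizing heuristic; it says nothing about output quality, which depends on the quantization scheme itself.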
Quants Usage
(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similar-sized non-IQ quants.)
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):