KaraKaraWitch committed (verified)
Commit 7afee7f · Parent: 4b95cf5

Update README.md

Files changed (1):
  1. README.md +10 -5
README.md CHANGED
@@ -3,12 +3,17 @@ license: apache-2.0
 library_name: transformers
 ---
 
-# Qwerky-QwQ-32B
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/OufWyNMKYRozfC8j8S-M8.png)
 
-The following is a model converted from Qwen 32B QWQ, to the RWKV based architecture.
-For existing details of the process from our previous release, find it [here]: https://huggingface.co/recursal/QRWKV6-32B-Instruct-Preview-v0.1
+Linear models offer a promising approach to significantly reducing computational costs at scale, particularly for large context lengths, enabling a >1000x improvement in inference costs. This opens the door to o1-style inference-time thinking and wider AI accessibility.
+
+As demonstrated with our Qwerky-72B-Preview and prior models such as QRWKV6-32B Instruct Preview, we have successfully converted Qwen 2.5 QwQ 32B into an RWKV variant without pretraining the base model or retraining it from scratch. This lets us test and validate RWKV's more efficient linear attention on a much smaller budget. Since our preview, we have continued to refine our technique and have improved the model over the preview iteration.
+
+As with our previous models, the model's inherent knowledge and dataset training are inherited from its "parent" model. Consequently, unlike previous RWKV models trained on more than 100 languages, the QRWKV model is limited to the roughly 30 languages supported by the Qwen line of models.
 
-Benchmarks for Qwerky-QwQ-32B and the Qwerky-72B models
+You can find details of the conversion process in our previous release, [here](https://huggingface.co/recursal/QRWKV6-32B-Instruct-Preview-v0.1).
+
+Benchmarks for both the Qwerky-QwQ-32B and Qwerky-72B models are as follows:
 
 | Tasks | Metric | Qwerky-QwQ-32B | Qwen/QwQ-32B | Qwerky-72B | Qwen2.5-72B-Instruct |
 |:---:|:---:|:---:|:---:|:---:|:---:|
@@ -21,4 +26,4 @@ Benchmarks for Qwerky-QwQ-32B and the Qwerky-72B models
 | winogrande | acc | **0.7324** | 0.7048 | **0.7956** | 0.7632 |
 | mmlu | acc | 0.7431 | **0.7985** | 0.7746 | **0.8338** |
 
-> All benchmark's besides MMLU are 0 n-shot, and is version 1, MMLU is version 2
+> *Note: All benchmarks except MMLU are 0-shot and Version 1. For MMLU, it's Version 2.*
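
Since the updated card keeps `library_name: transformers`, a minimal usage sketch may be useful alongside the diff. Note the assumptions: the repo id `recursal/Qwerky-QwQ-32B` and the `trust_remote_code=True` flag are guesses inferred from the card title and the linked preview release, not details confirmed by this commit.

```python
# Minimal sketch (not the authors' reference code) for loading the
# converted model with Hugging Face transformers.
# ASSUMPTIONS: the repo id and the need for trust_remote_code are guesses;
# custom RWKV-variant layers usually ship as remote code on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "recursal/Qwerky-QwQ-32B"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # reuse the dtype stored in the checkpoint
    device_map="auto",       # shard the 32B weights across available GPUs
    trust_remote_code=True,  # load any custom RWKV-variant modeling code
)

# Linear attention keeps a fixed-size recurrent state rather than a KV cache
# that grows with context length, which is the source of the large-context
# inference savings described in the card.
inputs = tokenizer("Why do linear-attention models scale well?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```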