RichardErkhov committed on
Commit 104cb0f · verified · 1 parent: 9af73c6

uploaded readme

Files changed (1): README.md +36 -0
README.md ADDED
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

TinyLlama-NoPE-HeadScale8k - AWQ
- Model creator: https://huggingface.co/AntNLP/
- Original model: https://huggingface.co/AntNLP/TinyLlama-NoPE-HeadScale8k/

Original model description:
---
license: mit
---

# TinyLlama-NoPE-HeadScale8k

## Citation

```
@misc{wang2024length,
      title={Length Generalization of Causal Transformers without Position Encoding},
      author={Jie Wang and Tao Ji and Yuanbin Wu and Hang Yan and Tao Gui and Qi Zhang and Xuanjing Huang and Xiaoling Wang},
      year={2024},
      eprint={2404.12224},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```