---
license: apache-2.0
language:
- en
- zh
- ja
- ko
- fr
- ar
- es
- pt
metrics:
- accuracy
base_model:
- BlinkDL/rwkv7-g1
pipeline_tag: text-generation
---
# rwkv7-0.4B-g1

This is an RWKV-7 model in the flash-linear-attention format.
## Model Details

### Model Description
- Developed by: Bo Peng, Yu Zhang, Songlin Yang, Ruichong Zhang
- Funded by: RWKV Project (Under LF AI & Data Foundation)
- Model type: RWKV7
- Language(s) (NLP): Multilingual (English, Chinese, Japanese, Korean, French, Arabic, Spanish, Portuguese)
- License: Apache-2.0
- Parameter count: 450M
- Tokenizer: RWKV World tokenizer
- Vocabulary size: 65,536
### Model Sources
- Repository: https://github.com/fla-org/flash-linear-attention ; https://github.com/BlinkDL/RWKV-LM
- Paper: https://arxiv.org/abs/2503.14456
## Uses

Install [flash-linear-attention](https://github.com/fla-org/flash-linear-attention) and the latest version of [transformers](https://github.com/huggingface/transformers) before using this model:

```bash
pip install git+https://github.com/fla-org/flash-linear-attention
pip install 'transformers>=4.48.0'
```
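
As an optional sanity check, you can confirm both dependencies import and that the installed `transformers` meets the version requirement (a minimal sketch; `fla` is the module name the flash-linear-attention package installs under):

```python
# Optional sanity check: verify both dependencies are importable
# and transformers is recent enough (>= 4.48.0).
import fla  # flash-linear-attention installs under the module name `fla`
import transformers

print(transformers.__version__)  # expect 4.48.0 or newer
```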
### Direct Use

You can use this model just like any other Hugging Face model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('fla-hub/rwkv7-0.4B-g1', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('fla-hub/rwkv7-0.4B-g1', trust_remote_code=True)
```
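
Once loaded, the model works with the standard `generate` API. A minimal generation sketch follows; the prompt and sampling settings are illustrative, not recommendations from the model authors:

```python
# Minimal text-generation sketch; prompt and decoding settings are illustrative.
prompt = "The Eiffel Tower is located in"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,   # cap on the number of generated tokens
    do_sample=True,      # sample instead of greedy decoding
    temperature=1.0,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```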
## Training Data

This model is trained on the World v3 dataset with a total of 3.119 trillion tokens.
## Training Hyperparameters

- Token Count: 1.1T + 2T + 2T
## FAQ

Q: safetensors metadata is none.

A: Upgrade `transformers` to >= 4.48.0: `pip install 'transformers>=4.48.0'`