---
license: apache-2.0
tags:
- StepLaw
- causal-lm
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: step2v2_0618_h960_ffnh9368_numh15_numl7_lr1.953e-03_bs32_ti122070_mlr1e-5
  results: []
---
# Wandb Model Name: step2v2_0618_h960_ffnh9368_numh15_numl7_lr1.953e-03_bs32_ti122070_mlr1e-5
This model is part of the StepLaw-N_214M-D_7.0B collection.
## Model Specifications

### Architecture
- Hidden size (H): 960
- Feed-forward network size (FFN): 9368
- Attention heads: 15
- Layers: 7
- Parameter count: 214M
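
As a quick sanity check, these dimensions can be read back from the loaded config. This is a minimal sketch that assumes the remote config exposes the usual attribute names (`hidden_size`, `intermediate_size`, `num_attention_heads`, `num_hidden_layers`); the custom StepLaw code may name them differently.

```python
from transformers import AutoConfig

# Assumed attribute names; the trust_remote_code config may differ.
config = AutoConfig.from_pretrained(
    "StepLaw/StepLaw-N_214M-D_7.0B-LR1.953e-03-BS65536", trust_remote_code=True
)
print(config.hidden_size)          # expected: 960
print(config.intermediate_size)    # expected: 9368
print(config.num_attention_heads)  # expected: 15
print(config.num_hidden_layers)    # expected: 7
```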
### Training Parameters
- Learning rate (lr): 1.953e-03
- Batch size (bs): 65536
- Training iterations: 122070
- Training tokens (D): 8.0B
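
The training token count follows directly from the batch size and iteration count; the check below assumes the batch size is measured in tokens per step.

```python
# Tokens seen during training = tokens per step x number of steps
tokens_per_step = 65536  # batch size, assumed to be in tokens
steps = 122070           # training iterations
total_tokens = tokens_per_step * steps
print(f"{total_tokens / 1e9:.1f}B tokens")  # ~8.0B
```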
## Model Description
StepLaw models are trained with various hyperparameter settings to enable research on scaling laws and hyperparameter optimization. This specific model was trained with learning rate 1.953e-03 and batch size 65536 for 122070 iterations, using a total of 8.0B training tokens.
## Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "StepLaw/StepLaw-N_214M-D_7.0B-LR1.953e-03-BS65536"

# Load the tokenizer and model (custom code from the repo is required)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Generate text (greedy decoding up to 100 tokens total)
inputs = tokenizer("A long time ago in a galaxy far, far away", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
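
For more varied outputs, sampling can be enabled. The snippet below is an illustrative sketch using standard `generate` arguments; the temperature and top-p values are assumptions, not settings tuned for this model.

```python
# Sampled generation (illustrative decoding settings, not recommendations)
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```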