zephyr_0.1

A DPO-trained model initialized from alignment-handbook/zephyr-7b-sft-full and trained on 10% of the HuggingFaceH4/ultrafeedback_binarized data, as in the "Weak-to-Strong Extrapolation Expedites Alignment" paper.
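The cited paper proposes extrapolating in weight space from a weaker (SFT) checkpoint past an aligned (DPO) checkpoint. A minimal sketch of that idea, using plain floats in place of model tensors; the function name and the `alpha` hyperparameter value are illustrative assumptions, not taken from the paper's released code:

```python
def extrapolate(sft_weights, dpo_weights, alpha=0.5):
    """Weight-space extrapolation sketch:
    theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft),
    i.e., continue moving in the SFT -> DPO direction."""
    return {
        name: w_dpo + alpha * (w_dpo - sft_weights[name])
        for name, w_dpo in dpo_weights.items()
    }

# Toy example with a single scalar "parameter":
sft = {"layer.weight": 1.0}
dpo = {"layer.weight": 1.5}
expo = extrapolate(sft, dpo, alpha=0.5)
# 1.5 + 0.5 * (1.5 - 1.0) = 1.75
```

In practice the same update would be applied per tensor across the two checkpoints' state dicts; `alpha = 0` recovers the DPO model unchanged.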

Model size: 7.24B params (Safetensors)
Tensor type: BF16