This is a LLaMA 13B model finetuned with LoRA (1 epoch) on the Stanford Alpaca training dataset and quantized to 4-bit.

Because this model contains the merged LLaMA weights, it is subject to their license restrictions.
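
The snippet below is a minimal usage sketch, not an official example. It assumes the checkpoint can be loaded through the standard `transformers` API and uses the Stanford Alpaca instruction prompt format; 4-bit GPTQ-style weights may instead require a GPTQ-aware loader (e.g. GPTQ-for-LLaMA or AutoGPTQ), so adapt the loading step to your setup.

```python
# Minimal sketch: load the model and run an Alpaca-style instruction prompt.
# Assumption: the 4-bit checkpoint is loadable via transformers' from_pretrained;
# if not, substitute a GPTQ-aware loader.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nealchandra/alpaca-13b-hf-int4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Standard prompt template from the Stanford Alpaca project.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what LoRA finetuning is in one sentence.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```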

