Llama3
This model was released on 2024-04-18 and added to Hugging Face Transformers on 2024-04-24.
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B"
pipeline = transformers.pipeline("text-generation", model=model_id, model_kwargs={"dtype": torch.bfloat16}, device_map="auto")
pipeline("Hey how are you doing today?")
Overview
The Llama3 model was proposed in Introducing Meta Llama 3: The most capable openly available LLM to date by the Meta AI team.
The abstract from the blog post is the following:
Today, we’re excited to share the first two models of the next generation of Llama, Meta Llama 3, available for broad use. This release features pretrained and instruction-fine-tuned language models with 8B and 70B parameters that can support a broad range of use cases. This next generation of Llama demonstrates state-of-the-art performance on a wide range of industry benchmarks and offers new capabilities, including improved reasoning. We believe these are the best open source models of their class, period. In support of our longstanding open approach, we’re putting Llama 3 in the hands of the community. We want to kickstart the next wave of innovation in AI across the stack—from applications to developer tools to evals to inference optimizations and more. We can’t wait to see what you build and look forward to your feedback.
Check out all Llama3 model checkpoints here. The original code of the authors can be found here.
Usage tips
The Llama3 models were trained using bfloat16, but the original inference uses float16. The checkpoints uploaded on the Hub use dtype = 'float16', which will be used by the AutoModel API to cast the checkpoints from torch.float32 to torch.float16.
The dtype of the online weights is mostly irrelevant unless you are using dtype="auto" when initializing a model with model = AutoModelForCausalLM.from_pretrained("path", dtype="auto"). The reason is that the model will first be downloaded (using the dtype of the checkpoints online), then cast to the default dtype of torch (torch.float32), and finally, if a dtype or torch_dtype is provided in the config, it will be used.
Training the model in float16 is not recommended and is known to produce nan; as such, the model should be trained in bfloat16.
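For illustration, a minimal sketch of the two loading behaviors, assuming the meta-llama/Meta-Llama-3-8B checkpoint and enough memory to load it twice:
from transformers import AutoModelForCausalLM
model_id = "meta-llama/Meta-Llama-3-8B"
# dtype="auto" keeps the precision stored in the checkpoint instead of upcasting
model = AutoModelForCausalLM.from_pretrained(model_id, dtype="auto")
print(model.dtype)  # the checkpoint dtype, e.g. torch.float16
# without dtype="auto", the weights are loaded in the torch default precision
model_fp32 = AutoModelForCausalLM.from_pretrained(model_id)
print(model_fp32.dtype)  # torch.float32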
Tips:
Weights for the Llama3 models can be obtained by filling out this form.
The architecture is exactly the same as Llama2.
The tokenizer is a BPE model based on tiktoken (vs the sentencepiece-based one used for Llama2). The main difference is that it ignores BPE merge rules when an input token is part of the vocab. This means that if no merge exists to produce "hugging", the word is not broken down into the smallest units like ["hug", "ging"]; as long as "hugging" is part of the vocab, it is automatically returned as a single token.
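A quick way to check this behavior, as a sketch assuming access to the meta-llama/Meta-Llama-3-8B tokenizer (the example word is arbitrary):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
# if " hugging" is in the vocab, it comes back as a single token rather than
# being split into smaller pieces by BPE merges
print(tokenizer.tokenize(" hugging"))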
The original model uses pad_id = -1, which means that there is no padding token. We can't use the same logic; make sure to add a padding token with tokenizer.add_special_tokens({"pad_token":"<pad>"}) and resize the token embeddings accordingly. You should also set model.config.pad_token_id. The embed_tokens layer of the model is initialized with self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.config.padding_idx), which makes sure that encoding the padding token outputs zeros, so passing it when initializing is recommended.
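Putting that together, a minimal sketch (the <pad> token and the 8B checkpoint are just examples):
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# register a dedicated padding token, grow the embedding matrix to match,
# and record the id in the model config
tokenizer.add_special_tokens({"pad_token": "<pad>"})
model.resize_token_embeddings(len(tokenizer))
model.config.pad_token_id = tokenizer.pad_token_id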
The original checkpoint can be converted using the conversion script. The script can be called with the following (example) command:
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/downloaded/llama/weights --model_size 8B --output_dir /output/path --llama_version 3
After conversion, the model and tokenizer can be loaded via:
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("/output/path")
model = AutoModelForCausalLM.from_pretrained("/output/path")
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even though the biggest versions come in several checkpoints, each checkpoint contains only a part of every weight of the model, so they all need to be loaded in RAM). For the 70B model, that means 145GB of RAM is needed.
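As a quick sanity check after conversion, a short generation sketch (reusing the /output/path location from the example above and assuming a GPU with accelerate installed):
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("/output/path")
model = AutoModelForCausalLM.from_pretrained("/output/path", dtype=torch.bfloat16, device_map="auto")
inputs = tokenizer("Hey how are you doing today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))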
When using Flash Attention 2 via attn_implementation="flash_attention_2", don't pass dtype to the from_pretrained class method and use Automatic Mixed-Precision training. When using Trainer, that simply means setting either fp16 or bf16 to True. Otherwise, make sure you are using torch.autocast. This is required because Flash Attention only supports the fp16 and bf16 data types.
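For example, a minimal sketch of this setup for inference (assuming flash-attn is installed and a CUDA GPU is available):
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# no dtype argument here; precision is handled by autocast below
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="flash_attention_2", device_map="auto")
inputs = tokenizer("Hey how are you doing today?", return_tensors="pt").to(model.device)
with torch.autocast("cuda", dtype=torch.bfloat16):
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))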
Resources
A ton of cool resources are already available on the documentation page of Llama2. Contributors are invited to add new resources curated for Llama3 here! 🤗