---
license: apache-2.0
---

# Mistral 7B V0.1

Implementation of the Mistral 7B model by the *phospho* team. You can test it directly in the HuggingFace space.

## Use in transformers

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, pipeline, TextStreamer

# Load the tokenizer and the model weights from the Hub
tokenizer = LlamaTokenizer.from_pretrained("phospho-app/mistral_7b_V0.1")
model = LlamaForCausalLM.from_pretrained(
    "phospho-app/mistral_7b_V0.1",
    torch_dtype=torch.bfloat16,
)

# Wrap model and tokenizer in a text-generation pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```
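Once the pipeline is created, you can call it on a prompt to generate text. Below is a minimal sketch of such a call, continuing from the snippet above; the prompt and the generation parameters (`max_new_tokens`, `do_sample`, `temperature`) are illustrative choices, not values prescribed by this card.

```python
# Minimal generation sketch; assumes `pipe` from the snippet above.
prompt = "Tell me about large language models."

outputs = pipe(
    prompt,
    max_new_tokens=128,   # cap on newly generated tokens (illustrative)
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,
)

print(outputs[0]["generated_text"])
```

If you want tokens printed as they are produced, the imported `TextStreamer` can be passed to `model.generate` as `streamer=TextStreamer(tokenizer)`.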