GGUF
llama.cpp
juliendenize and patrickvonplaten committed
Commit dca3799 · verified · 0 Parent(s)

Super-squash branch 'main' using huggingface_hub


Co-authored-by: patrickvonplaten <patrickvonplaten@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,39 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Magistral-Small-2509-BF16.gguf filter=lfs diff=lfs merge=lfs -text
Magistral-Small-2509-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Magistral-Small-2509-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Magistral-Small-2509-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Magistral-Small-2509-BF16.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d4e5604e1ee1432d48f263332c9e7a146be45950070eaaecf77f868650df0dc
size 47154041856
Magistral-Small-2509-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d3a024d29e0e8f35d9353f5d4f08fc3715406835c8ae1328ce2f3bd212a43434
size 14334432256
Magistral-Small-2509-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ebfae8b455d37ac8a19362ec659115e225b302c950f93b13c2a4fa03d6bbc45c
size 16764507136
Magistral-Small-2509-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d78f43dc517c744f65183a7f1e3abdc15a52bda352ffd6b07d544f72fb0dd7ef
size 25055302656
README.md ADDED
@@ -0,0 +1,205 @@
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: llama.cpp
inference: false
base_model:
- mistralai/Magistral-Small-2509
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

> [!Note]
> To make our models more accessible to everyone, this repo provides a basic GGUF checkpoint compatible with llama.cpp
> and mistral-common.
>
> In addition to using this GGUF checkpoint, we encourage the community to use other GGUF variants, *e.g.*
> from [Unsloth](https://huggingface.co/unsloth), [LM Studio](https://huggingface.co/lmstudio-community), ...
>
> If you encounter any problems with the provided checkpoints here, please open a discussion or pull request.


# Magistral Small 1.2 (GGUF)

Magistral Small builds upon [Mistral Small 3.2 (2506)](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506) **with added reasoning capabilities**: SFT on Magistral Medium traces followed by RL on top yields a small, efficient 24B-parameter reasoning model.

Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.

This is the GGUF version of the [Magistral-Small-2509](https://huggingface.co/mistralai/Magistral-Small-2509) model. We release the BF16 weights as well as the following quantized formats:
- Q8_0
- Q5_K_M
- Q4_K_M

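For reference, the approximate on-disk sizes of these files (taken from the LFS pointers in this commit) are:

| File | Approx. size |
|------|--------------|
| Magistral-Small-2509-BF16.gguf | ~47.2 GB |
| Magistral-Small-2509-Q8_0.gguf | ~25.1 GB |
| Magistral-Small-2509-Q5_K_M.gguf | ~16.8 GB |
| Magistral-Small-2509-Q4_K_M.gguf | ~14.3 GB |
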
We do **not** release alongside our GGUF files:
- An **official chat template**. Instead, we recommend using [`mistral-common`](#usage), which serves as our source of truth for tokenization and detokenization. Llama.cpp automatically loads a chat template, but it is most likely **incorrect** for Magistral.
- The **vision encoder**, since our recommended usage does not involve multimodality.


## Updates compared with [Magistral Small 1.1](https://huggingface.co/mistralai/Magistral-Small-2507)

- **Multimodality**: The model now has a vision encoder and can take multimodal inputs, extending its reasoning capabilities to vision.
- **Performance upgrade**: Magistral Small 1.2 should give you significantly better performance than Magistral Small 1.1, as seen in the [benchmark results](#benchmark-results).
- **Better tone and persona**: You should see better LaTeX and Markdown formatting, and shorter answers on easy general prompts.
- **Finite generation**: The model is less likely to enter infinite generation loops.
- **Special think tokens**: [THINK] and [/THINK] special tokens encapsulate the reasoning content in a thinking chunk (illustrated schematically after this list). This makes it easier to parse the reasoning trace and prevents confusion when the '[THINK]' token is given as a string in the prompt.
- **Reasoning prompt**: The reasoning prompt is given in the system prompt.

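Schematically, a completion using the new think tokens is structured as follows (an illustration of the format, not verbatim model output):

```
[THINK]
...the model's step-by-step reasoning trace...
[/THINK]
The final answer presented to the user follows the closed thinking chunk.
```
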
## Key Features
- **Reasoning:** Capable of long chains of reasoning traces before providing an answer.
- **Multilingual:** Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
- **Vision**: Vision capabilities enable the model to analyze images and reason based on visual content in addition to text; this is available with our main model [Magistral-Small-2509](https://huggingface.co/mistralai/Magistral-Small-2509) (this GGUF release omits the vision encoder).
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 128k context window. Performance might degrade past **40k**, but Magistral should still give good results. Hence we recommend leaving the maximum model length at 128k and only lowering it if you encounter low performance (see the sketch after this list).

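As a rough sketch of what lowering the context length could look like in practice with `llama-server` (the `-c` / `--ctx-size` flag sets the context size in tokens; the model path here is a placeholder for wherever you downloaded the GGUF file):

```sh
# Default recommendation: keep the full 128k (131072-token) context window.
llama-server -m Magistral-Small-2509-Q4_K_M.gguf -c 131072

# Only if you see degraded outputs or memory pressure, cap the context lower, e.g. around 40k tokens.
llama-server -m Magistral-Small-2509-Q4_K_M.gguf -c 40960
```
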
## Usage

We recommend using Magistral Small 1.2 GGUF with **[llama.cpp](https://github.com/ggml-org/llama.cpp/tree/master)** along with the **[mistral-common >= 1.8.5](https://mistralai.github.io/mistral-common/) server**. See [here](https://mistralai.github.io/mistral-common/usage/experimental/) for the documentation of the `mistral-common` server.

This recommended usage does **not** support vision.

> [!Note]
> We do **not** believe we can guarantee correct behavior using the integrated, stringified chat template, hence
> mistral-common should be used as the reference. **However**, we strongly encourage community members to use this GGUF checkpoint
> and mistral_common as a reference implementation to build a correct integrated, stringified chat template.

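As a starting point for such work, here is a minimal sketch of how the reference tokenization can be inspected with `mistral-common` (the `MistralTokenizer.from_hf_hub` helper and the `.text` / `.tokens` attributes are assumed from recent `mistral-common` releases; treat this as illustrative rather than official):

```python
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

# Load the reference tokenizer for Magistral Small 1.2 from the Hub (assumed helper).
tokenizer = MistralTokenizer.from_hf_hub("mistralai/Magistral-Small-2509")

# Encode a tiny conversation exactly as the model should see it.
tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content="Hello, who are you?")])
)

print(tokenized.text)          # rendered prompt string: the ground truth a chat template should reproduce
print(len(tokenized.tokens))   # corresponding token ids
```
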
### Install

1. Install `llama.cpp` following their [guidelines](https://github.com/ggml-org/llama.cpp/blob/master/README.md#quick-start).

2. Install `mistral-common >= 1.8.5` with its dependencies.

```sh
pip install --upgrade mistral-common[server,hf-hub]
```

3. Download the weights from Hugging Face.

```sh
pip install -U "huggingface_hub[cli]"

huggingface-cli download \
  "mistralai/Magistral-Small-2509-GGUF" \
  --include "Magistral-Small-2509-Q4_K_M.gguf" \
  --local-dir "mistralai/Magistral-Small-2509-GGUF/"
```

### Launch the servers

1. Launch the `llama.cpp` server.

```sh
llama-server -m mistralai/Magistral-Small-2509-GGUF/Magistral-Small-2509-Q4_K_M.gguf -c 0
```

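For reference, `-c 0` asks `llama-server` to read the context size from the GGUF metadata rather than using its default, so the model's full 128k window is available. To check that the server is up before wiring in `mistral-common`, recent `llama.cpp` builds expose a health endpoint (8080 is the default `llama-server` port):

```sh
# Returns a small JSON status payload once the model has finished loading.
curl http://localhost:8080/health
```
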
2. Launch the `mistral-common` server and pass it the URL of the `llama.cpp` server.

This is the server that will handle tokenization and detokenization, calling the `llama.cpp` server for generation.

```sh
mistral_common serve mistralai/Magistral-Small-2509 \
  --host localhost --port 6000 \
  --engine-url http://localhost:8080 --engine-backend llama_cpp \
  --timeout 300
```

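Once both servers are running, you can sanity-check the pipeline from the shell before writing any Python. This sketch posts a minimal request (without the recommended system prompt) to the same `/v1/chat/completions` route used by the `generate` helper below; the field names mirror the request dict built in the next section:

```sh
curl http://localhost:6000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": [{"type": "text", "text": "Say hello."}]}],
        "temperature": 0.7,
        "top_p": 0.95,
        "max_tokens": 256
      }'
```
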
### Use the model

1. Let's define the function to call the servers:

**generate**: calls the `mistral-common` server, which tokenizes the request, calls the `llama.cpp` server to generate new tokens, and detokenizes the output into an [`AssistantMessage`](https://mistralai.github.io/mistral-common/code_reference/mistral_common/protocol/instruct/messages/#mistral_common.protocol.instruct.messages.AssistantMessage) with the think chunk and tool calls parsed.

```python
from mistral_common.protocol.instruct.messages import AssistantMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.experimental.app.models import OpenAIChatCompletionRequest
from fastapi.encoders import jsonable_encoder
import requests

mistral_common_url = "http://127.0.0.1:6000"

def generate(
    request: dict | ChatCompletionRequest | OpenAIChatCompletionRequest, url: str
) -> AssistantMessage:
    # Post to the mistral-common server, which tokenizes, forwards to llama.cpp
    # for generation, and detokenizes the result into an AssistantMessage.
    response = requests.post(
        f"{url}/v1/chat/completions", json=jsonable_encoder(request)
    )
    if response.status_code != 200:
        raise ValueError(f"Error: {response.status_code} - {response.text}")
    return AssistantMessage(**response.json())
```

2. Tokenize the input, call the model and detokenize the output.

```python
from typing import Any
from huggingface_hub import hf_hub_download


TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 131072

def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
    # Download the official system prompt and split it around the [THINK] ... [/THINK]
    # markers so the reasoning instructions land in a dedicated thinking chunk.
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()

    index_begin_think = system_prompt.find("[THINK]")
    index_end_think = system_prompt.find("[/THINK]")

    return {
        "role": "system",
        "content": [
            {"type": "text", "text": system_prompt[:index_begin_think]},
            {
                "type": "thinking",
                "thinking": system_prompt[
                    index_begin_think + len("[THINK]") : index_end_think
                ],
                "closed": True,
            },
            {
                "type": "text",
                "text": system_prompt[index_end_think + len("[/THINK]") :],
            },
        ],
    }

SYSTEM_PROMPT = load_system_prompt("mistralai/Magistral-Small-2509", "SYSTEM_PROMPT.txt")

query = "Use each number in 2,5,6,3 exactly once, along with any combination of +, -, ×, ÷ (and parentheses for grouping), to make the number 24."

messages = [SYSTEM_PROMPT, {"role": "user", "content": [{"type": "text", "text": query}]}]

request = {"messages": messages, "temperature": TEMP, "top_p": TOP_P, "max_tokens": MAX_TOK}

generated_message = generate(request, mistral_common_url)
print(generated_message)
```

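Finally, since the returned `AssistantMessage` carries the reasoning in a dedicated thinking chunk, you may want to separate the trace from the final answer. A minimal sketch follows, assuming the `TextChunk` / `ThinkChunk` chunk classes and their `text` / `thinking` fields from recent `mistral-common` releases (the helper name `split_reasoning` is ours):

```python
from mistral_common.protocol.instruct.messages import AssistantMessage, TextChunk, ThinkChunk

def split_reasoning(message: AssistantMessage) -> tuple[str, str]:
    """Return (reasoning_trace, final_answer) from a parsed AssistantMessage."""
    reasoning: list[str] = []
    answer: list[str] = []
    # Content may be a plain string or a list of typed chunks.
    content = message.content if isinstance(message.content, list) else [TextChunk(text=message.content or "")]
    for chunk in content:
        if isinstance(chunk, ThinkChunk):
            reasoning.append(chunk.thinking)
        elif isinstance(chunk, TextChunk):
            answer.append(chunk.text)
    return "".join(reasoning), "".join(answer)

trace, answer = split_reasoning(generated_message)
print("Reasoning trace:\n", trace)
print("Final answer:\n", answer)
```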