This model is a straightforward copy of the [original 3B parameter model](https://huggingface.co/1bitLLM/bitnet_b1_58-3B/tree/main). Thanks to [Green-Sky](https://huggingface.co/Green-Sky/bitnet_b1_58-3B-GGUF) for providing similar work. This repository contains only the following files:

* HF to GGUF converted model in `f16` precision -> `model_f16.gguf`
  * It was converted using `llama.cpp` at [this specific commit](https://github.com/ggerganov/llama.cpp/pull/8151/commits/45719a2472dd43bc3ba43d27d61fec34c6c14cb2).
  * Command: `python3 path_to_llama_cpp/convert_hf_to_gguf.py path_to_original_model --outfile ./model_f16.gguf --outtype f16`
* Quantized GGUF model in the [`Q1_3`](https://github.com/ggerganov/llama.cpp/pull/8151#issuecomment-2198043857) format
  * Quantization was done via `llama-quantize` at that same commit (see the sketch after this list).
* Quantized GGUF model in the [`Q2_2`](https://github.com/ggerganov/llama.cpp/pull/8151#issuecomment-2198043857) format
  * Quantization was done via `llama-quantize` at that same commit.
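
For reference, a minimal sketch of that quantization step, assuming `llama-quantize` was built from the same PR commit (the `Q1_3` output file name is illustrative; `llama-quantize` takes the input GGUF, the output path, and the quantization type as positional arguments):

```bash
# quantize the f16 GGUF into each ternary format added by PR #8151
./llama-quantize model_f16.gguf model_quant_Q1_3.gguf Q1_3
./llama-quantize model_f16.gguf model_quant_Q2_2.gguf Q2_2
```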
 
Please keep in mind that if you want to test this model through `llama-cli` on Metal (e.g., on a MacBook Pro with an M3 Pro, as I did), you need to pass the `--n-gpu-layers 0` flag; otherwise the following error occurs:
```text
/Users/basavyr/Repos/external/llama.cpp/llama-cli -m model_quant_Q2_2.gguf -p "hey there"
Log start
main: build = 3505 (45719a24)
main: built with Apple clang version 15.0.0 (clang-1500.3.9.4) for arm64-apple-darwin23.6.0
main: seed  = 1724230525
llama_model_loader: loaded meta data with 30 key-value pairs and 470 tensors from model_quant_Q2_2.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.

.........................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Pro
ggml_metal_init: picking default device: Apple M3 Pro
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M3 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 12884.92 MB
llama_kv_cache_init:      Metal KV buffer size =   650.00 MiB
llama_new_context_with_model: KV self size  =  650.00 MiB, K (f16):  325.00 MiB, V (f16):  325.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.12 MiB
llama_new_context_with_model:      Metal compute buffer size =   157.00 MiB
llama_new_context_with_model:        CPU compute buffer size =    62.50 MiB
llama_new_context_with_model: graph nodes  = 1124
llama_new_context_with_model: graph splits = 3
ggml/src/ggml-metal.m:1612: MUL MAT-MAT not implemented
ggml/src/ggml-metal.m:1612: MUL MAT-MAT not implemented
[1]    26436 abort      /Users/basavyr/Repos/external/llama.cpp/llama-cli -m model_quant_Q2_2.gguf -p
```
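
With `--n-gpu-layers 0` all layers stay on the CPU backend, which does implement the `Q2_2` matmul, and generation proceeds normally; a minimal sketch of the working invocation:

```bash
# keep every layer on the CPU backend; Q2_2 has no Metal matmul kernel at this commit
./llama-cli -m model_quant_Q2_2.gguf -p "hey there" --n-gpu-layers 0
```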