llama.cpp Quantizations of DeepSeek-V3-0324 (MLA version)

Original model: deepseek-ai/DeepSeek-V3-0324. The BF16 weights and imatrix were adopted from unsloth/DeepSeek-V3-0324-BF16.

All quants were made with a modified llama.cpp, based on Bobchenyx/llama.cpp.

- IQ1_S : 129.94 GiB (1.66 BPW)
- IQ1_M : 144.24 GiB (1.85 BPW)
- Q2_K_L : 222.01 GiB (2.84 BPW)
- Q4_K_L : 381.64 GiB (4.89 BPW)
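As a quick sanity check, BPW (bits per weight) is simply the file size in bits divided by the parameter count (671B, per the GGUF metadata below). A minimal sketch of that arithmetic for the sizes listed above (the GiB-to-bits conversion is an assumption about how the figures were computed, though it reproduces them):

```python
# Bits per weight = total bits on disk / number of parameters.
PARAMS = 671e9  # DeepSeek-V3 parameter count

sizes_gib = {
    "IQ1_S": 129.94,
    "IQ1_M": 144.24,
    "Q2_K_L": 222.01,
    "Q4_K_L": 381.64,
}

for name, gib in sizes_gib.items():
    bpw = gib * 2**30 * 8 / PARAMS
    print(f"{name}: {bpw:.2f} BPW")  # matches the listed 1.66 / 1.85 / 2.84 / 4.89
```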

Smallest Compression (103 GB)

For our smallest compressed version (103 GB), please refer to tflsxyy/DeepSeek-V3-0324-E192 and bobchenyx/DeepSeek-V3-0324-508B-A32B-MLA-GGUF for more details.


Download Guide

```python
# !pip install huggingface_hub hf_transfer
import os

# Enable the hf_transfer backend for faster downloads
# (must be set before huggingface_hub is imported).
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download

# Fetch only the IQ1_M split files from this repo;
# adjust allow_patterns to grab a different quant (e.g. "*Q2_K_L*").
snapshot_download(
    repo_id="bobchenyx/DeepSeek-V3-0324-MLA-GGUF",
    local_dir="bobchenyx/DeepSeek-V3-0324-MLA-GGUF",
    allow_patterns=["*IQ1_M*"],
)
```
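After the download finishes, you can sanity-check what landed on disk. This is a minimal sketch (not part of the original guide) that assumes the default `local_dir` used above; it lists the downloaded GGUF splits and their combined size:

```python
from pathlib import Path

local_dir = Path("bobchenyx/DeepSeek-V3-0324-MLA-GGUF")

# List every downloaded GGUF split and report the total size.
total_bytes = 0
for f in sorted(local_dir.rglob("*.gguf")):
    size = f.stat().st_size
    total_bytes += size
    print(f"{f.name}: {size / 2**30:.2f} GiB")

print(f"total: {total_bytes / 2**30:.2f} GiB")  # ~144 GiB for IQ1_M, per the list above
```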
Model size: 671B params. Architecture: deepseek2.
