
AmpereComputing/mistral-small-3.2-2506-24b-instruct-gguf
Ampere's quantization formats (Q4_K_4 / Q8R16) require the Ampere-optimized build of llama.cpp, available here: https://hub.docker.com/r/amperecomputingai/llama.cpp
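
As a rough sketch of how this might be used (the image tag, entrypoint, and model filename below are assumptions, not confirmed by this card; consult the Docker Hub page for the supported invocation):

```bash
# Pull Ampere's optimized llama.cpp image (tag assumed; check Docker Hub for available tags)
docker pull amperecomputingai/llama.cpp:latest

# Run with a local models directory mounted into the container.
# This assumes the image ships the standard llama.cpp tools (e.g. llama-cli);
# the actual entrypoint and CLI may differ.
docker run --rm -it -v "$(pwd)/models:/models" \
  amperecomputingai/llama.cpp:latest \
  llama-cli -m /models/mistral-small-3.2-2506-24b-instruct.Q8R16.gguf \
  -p "Hello" -n 128
```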