Quantized Hermes-4-14B Model
This repository provides quantized GGUF versions of the Hermes-4-14B model. The 4-bit and 5-bit quantized variants retain the model's strengths in advanced reasoning tasks while reducing memory and compute requirements, making them well suited to efficient inference on resource-constrained devices.
Model Overview
- Original Model: Hermes-4-14B
- Quantized Version:
- Q4_K_M (4-bit quantization)
- Q5_K_M (5-bit quantization)
- Architecture: Decoder-only transformer
- Base Model: Qwen3-14B-Base
- Modalities: Text only
- Developer: Nous Research
- License: Apache 2.0
- Language: English
Quantization Details
Q4_K_M Version
- Approximately 69% size reduction
- Lower memory footprint (~9 GB)
- Slight performance degradation in complex reasoning scenarios
Q5_K_M Version
- Approximately 64% size reduction
- Lower memory footprint (~10.5 GB)
- Better performance retention, recommended when quality is a priority
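The memory figures above follow from simple arithmetic: on-disk size is roughly the parameter count times the average bits per weight. A minimal sketch, assuming a nominal 14B parameter count and approximate bits-per-weight values for these quant types (the exact effective bits per weight vary by tensor and llama.cpp version):

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough on-disk size: parameters x bits per weight / 8 bits per byte."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumed approximate effective bits per weight for these quant types.
q4_gb = approx_gguf_size_gb(14e9, 4.85)  # Q4_K_M: roughly 8.5 GB
q5_gb = approx_gguf_size_gb(14e9, 5.7)   # Q5_K_M: roughly 10 GB
```

Actual runtime memory use is higher than file size once the KV cache and compute buffers are allocated.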
Key Features
- High-quality, expressive reasoning with improvements in math, code, STEM, logic, and even creative writing and subjective responses
- Instruction following optimized for multi-turn scientific question answering
- Schema adherence & structured outputs: trained to produce valid JSON for given schemas and to repair malformed objects.
- Much easier to steer and align: large improvements in steerability, especially reduced refusal rates
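The schema-adherence feature above pairs naturally with a parse-then-repair loop on the application side. A minimal sketch, assuming a caller-supplied `generate_fn` that sends a prompt to the model and returns its text (stubbed here for illustration):

```python
import json

def parse_or_repair(raw: str, generate_fn):
    """Parse model output as JSON; on failure, ask the model to repair it once."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        repaired = generate_fn(
            "Repair this malformed JSON and return only the corrected object:\n" + raw
        )
        return json.loads(repaired)

# Stubbed generate_fn; in practice this would call the quantized model.
fixed = parse_or_repair('{"a": 1,}', lambda prompt: '{"a": 1}')
```

If the repaired output still fails to parse, the second `json.loads` raises, so callers can decide whether to retry or surface the error.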
Usage Example
Text Inference:
./llama-cli -hf NousResearch/Hermes-4-14B-Q4_K_M.GGUF -p "Explain the Fourier Transform in simple terms"
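When driving the model programmatically rather than through llama-cli, prompts need to follow the ChatML template used by the Hermes series. A minimal sketch of building such a prompt (the role names and special tokens below reflect standard ChatML; verify against the tokenizer config shipped with the model):

```python
def chatml_prompt(messages):
    """Format a list of {"role", "content"} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Open the assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = chatml_prompt(
    [{"role": "user", "content": "Explain the Fourier Transform in simple terms"}]
)
```

The resulting string can be passed as the prompt to any GGUF runtime that does not apply a chat template automatically.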
Recommended Use Cases
Scientific reasoning & STEM domains: tasks requiring step-by-step logical reasoning, clean structure.
Coding & software-related tasks: code generation, explanation, debugging.
Chatbots/Assistants: where reasoning transparency is important (showing chain of thought).
Low-resource deployment / edge inference: use quantized variants.
Acknowledgments
These quantized models are based on the original work by the NousResearch development team.
Special thanks to:
The NousResearch team for developing and releasing the Hermes-4-14B model.
Georgi Gerganov and the entire llama.cpp open-source community for enabling efficient model quantization and inference via the GGUF format.
Contact
For any inquiries or support, please contact us at support@sandlogic.com or visit our Website.