Qwen2.5-Coder-1.5B

Model creator: Qwen
Original model: Qwen/Qwen2.5-Coder-1.5B
GGUF quantization: provided by olegshulyakov using llama.cpp

Special thanks

๐Ÿ™ Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

Use with Ollama

ollama run "hf.co/olegshulyakov/Qwen2.5-Coder-1.5B-GGUF:Q5_K_M"
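Once the model has been pulled with the command above, it can also be called from Python. The snippet below is a minimal sketch using the official ollama Python package (pip install ollama); it assumes the Ollama daemon is running locally, and the prompt is only an illustration.

import ollama

# Minimal sketch: assumes the Ollama daemon is running and the model tag
# below has already been pulled with `ollama run` or `ollama pull`.
response = ollama.chat(
    model="hf.co/olegshulyakov/Qwen2.5-Coder-1.5B-GGUF:Q5_K_M",
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."}
    ],
)
print(response["message"]["content"])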

Use with LM Studio

lms load "olegshulyakov/Qwen2.5-Coder-1.5B-GGUF"
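LM Studio can also serve the loaded model through its OpenAI-compatible local server (started from the app or with lms server start). The snippet below is a minimal sketch using the openai Python package; the port (1234 is LM Studio's default) and the model identifier are assumptions, so check the names reported by the server's /v1/models endpoint.

from openai import OpenAI

# Minimal sketch: assumes LM Studio's local server is running on the default
# port 1234 and the model loaded above is being served. The model identifier
# is an assumption; use the name reported by GET /v1/models.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="olegshulyakov/Qwen2.5-Coder-1.5B-GGUF",
    messages=[
        {"role": "user", "content": "Explain what a Python list comprehension is."}
    ],
)
print(completion.choices[0].message.content)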

Use with llama.cpp CLI

llama-cli --hf "olegshulyakov/Qwen2.5-Coder-1.5B-GGUF:Q5_K_M" -p "The meaning of life and the universe is"

Use with llama.cpp Server

llama-server --hf "olegshulyakov/Qwen2.5-Coder-1.5B-GGUF:Q5_K_M" -c 4096
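llama-server exposes an OpenAI-compatible HTTP API, listening on port 8080 by default. The snippet below is a minimal sketch that posts a chat completion request with the requests package; the port and the prompt are assumptions based on the default server settings.

import requests

# Minimal sketch: assumes llama-server was started with the command above
# and is listening on the default port 8080.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Write a SQL query that counts orders per day."}
        ],
        "temperature": 0.2,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])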
Model details

Model size: 1.54B params
Architecture: qwen2
Base model: Qwen/Qwen2.5-1.5B
Quantization: 5-bit GGUF (Q5_K_M)