This is a packaged Q8_0-only model from https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF. It runs in 9-12 GB of VRAM (the 8.2 GB of Q8_0 weights plus KV-cache and runtime overhead) with negligible quality loss.

Weighted/imatrix quants are available at https://huggingface.co/mradermacher/ReSearch-Qwen-7B-i1-GGUF

Intro

This is a base (completion) model: do NOT apply chat completion or a chat template to it. Send raw prompts to the generate endpoint instead (see the examples below).

Setup

Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Go into your favourite folder

# make sure you have Python 3.8+
# apt-get update && apt-get install -y libcurl4-openssl-dev build-essential curl
pip install huggingface-hub ollama
huggingface-cli download Manojb/Qwen-7B-toolcalling-ReSearch-gguf-Q8_0 --local-dir Qwen-7B-toolcalling-ReSearch-gguf-Q8_0
cd "$(find . -type d -iname '*Qwen-7B-toolcalling-ReSearch-gguf-Q8_0*' | head -n 1)"
source run_model.sh
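
Alternatively, the download step can be scripted from Python with huggingface_hub. A minimal sketch; it discovers the .gguf filename at runtime instead of hard-coding it:

from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "Manojb/Qwen-7B-toolcalling-ReSearch-gguf-Q8_0"

# Find the GGUF file in the repo and download it into the current directory
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0], local_dir=".")
print(f"Model downloaded to: {path}")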

Or create and run the model with Ollama directly:

# Create the model from the bundled ModelFile, then run it
ollama create qwen-7b:toolcall -f ModelFile
ollama run qwen-7b:toolcall   # raw completion mode (no chat template)
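
For reference, here is a minimal Modelfile sketch. The FROM path is a placeholder (point it at the .gguf you actually downloaded), and the parameters are illustrative defaults, not values shipped with this repo; since this is a base model, no TEMPLATE directive is set:

# FROM must point at your local GGUF file (placeholder name below)
FROM ./ReSearch-Qwen-7B.Q8_0.gguf

# Illustrative sampling defaults; tune to taste
PARAMETER temperature 0.7
PARAMETER num_ctx 4096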

Basic Function Calling

For the base model (this repo), use the raw generate endpoint:

curl http://localhost:11434/api/generate -H "Content-Type: application/json" -d '{
  "model": "qwen-7b:toolcall",
  "prompt": "Get the current weather in San Francisco and convert to Celsius",
  "stream": false
}'

Or from Python:

import requests

response = requests.post('http://localhost:11434/api/generate', json={
    'model': 'qwen-7b:toolcall',
    'prompt': 'Get the current weather in San Francisco and convert to Celsius',
    'stream': False
})

print(response.json()['response'])
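
Since the model is served through the raw generate endpoint, any tool call it makes arrives as plain text in the response. A minimal parsing sketch, assuming the model emits its call as an inline JSON object (the exact output format depends on how you prompt it):

import json
import re

import requests

response = requests.post('http://localhost:11434/api/generate', json={
    'model': 'qwen-7b:toolcall',
    'prompt': 'Get the current weather in San Francisco and convert to Celsius',
    'stream': False
})
text = response.json()['response']

# Grab the first {...} span in the completion and try to parse it as a tool call
match = re.search(r'\{.*\}', text, re.DOTALL)
if match:
    try:
        print('tool call:', json.loads(match.group(0)))
    except json.JSONDecodeError:
        print('no parseable tool call; raw output:', text)
else:
    print('raw output:', text)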

For instruct models, use the chat endpoint instead:

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "stream": false,
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Why is the sky blue?"}
  ]
}'

Or from Python:

from ollama import chat

# Use an instruct model here; this repo's base model should not be given a chat template
model_name = "llama3.2"

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Why is the sky blue?"}
]

response = chat(model=model_name, messages=messages)
print(response.message.content)
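
Instruct models that support tools can also receive structured tool definitions through the chat endpoint. A minimal sketch using the ollama library's tools parameter; the get_weather schema is made up for illustration, and the model must be one with tool support (e.g. llama3.2):

from ollama import chat

# Hypothetical tool schema, OpenAI function-calling style
tools = [{
    'type': 'function',
    'function': {
        'name': 'get_weather',
        'description': 'Get the current weather for a city',
        'parameters': {
            'type': 'object',
            'properties': {'city': {'type': 'string'}},
            'required': ['city'],
        },
    },
}]

response = chat(
    model='llama3.2',  # an instruct model with tool support
    messages=[{'role': 'user', 'content': 'What is the weather in San Francisco?'}],
    tools=tools,
)

# If the model chose to call a tool, the structured calls are on the message
for call in response.message.tool_calls or []:
    print(call.function.name, call.function.arguments)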

ReSearch is a novel framework that trains LLMs to Reason with Search via reinforcement learning, without using any supervised data on reasoning steps. The approach treats search operations as integral components of the reasoning chain: when and how to perform searches is guided by text-based thinking, and search results in turn influence further reasoning.
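
In practice that means inference runs as a loop: generate until the model requests a search, execute the search, feed the results back, and let generation continue. Below is a minimal driver sketch against the Ollama generate API; the <search>/<result> tag names and the run_search stub are assumptions for illustration, not the exact protocol from the ReSearch paper:

import re
import requests

def run_search(query: str) -> str:
    # Stub: plug in your real retrieval backend here
    return f"(search results for: {query})"

prompt = "Question: Who directed the film that won Best Picture in 2020?\n"
for _ in range(5):  # cap the number of reasoning/search rounds
    r = requests.post('http://localhost:11434/api/generate', json={
        'model': 'qwen-7b:toolcall',
        'prompt': prompt,
        'stream': False,
        'options': {'stop': ['</search>']},  # pause whenever a search is requested
    })
    text = r.json()['response']
    prompt += text
    query = re.search(r'<search>(.*)$', text, re.DOTALL)
    if query is None:
        break  # no further search requested; the answer is in the final text
    # Close the tag, splice in the results, and continue reasoning
    prompt += '</search>\n<result>' + run_search(query.group(1).strip()) + '</result>\n'

print(prompt)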

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.

Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

Link  Type    Size/GB  Notes
GGUF  Q2_K    3.1
GGUF  Q3_K_S  3.6
GGUF  Q3_K_M  3.9      lower quality
GGUF  Q3_K_L  4.2
GGUF  IQ4_XS  4.4
GGUF  Q4_K_S  4.6      fast, recommended
GGUF  Q4_K_M  4.8      fast, recommended
GGUF  Q5_K_S  5.4
GGUF  Q5_K_M  5.5
GGUF  Q6_K    6.4      very good quality
GGUF  Q8_0    8.2      fast, best quality
GGUF  f16     15.3     16 bpw, overkill
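
A rough way to choose from this table: take the largest quant whose file size, plus some headroom for KV cache and compute buffers, fits your VRAM. A small sketch over the sizes above (the ~15% headroom factor is an assumption, not a measured figure):

# File sizes (GB) from the table above
QUANTS = {
    'Q2_K': 3.1, 'Q3_K_S': 3.6, 'Q3_K_M': 3.9, 'Q3_K_L': 4.2,
    'IQ4_XS': 4.4, 'Q4_K_S': 4.6, 'Q4_K_M': 4.8, 'Q5_K_S': 5.4,
    'Q5_K_M': 5.5, 'Q6_K': 6.4, 'Q8_0': 8.2, 'f16': 15.3,
}

def pick_quant(vram_gb, headroom=1.15):
    # Largest quant that still fits after the (assumed) headroom multiplier
    fitting = {k: v for k, v in QUANTS.items() if v * headroom <= vram_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(12))  # -> 'Q8_0'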

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[image: quant quality comparison graph, not reproduced here]

Model details

Format: GGUF (Q8_0, 8-bit)
Model size: 7.62B params
Architecture: qwen2
Base model: Qwen/Qwen2.5-7B
