Model request - Qwen3-235B-A22B-Instruct-2507-GPTQ-Int4

#1
by djdeniro - opened

This model launches with AMD + vLLM and works 100%!

Can you create Qwen3-235B-A22B-Instruct-2507-GPTQ-Int4?

Thanks for confirming it works 100% on AMD + vLLM!
At the moment we don’t have a pure Qwen3-235B-A22B-Instruct-2507-GPTQ-Int4 build available. In the meantime, you can try our other 4-bit quantized models (a loading sketch follows the links):

Qwen3-235B-A22B-Instruct-2507-AWQ: https://huggingface.co/QuantTrio/Qwen3-235B-A22B-Instruct-2507-AWQ

Qwen3-235B-A22B-Instruct-2507-GPTQ-Int4-Int8Mix: https://huggingface.co/QuantTrio/Qwen3-235B-A22B-Instruct-2507-GPTQ-Int4-Int8Mix
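
For reference, a minimal sketch of loading the AWQ build with vLLM's offline Python API. The tensor-parallel size, context length, and memory settings below are assumptions for a multi-GPU box, not tested values; adjust them to your own hardware.

```python
# Minimal sketch: loading the AWQ quant with vLLM's offline API.
# Assumptions: 8 GPUs for tensor parallelism, a reduced context length,
# and a conservative memory fraction -- tune these for your setup.
from vllm import LLM, SamplingParams

llm = LLM(
    model="QuantTrio/Qwen3-235B-A22B-Instruct-2507-AWQ",
    quantization="awq",            # assumption: explicit flag; vLLM can also auto-detect
    tensor_parallel_size=8,        # assumption: depends on GPU count / VRAM
    max_model_len=8192,            # assumption: lowered to fit memory
    gpu_memory_utilization=0.90,
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Give me a one-sentence summary of vLLM."], params)
print(outputs[0].outputs[0].text)
```

The same model name and flags should also work with the `vllm serve` CLI if you want an OpenAI-compatible endpoint instead of the offline API.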

Int8Mix and AWQ don't work with vLLM on AMD 7900 XTX GPU cards, which is why I created this request. Anyway, I've subscribed to you, and thank you for the answer!

QuantTrio org

I don’t think this issue is caused by the model itself. I noticed that some AMD developers recently modified the qwen3_moe.py code in vLLM, which introduced certain problems. I’d recommend waiting until the next official vLLM release (v0.10.2) before trying again. For updates on AMD-related development, please follow this PR: https://github.com/vllm-project/vllm/pull/23994
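
If it helps, here is a small sketch for gating a retry on the installed vLLM version, assuming the fix lands in v0.10.2 as mentioned above:

```python
# Sketch: only retry the MoE quant once the installed vLLM is at least v0.10.2,
# since earlier builds may carry the qwen3_moe.py regression discussed above.
from packaging.version import Version
import vllm

MIN_FIXED = Version("0.10.2")  # assumption: the release expected to carry the fix
installed = Version(vllm.__version__)

if installed >= MIN_FIXED:
    print(f"vLLM {installed} should include the fix; worth retrying the model.")
else:
    print(f"vLLM {installed} predates {MIN_FIXED}; consider upgrading first.")
```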

And we really appreciate your subscription!

djdeniro changed discussion status to closed
