HawkonLi/Hunyuan-A52B-Instruct-2bit

Introduction

This model was converted to MLX format from tencent-community/Hunyuan-A52B-Instruct.

mlx-lm version: 0.21.0

Conversion parameters:

q_group_size: 128

q_bits: 2
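For reference, a conversion with these settings can be reproduced through mlx-lm's convert API. This is a minimal sketch, assuming the convert() signature of mlx-lm 0.21.0 and enough local disk space for the original FP16 weights; the output directory name is just an example:

from mlx_lm import convert

# Quantize the original weights to 2 bits with a group size of 128,
# matching the parameters listed above.
convert(
    hf_path="tencent-community/Hunyuan-A52B-Instruct",
    mlx_path="Hunyuan-A52B-Instruct-2bit",  # local output directory (example name)
    quantize=True,
    q_bits=2,
    q_group_size=128,
)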

Based on testing, this model can only barely run local inference on a 16-inch MacBook Pro (M3 Max, 128 GB RAM). The following command must be executed before running the model:

 sudo sysctl iogpu.wired_limit_mb=105000  

This command requires macOS 15.0 or higher to work.

This model requires 104,259 MB of memory, which exceeds the default recommended working-set limit of 98,384 MB on an M3 Max with 128 GB RAM. The command above raises the system's wired (GPU-accessible) memory limit so the model can fit. Please note that this may cause unexpected system lag or interruptions.
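To verify the limit before loading the model, both values can be read back without sudo. A minimal sketch using only the Python standard library; a reading of 0 for iogpu.wired_limit_mb normally means the system default limit is still in effect:

import subprocess

def sysctl(name: str) -> int:
    # Read a numeric sysctl value on macOS.
    out = subprocess.run(
        ["sysctl", "-n", name], capture_output=True, text=True, check=True
    )
    return int(out.stdout.strip())

total_ram_mb = sysctl("hw.memsize") // (1024 * 1024)
wired_limit_mb = sysctl("iogpu.wired_limit_mb")

print(f"Total RAM:       {total_ram_mb:,} MB")
print(f"GPU wired limit: {wired_limit_mb:,} MB")

# The model needs roughly 104,259 MB; 0 means the (lower) default limit applies.
if wired_limit_mb < 104_259:
    print("Wired limit too low for this model; run the sysctl command above first.")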

Use with mlx

Install the package:

pip install mlx-lm

Then load the model and run a test prompt:

from mlx_lm import load, generate

# lazy=True defers materializing the weights until they are needed,
# which keeps the initial memory footprint lower for a model this large.
model, tokenizer = load(
    "HawkonLi/Hunyuan-A52B-Instruct-2bit",
    tokenizer_config={"eos_token": "<|endoftext|>", "trust_remote_code": True},
    lazy=True,
)

# Example prompt (Chinese): "My Bluetooth earbuds are broken. Should I see
# a dentist or an ENT doctor?"
prompt = "่“็‰™่€ณๆœบๅไบ†๏ผŒ่ฏฅๅŽป็œ‹็‰™็ง‘่ฟ˜ๆ˜ฏ่€ณ็ง‘"

# Wrap the prompt in the model's chat template if the tokenizer provides one.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
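generate also accepts a max_tokens argument to cap the response length, and MLX can report how much memory a run actually used at its peak, which is useful on a machine this close to its limit. A minimal sketch; note that mx.metal.get_peak_memory() is an assumption about the installed MLX version (newer releases expose it as mx.get_peak_memory() instead):

import mlx.core as mx
from mlx_lm import load, generate

model, tokenizer = load(
    "HawkonLi/Hunyuan-A52B-Instruct-2bit",
    tokenizer_config={"eos_token": "<|endoftext|>", "trust_remote_code": True},
    lazy=True,
)

# Cap the response length so a long generation cannot push memory use
# beyond the already tight wired limit.
response = generate(
    model,
    tokenizer,
    prompt="Briefly introduce yourself.",
    max_tokens=256,
    verbose=True,
)

# Peak memory used by MLX during the run, in MB.
# Assumption: this helper is available under mx.metal in the installed version.
print(f"Peak memory: {mx.metal.get_peak_memory() / (1024 * 1024):,.0f} MB")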