🧠 ZeroXClem-Qwen3-4B-Hermes-Axion-Pro


Overview

ZeroXClem-Qwen3-4B-Hermes-Axion-Pro is a powerful, safety-conscious, and deeply intelligent merge crafted via Model Stock merging using MergeKit. This 4B-parameter model blends the best of Hermes-3, Axion-Thinking, and Qwen3-Pro, optimized for deep reasoning, safe generation, and dynamic roleplay.

It’s designed to excel in structured problem-solving, multi-turn dialogue, and creative writing, while maintaining safe behavior aligned through red teaming and post-training.

This model excels at reasoning and hard tasks! In LM Studio, use the default (Jinja) chat template setting for best inference.


πŸ”§ Merge Details

YAML Configuration

```yaml
name: ZeroXClem-Qwen3-4B-Hermes-Axion-Pro
base_model: bunnycore/Qwen3-4B-Pro
dtype: bfloat16
merge_method: model_stock
models:
  - model: ertghiu256/Qwen3-4b-tcomanr-merge-v2.2
  - model: ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3
  - model: Qwen/Qwen3-4B-Thinking-2507
tokenizer_source: Qwen/Qwen3-4B-Thinking-2507
```
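Assuming the configuration above is saved as `merge-config.yaml`, a merge like this can be reproduced with mergekit's `mergekit-yaml` CLI (the output path is illustrative):

```shell
# Install mergekit, then run the model_stock merge described above.
pip install mergekit

# Reads the YAML recipe and writes the merged weights to ./merged-model.
# --cuda uses the GPU if available; drop it for a CPU-only merge.
mergekit-yaml merge-config.yaml ./merged-model --cuda
```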

🧬 Models Merged

🧠 ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3

Finetuned on the Hermes 3 dataset for instruction alignment and coherent multi-step thinking.

πŸ”’ AdvRahul/Axion-Thinking-4B

Safety-tested and enhanced via red teaming protocols. Based on Qwen3-4B-Thinking-2507, with refined behavior for ethical deployment.

🧰 ertghiu256/Qwen3-4b-tcomanr-merge-v2.2

Strong logic and instruction-following merge with emphasis on quality output in diverse domains.

πŸ’Ό bunnycore/Qwen3-4B-Pro

A professional-grade Qwen variant tuned for real-world applications like coding, RP, creative writing, and structured tasks.


✨ Features & Highlights

πŸ”Ή Deep Thinking & Problem Solving β€” Inspired by Hermes-3 and Axion, this model handles multi-step logical reasoning and instruction-following with clarity.

πŸ”Ή Safe, Aligned Outputs β€” Red team finetuning and post-training ensure behavior safety and moderation-ready generation.

πŸ”Ή Creative Writing & Roleplay β€” Retains high fluency and character immersion for natural roleplay and storytelling.

πŸ”Ή Coding & Engineering Tasks β€” Competent in code generation, debugging, and technical explanations.

πŸ”Ή Efficient & Lightweight β€” At just 4B parameters, it's easy to deploy locally or in constrained environments.


🎯 Use Cases

  • πŸ€– Conversational AI
  • ✍️ Creative Roleplay & Fiction Writing
  • 🧠 Reasoning & Problem-Solving Tasks
  • πŸ§‘β€πŸ’» Code Generation & Completion
  • πŸ” Safe AI Assistants with Aligned Behavior

πŸš€ Usage Instructions

For optimal inference with quantized (GGUF) builds, use a higher quant such as Q6.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ZeroXClem/Qwen3-4B-Hermes-Axion-Pro"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

prompt = "Describe the principles of quantum entanglement in simple terms."
messages = [{"role": "user", "content": prompt}]

# Build the chat-formatted prompt; enable_thinking turns on the model's
# <think> reasoning trace (supported by Qwen3 chat templates).
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True
)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
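With thinking mode enabled, Qwen3-style models wrap the reasoning trace in `<think>…</think>` tags before the final answer. A minimal sketch of separating the two parts (the `split_thinking` helper is illustrative, not part of this model's API):

```python
def split_thinking(decoded: str):
    """Split a Qwen3-style generation into (reasoning, answer).

    Assumes the reasoning trace ends with a closing </think> tag;
    if no tag is present, the whole text is treated as the answer.
    """
    marker = "</think>"
    if marker in decoded:
        reasoning, answer = decoded.split(marker, 1)
        return reasoning.replace("<think>", "").strip(), answer.strip()
    return "", decoded.strip()

reasoning, answer = split_thinking(
    "<think>Entanglement links particle states.</think>"
    "It means two particles share a single quantum state."
)
```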

For LM Studio Users: When using this model in LM Studio, select the Qwen3-Chat Template from the template dropdown menu. This official template ensures proper prompt formatting for consistent multi-turn conversations and system instruction handling.


⚠️ Alignment & Ethics

  • πŸ” Safety Notice: While post-trained with red teaming protocols, this model still outputs raw generations. Always include content moderation for public deployments.
  • 🧠 Thinking Mode Support: Fully compatible with enable_thinking=True and /think prompt control.
  • πŸ“œ License: Apache 2.0 + governed by the licenses of upstream models.

πŸ’Œ Feedback & Collaboration

We welcome community feedback, prompts, benchmarks, and merge ideas! Reach out via HF comments or GitHub for collaboration.


ZeroXClem Team | 2025 · Buy me a coffee ☕
