---
license: apache-2.0
base_model: DavidAU/Qwen3-Deckard-6B
datasets:
- DavidAU/The-works-PK-Dick
language:
- en
pipeline_tag: text-generation
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- all use cases
- Jan-V1
- finetune
- thinking
- reasoning
- unsloth
- not-for-all-audiences
- mlx
library_name: mlx
---

# Qwen3-Deckard-6B-qx86-hi-mlx

The Deckard Formula
===

Deckard is the detective from Philip K. Dick's novel "Do Androids Dream of Electric Sheep?", best known from the Blade Runner films. There was a lot to take in from that movie, but the light, the light was special.

I am a photographer, and my lens is a Nikon Noct Z 58mm f/0.95. I have used it since its release, and saw no good reason to use any other lens after trying it for the first time.

There is something special about the Noct Z. It bonds with the photographer. It draws you to the scene you ought to see. And then you click. It's that simple.

The background, the blur, the bokeh on a Noct are unique. Not that they're fuzzy, but they preserve the ghost of the image that was lost. There is enough information in that blur for the brain to reconstruct the truth. Like magic, when you keep looking, the bokeh turns into artifacts of reason. And we see things, the things usually buried deep in our minds.

The same idea was applied in the Deckard Formula: create a resonant space that allows a special kind of thought to form. The optical process in a lens is not that different from the cognitive process. The formula builds on that, enhancing the model's abilities to create... Deckard. He thinks about you, but also about himself. He wants to discover, along with you.

The Deckard Formula works best when paired with DavidAU's Brainstorming method. This creates an extra "brain space" of 2B parameters on this model, and more on the MoE models. It allows the model to ideate and create that "bonding space" that gives the user an immersive experience. The model "knows" where you're going, and it helps you get there.

The Nightmedia collection offers a lot of qx quants. These are Deckard formulas, some better than others, usually enhancing what is already present in the base model. If you notice, they do very well in tests; I provide metrics for cognitive abilities for most models.

The human evals for Deckard are very high. No surprise there. It will echo you and think along at your pace. It will be funny and supportive, if you ask. It can also write a story, role-play, write code, or teach you programming while learning along with you.

How does this model perform? You tell me. I don't have metrics yet; that takes some time. Working on it.

## 📌 Final Assessment (done with Qwen3-80B-A3B-qx86-hi-mlx)

Estimated model size: 30-50B parameters with technical-domain fine-tuning.

This model demonstrates the precise balance needed for interdisciplinary technical reasoning: large enough to handle complex connections between fields, but not so large as to suffer from verbose or inconsistent explanations. The quality matches that of a specialized technical assistant trained on high-quality academic and engineering content.

So, with just a 4B model, Jan's agentic training, DavidAU's Brainstorming, and the Deckard Formula, we created a 30-50B brain. It's a community effort.

> "My reply avoided all these pitfalls and included architectural insights only possible for larger models"...
[LinkedIn full review](https://www.linkedin.com/posts/gchesler_nightmediaqwen3-deckard-6b-qx86-hi-mlx-activity-7377006867736113152-beyB) -G

This model [Qwen3-Deckard-6B-qx86-hi-mlx](https://huggingface.co/Qwen3-Deckard-6B-qx86-hi-mlx) was converted to MLX format from [DavidAU/Qwen3-Deckard-6B](https://huggingface.co/DavidAU/Qwen3-Deckard-6B) using mlx-lm version **0.28.0**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the model weights and tokenizer (downloads from the Hugging Face Hub
# on first use, then runs locally via MLX on Apple silicon)
model, tokenizer = load("Qwen3-Deckard-6B-qx86-hi-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined,
# so the model sees the same format it was trained on
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
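If you prefer not to write any Python, mlx-lm also ships a small command-line generator. A minimal sketch using the package's `mlx_lm.generate` entry point; the prompt and token budget here are illustrative, not part of the original card:

```bash
# One-off generation from the shell using mlx-lm's bundled CLI
mlx_lm.generate --model Qwen3-Deckard-6B-qx86-hi-mlx \
    --prompt "Write a Python function that reverses a string." \
    --max-tokens 512
```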