
95 Coder/Programming - MOE, Reasoning, Reg, Imatrix, Fused.
Models (0.8B to 87B) in regular, "reasoning", "Brainstorm", MOE (1x to 8x / 128 experts), and expanded variants, built to create better and stronger code, faster.
Text Generation • 39B • Updated • 11.1k • 3 • Note: Repo with multiple (41) coding models in 1 or 2 quants; many of these models now have full repos and full quants, listed below. Listing order of this collection: MOEs, in terms of raw power/size; Brainstorm (an adapter by DavidAU); standard models, in terms of raw power/size. Quants: for complex or long coding projects I strongly suggest you use the highest quant(s) you can, in both Imatrix and regular versions, with Imatrix preferred. Likewise, prefer higher-parameter-count models and/or MOEs. A minimal download-and-load sketch follows.
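A minimal sketch of that workflow, assuming llama-cpp-python and huggingface_hub are installed; the repo id points at one of the GGUF repos below, but the quant filename is a placeholder, so check the repo's file list for the real one.

```python
# Sketch: fetch one high-bit quant and load it locally.
# The filename below is illustrative -- browse the repo for actual quant names.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="DavidAU/Qwen2.5-Godzilla-Coder-51B-gguf",   # a repo from this collection
    filename="Qwen2.5-Godzilla-Coder-51B-Q6_K.gguf",     # placeholder: pick the highest quant you can run
)

llm = Llama(model_path=model_path, n_ctx=32768)          # raise n_ctx for long coding projects
out = llm("Write a Python function that parses an INI file.", max_tokens=512)
print(out["choices"][0]["text"])
```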
DavidAU/Qwen3-Coder-53B-A3B-Instruct-TOTAL-RECALL-v2-MASTER-CODER-L
Text Generation • 53B • Updated • 30 • 3 • Note: 128 experts (MOE - mixture of experts); all experts are coders. 256K context; uses Brainstorm 40x to enhance performance. Non-thinking model (you can activate thinking with a system prompt; see the sketch below). Links to GGUFs on this page.
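A hedged sketch of that system-prompt switch: the exact prompt string that activates thinking is documented on the model card, so the one below is a stand-in, not the official wording.

```python
# Sketch: turning on the optional "thinking" mode via a system prompt.
# Assumption: the activation string is model-specific; see the model card.
from llama_cpp import Llama

llm = Llama(model_path="path/to/total-recall-coder.gguf", n_ctx=262144)  # 256K context

resp = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Enable thinking."},  # placeholder activation prompt
        {"role": "user", "content": "Refactor this loop into a list comprehension: ..."},
    ],
    max_tokens=1024,
)
print(resp["choices"][0]["message"]["content"])
```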
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M
Text Generation • 42B • Updated • 17 • 1 • Note: 128 experts (MOE - mixture of experts); all experts are coders. 256K context; uses Brainstorm 20x to enhance performance. Non-thinking model (you can activate thinking with a system prompt). Links to GGUFs on this page.
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M-512k-ctx
Text Generation • 42B • Updated • 27 • 1 • Note: 128 experts (MOE - mixture of experts); all experts are coders. 512K context; uses Brainstorm 20x to enhance performance. Non-thinking model (you can activate thinking with a system prompt). Links to GGUFs on this page. Special note: even if you do not need the extended context, try this model, as changing the context limit changes generation.
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M-768k-ctx
Text Generation • 42B • Updated • 23 • 1 • Note: 128 experts (MOE - mixture of experts); all experts are coders. 768K context; uses Brainstorm 20x to enhance performance. Non-thinking model (you can activate thinking with a system prompt). Links to GGUFs on this page. Special note: even if you do not need the extended context, try this model, as changing the context limit changes generation.
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M-1million-ctx
Text Generation • 42B • Updated • 23 • 2 • Note: 128 experts (MOE - mixture of experts); all experts are coders. 1 million context; uses Brainstorm 20x to enhance performance. Non-thinking model (you can activate thinking with a system prompt). Links to GGUFs on this page. Special note: even if you do not need the extended context, try this model, as changing the context limit changes generation (see the context-setting sketch below).
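Since the -ctx variants above differ only in their maximum context, a small sketch of how that limit maps to a load-time setting; the file path is a placeholder, and KV-cache memory grows with the requested window, so allocate only what the task needs.

```python
# Sketch: loading a long-context variant with an enlarged window.
# You do not have to request the full 1M window; n_ctx sets what you actually use.
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/master-coder-1million-ctx.gguf",  # placeholder path
    n_ctx=131072,  # 128K of the 1M window; raise as needed, RAM permitting
)
```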
DavidAU/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER
Text Generation • 53B • Updated • 31 • 3 • Note: 128 experts (MOE - mixture of experts). 256K context; uses Brainstorm 40x to enhance performance. Non-thinking model (you can activate thinking with a system prompt). Links to GGUFs on this page.
DavidAU/Qwen3-53B-A3B-2507-TOTAL-RECALL-v2-MASTER-CODER
Text Generation • 53B • Updated • 60 • 5 • Note: 128 experts (MOE - mixture of experts). 256K context; uses Brainstorm 40x to enhance performance. Links to GGUFs on this page. Non-thinking model => STRAIGHT to coding.
DavidAU/Qwen3-42B-A3B-2507-Thinking-TOTAL-RECALL-v2-Medium-MASTER-CODER
Text Generation • 42B • Updated • 27 • 2 • Note: 128 experts (MOE - mixture of experts). 256K context; uses Brainstorm 20x to enhance performance. Links to GGUFs on this page. Enhanced thinking model => smarter thinking, fewer tokens, better code.
DavidAU/Qwen3-42B-A3B-2507-TOTAL-RECALL-v2-Medium-MASTER-CODER
Text Generation • 42B • Updated • 30 • 3 • Note: 128 experts (MOE - mixture of experts). 256K context; uses Brainstorm 20x to enhance performance. Links to GGUFs on this page. Non-thinking model => STRAIGHT to coding.
DavidAU/Qwen3-53B-A3B-TOTAL-RECALL-MASTER-CODER-v1.4-256k-ctx
Text Generation • 53B • Updated • 35 • Note: 128-expert MOE model with 256K context. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter (40x) by DavidAU to extend model function/performance.
DavidAU/Qwen3-53B-A3B-TOTAL-RECALL-MASTER-CODER-v1.4-128k
Text Generation • 53B • Updated • 22 • 2 • Note: 128-expert MOE model with 128k context. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter (40x) by DavidAU to extend model function/performance.
DavidAU/Qwen3-53B-A3B-TOTAL-RECALL-MASTER-CODER-v1.4
Text Generation • 53B • Updated • 13 • 3 • Note: 128-expert MOE model. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter (40x) by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B-v1.2
Text Generation • 87B • Updated • 11 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Mixture of Experts model with 2 models + a shared expert.
DavidAU/Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B-v1.1
Text Generation • 87B • Updated • 14 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Mixture of Experts model with 2 models + a shared expert.
DavidAU/Mistral-2x24B-MOE-Power-CODER-Magistral-Devstral-Reasoning-Ultimate-NEO-MAX-44B-gguf
Text Generation • 44B • Updated • 5.78k • Note: Devstral (coder) with reasoning, which can be turned on or off. 128k context.
DavidAU/Mistral-2x24B-MOE-Power-Magistral-Devstral-Reasoning-Ultimate-44B
Text Generation • 44B • Updated • 91 • Note: Devstral (coder) with reasoning, which can be turned on or off. 128k context.
DavidAU/Mistral-2x24B-MOE-Power-Devstral-Magistral-Reasoning-Ultimate-44B
Text Generation • 44B • Updated • 13 • Note: Devstral (coder) with reasoning, which can be turned on or off. 128k context. See the toggle sketch below.
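A hedged sketch of the on/off toggle for the three Devstral/Magistral MOEs above, assuming (as with the other models in this collection) that the switch is driven by a system prompt; the exact strings are on each model card, so these are placeholders.

```python
# Sketch: toggling reasoning on or off per request.
# Assumption: the toggle is a system prompt; the wording below is a stand-in.
from llama_cpp import Llama

llm = Llama(model_path="path/to/devstral-moe-44B.gguf", n_ctx=131072)  # 128k context

def ask(prompt: str, reasoning: bool) -> str:
    system = "Reasoning: on." if reasoning else "Reasoning: off."  # placeholder strings
    resp = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        max_tokens=1024,
    )
    return resp["choices"][0]["message"]["content"]

print(ask("Implement an LRU cache in Python.", reasoning=True))
```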
DavidAU/Mistral-2x22B-MOE-Power-Codestral-Ultimate-39B
Text Generation • 39B • Updated • 24 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-8x7B-Vee-Eight-Coder-Instruct-53B-128k-ctx
Text Generation • 53B • Updated • 22 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context. Mixture of Experts model with 8 models + a shared expert.
DavidAU/Qwen2.5-8x7B-Vee-Eight-Coder-Instruct-53B
Text Generation • 53B • Updated • 45 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Mixture of Experts model with 8 models + a shared expert.
DavidAU/Qwen2.5-6x7B-Six-Pack-Coder-Instruct-42B-128k-ctx
Text Generation • 42B • Updated • 31 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context. Mixture of Experts model with 6 models + a shared expert.
DavidAU/Qwen2.5-6x7B-Six-Pack-Coder-Instruct-42B
Text Generation • 42B • Updated • 15 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Mixture of Experts model with 6 models + a shared expert.
DavidAU/Qwen2.5-4x7B-Quad-Coder-Instruct-30B-128k-ctx
Text Generation • 30B • Updated • 13 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context. Mixture of Experts model with 4 models + a shared expert.
DavidAU/Qwen2.5-4x7B-Quad-Coder-Instruct-30B
Text Generation • 30B • Updated • 24 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Mixture of Experts model with 4 models + a shared expert.
DavidAU/Qwen2.5-3X7B-CoderInstruct-OlympicCoder-MS-Next-Coder-25B-v1-128k-ctx
Text Generation • 25B • Updated • 12 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context. Mixture of Experts model with 3 models + a shared expert.
DavidAU/Qwen2.5-3X7B-CoderInstruct-OlympicCoder-MS-Next-Coder-25B-v1
Text Generation • 25B • Updated • 16 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Mixture of Experts model with 3 models + a shared expert.
DavidAU/Qwen2.5-2X7B-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Updated • 23 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Mixture of Experts model with 2 models + a shared expert.
DavidAU/Qwen2.5-2X7B-Coder-CodeV-R1-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Updated • 17 • 1 • Note: Specialized 2-model MOE with an additional shared expert. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-2X7B-Coder-VisCoder-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Updated • 13 • Note: Specialized 2-model MOE with an additional shared expert. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-2X7B-Coder-Soar-qwen-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Updated • 21 • 2 • Note: Specialized 2-model MOE with an additional shared expert. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-2X11B-CODER-Dueling-Wolverines-V2-28B
Text Generation • 28B • Updated • 17 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen2.5-2X11B-CODER-Dueling-Wolverines-28B-gguf
Text Generation • 28B • Updated • 108 • Note: Mixture of Experts model with 2 models + a shared expert.
DavidAU/Qwen2.5-2X11B-CODER-Dueling-Wolverines-28B
Text Generation • 28B • Updated • 8 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Mixture of Experts model with 2 models + a shared expert.
DavidAU/Qwen2.5-Godzilla-Coder-51B-128k
Text Generation • 51B • Updated • 12 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Fusion of two coding models at 75%/75%.
DavidAU/Qwen2.5-Godzilla-Coder-51B-gguf
Text Generation • 51B • Updated • 862 • 3 • Note: Fusion of two coding models at 75%/75%.
DavidAU/Qwen2.5-Godzilla-Coder-51B
Text Generation • 51B • Updated • 17 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Fusion of two coding models at 75%/75%.
DavidAU/Qwen2.5-Godzilla-Coder-V2-51B-128k
Text Generation • 51B • Updated • 13 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Fusion of two coding models at 75%/75%.
DavidAU/Qwen2.5-Godzilla-Coder-V2-51B
Text Generation • 51B • Updated • 9 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Fusion of two coding models at 75%/75% (see the illustrative fusion sketch below).
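The "75%/75%" figure is DavidAU's own recipe and is not specified further in this collection; purely to illustrate what tensor-level fusion of two checkpoints can look like, a hypothetical PyTorch sketch follows (not the author's actual method, with made-up file names).

```python
# Illustrative only -- NOT the author's 75%/75% recipe, whose exact meaning
# is not documented here. This shows the general shape of fusing two
# same-architecture checkpoints tensor by tensor.
import torch

a = torch.load("coder_a.pt")  # hypothetical state dict of coder #1
b = torch.load("coder_b.pt")  # hypothetical state dict of coder #2

fused = {}
for name, tensor in a.items():
    if name in b and b[name].shape == tensor.shape:
        fused[name] = 0.5 * tensor + 0.5 * b[name]  # simple convex blend for illustration
    else:
        fused[name] = tensor  # keep tensors unique to coder #1 as-is

torch.save(fused, "fused_coder.pt")
```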
DavidAU/Mistral-Magistral-Devstral-Instruct-FUSED-CODER-Reasoning-36B
Text Generation • 36B • Updated • 11 • Note: Newest Devstral version (1.1), with even better coding abilities. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. This is a fused model with 62 layers and 561 tensors. Short thinking blocks -> then straight to coding.
DavidAU/Mistral-Devstral-2507-CODER-Brainstorm40x-44B
Text Generation • 44B • Updated • 38 • 1 • Note: Newest Devstral version, with even better coding abilities. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Mistral-Devstral-2507-CODER-Brainstorm20x-34B
Text Generation • 34B • Updated • 44 • 1 • Note: Newest Devstral version, with even better coding abilities. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Mistral-Devstral-2505-CODER-Brainstorm40x-44B
Text Generation • 44B • Updated • 37 • 2 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Mistral-Devstral-2505-CODER-Brainstorm20x-34B
Text Generation • 34B • Updated • 37 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-128k-ctx-42B
Text Generation • 42B • Updated • 22 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-42B
Text Generation • 42B • Updated • 22 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-21B-Brainstorm20x-128k-ctx
Text Generation • 21B • Updated • 12 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter by DavidAU to extend model function/performance. 128k context.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-128k-ctx-20B
Text Generation • 20B • Updated • 21 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-21B-Brainstorm20x
Text Generation • 21B • Updated • 18 • 2 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-20B
Text Generation • 20B • Updated • 27 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-128k-ctx-12B
Text Generation • 12B • Updated • 72 • 4 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-12B
Text Generation • 12B • Updated • 36 • 1 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x
Text Generation • 12B • Updated • 40 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x-128k-ctx
Text Generation • 12B • Updated • 118 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Jan-Nano-128k-6B-Brainstorm20x
Text Generation • 6B • Updated • 64 • 4 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Blitzar-Coder-F1-6B-Brainstorm20x
Text Generation • 6B • Updated • 19 • 1 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Polaris-Preview-128k-6B-Brainstorm20x
Text Generation • 6B • Updated • 19 • 1 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 16 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Instruct-F16-6B-Brainstorm20x
Text Generation • 6B • Updated • 15 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Instruct-6B-Brainstorm20x-128k-ctx
Text Generation • 6B • Updated • 38 • 1 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model. 128k context.
DavidAU/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 79 • 1 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Instruct-F16-6B-Brainstorm20x-128k-ctx
Text Generation • 6B • Updated • 22 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model. 128k context.
DavidAU/Qwen3-Bootes-Quick-Coder-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 14 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 16 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32
Text Generation • 6B • Updated • 15 • 1 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Float32 enhanced.
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32-128k-ctx
Text Generation • 6B • Updated • 15 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Float32 enhanced, with 128k context.
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32-192k-ctx
Text Generation • 6B • Updated • 14 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Float32 enhanced, with 192k context.
DavidAU/Qwen2.5-Microsoft-NextCoder-Instruct-FUSED-CODER-Fast-22B
Text Generation • 22B • Updated • 16 • Note: Two models fused together to make a stronger coder model. Float32 (32-bit) source to give the model extra power. This is an instant coder -> enter your prompt, get code.
DavidAU/Qwen2.5-Microsoft-NextCoder-Soar-Instruct-FUSED-CODER-Fast-11B
Text Generation • 11B • Updated • 19 • 1 • Note: Two models fused together to make a stronger coder model. Float32 (32-bit) source to give the model extra power. This is an instant coder -> enter your prompt, get code.
DavidAU/Qwen2.5-Microsoft-NextCoder-Instruct-FUSED-CODER-Fast-11B
Text Generation • 11B • Updated • 15 • Note: Two models fused together to make a stronger coder model. Float32 (32-bit) source to give the model extra power. This is an instant coder -> enter your prompt, get code.
DavidAU/Qwen2.5-Microsoft-NextCoder-Olympic-Instruct-FUSED-CODER-Fast-11B
Text Generation • 11B • Updated • 15 • Note: Two models fused together to make a stronger coder model. Float32 (32-bit) source to give the model extra power. This is an instant coder -> enter your prompt, get code (see the one-shot sketch below).
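The "instant coder" pattern in practice, as a short sketch: one prompt in, code out, no thinking block. The path and sampling values are illustrative.

```python
# Sketch: one-shot "enter your prompt, get code" usage.
from llama_cpp import Llama

llm = Llama(model_path="path/to/fused-coder-fast-11B.gguf", n_ctx=8192)  # placeholder path

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a CLI that counts words in a file."}],
    max_tokens=800,
    temperature=0.2,  # low temperature keeps generated code focused
)
print(out["choices"][0]["message"]["content"])
```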
DavidAU/Qwen2.5-Wolverine-CODER-11B-V2-128k-ctx
Text Generation • 11B • Updated • 12 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context. Model is fused together from 2 coder models.
DavidAU/Qwen2.5-Wolverine-CODER-11B-V2
Text Generation • 11B • Updated • 13 • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Model is fused together from 2 coder models.
DavidAU/Qwen2.5-Wolverine-CODER-11B-128k-ctx
Text Generation • 11B • Updated • 12 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 128k context. Fusion of two coding models at 75%/75%.
DavidAU/Qwen2.5-Wolverine-CODER-11B-gguf
Text Generation • 11B • Updated • 884 • 2 • Note: Model is fused together from 2 coder models.
DavidAU/Qwen2.5-Wolverine-CODER-11B
Text Generation • 11B • Updated • 18 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Model is fused together from 2 coder models.
DavidAU/Qwen2.5-OpenCodeReasoning-Nemotron-1.1-7B-NEO-imatix-gguf
Text Generation • 8B • Updated • 1.12k • Note: Uses the NEO Imatrix dataset (by DavidAU) to augment model performance (see the imatrix sketch below).
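For readers unfamiliar with imatrix quants, a sketch of the generic llama.cpp workflow that produces them (not DavidAU's exact recipe): compute an importance matrix over a calibration text, then quantize with it. The calibration file name stands in for the NEO dataset, which is not published here.

```python
# Sketch of the generic llama.cpp imatrix workflow; assumes the llama.cpp
# binaries are on PATH. "neo_calibration.txt" is a placeholder file name.
import subprocess

# 1) Measure which weights matter most over the calibration text.
subprocess.run(
    ["llama-imatrix", "-m", "model-f16.gguf",
     "-f", "neo_calibration.txt", "-o", "model.imatrix"],
    check=True,
)

# 2) Quantize using the importance matrix to protect those weights.
subprocess.run(
    ["llama-quantize", "--imatrix", "model.imatrix",
     "model-f16.gguf", "model-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```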
DavidAU/Qwen3-Shining-Valiant-Instruct-Fast-CODER-Reasoning-2.4B
Text Generation • 2B • Updated • 33 • 1 • Note: Model has full thinking/reasoning too. Model is fused together from 2 coder models. Source in Float32 (32-bit) for stronger performance. Generally short thinking blocks, or none at all (hence "Fast"). Suggest 2-4 generations.
DavidAU/Qwen3-Shining-Valiant-Instruct-CODER-Reasoning-2.7B
Text Generation • 3B • Updated • 17 • Note: Model has full thinking/reasoning too. Model is fused together from 2 coder models. Source in Float32 (32-bit) for stronger performance. Suggest 2-4 generations.
DavidAU/Qwen3-Shining-Lucy-CODER-3.4B-Brainstorm20x-e32
Text Generation • 3B • Updated • 9 • Note: 40k context. Good for draft code, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Model is fused together from 2 coder models. Source in Float32 (32-bit) for stronger performance. This model will be stronger than the "reg" version. The Brainstorm adapter (20x) will provide "out of the box" coding solutions; suggest 2-4 generations to use this feature (see the multi-generation sketch below).
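A short sketch of the "2-4 generations" suggestion: sample several candidates at a non-zero temperature and keep the one you like. The path and sampling values are illustrative.

```python
# Sketch: draw several candidate solutions, then pick the best by inspection.
from llama_cpp import Llama

llm = Llama(model_path="path/to/shining-lucy-coder.gguf", n_ctx=40960, seed=-1)  # -1: random seed

prompt = [{"role": "user", "content": "Write a binary search function with unit tests."}]
candidates = []
for _ in range(4):  # 2-4 generations, as the note suggests
    resp = llm.create_chat_completion(messages=prompt, max_tokens=900, temperature=0.8)
    candidates.append(resp["choices"][0]["message"]["content"])

for i, text in enumerate(candidates, 1):
    print(f"--- candidate {i} ---\n{text}\n")
```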
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B-e32
Text Generation • 2B • Updated • 14 • Note: 40k context. Good for draft code, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Model is fused together from 2 coder models. Source in Float32 (32-bit) for stronger performance. This model will be stronger than the "reg" version.
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2
Text Generation • 2B • Updated • 13 • Note: 40k context. Good for draft code, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Model is fused together from 2 coder models. Source in Float32 (32-bit) for stronger performance. This model will be stronger than the "reg" version.
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B
Text Generation • 2B • Updated • 20 • Note: 40k context. Good for draft code, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Model is fused together from 2 coder models.
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B-mix2
Text Generation • 2B • Updated • 13 • Note: 40k context. Good for draft code, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Model is fused together from 2 coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 14.2k • 11 • Note: Uses the NEO Imatrix dataset (by DavidAU) to augment model performance. 40k context. Good for draft code, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Model is fused together from 2 coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 3.35k • 1 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40k context. Good for draft code, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Stronger than V1. Model is fused together from 2 coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B
Text Generation • 0.8B • Updated • 91 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40k context. Good for draft code, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Model is fused together from 2 coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B
Text Generation • 0.8B • Updated • 128 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40k context. Good for draft code, simple code, or code blocks, including complex ones. Model has full thinking/reasoning too. Stronger than V1. Model is fused together from 2 coder models.
DavidAU/Openai_gpt-oss-20b-CODER-NEO-CODE-DI-MATRIX-GGUF
Text Generation • 21B • Updated • 2.9k • 3
DavidAU/Openai_gpt-oss-20b-NEO-GGUF
Text Generation • 21B • Updated • 6.86k • 8
DavidAU/Openai_gpt-oss-120b-NEO-Imatrix-GGUF
Text Generation • 117B • Updated • 5.43k