Tags: Text Generation, Transformers, Safetensors, English, qwen3, programming, code generation, code, coding, coder, chat, brainstorm, qwen, qwencoder, brainstorm 20x, creative, all uses cases, Jan-V1, Deep Space Nine, DS9, horror, science fiction, fantasy, Star Trek, finetune, thinking, reasoning, unsloth, conversational, text-generation-inference
Update README.md
README.md CHANGED
@@ -57,7 +57,84 @@ Example generations at the bottom of this page.

This is a Star Trek: Deep Space Nine fine-tune (covering 11% of the model, close to 700 million parameters), trained for 1 epoch on this model (a 4B model + Brainstorm 20x adapter).
This model requires:

- Jinja (embedded) or ChatML template
- Max context of 256k

Settings used for testing (suggested):

- Temp 0.8 to 2
- Rep pen 1.05 to 1.1
- Top-p 0.8, min-p 0.05
- Top-k 20
- Min context of 8k for thinking / output
- No system prompt

As this is an instruct model, it will also benefit from a detailed system prompt (a minimal sketch applying these sampler settings follows below).
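The suggested settings map directly onto common runtime sampler parameters. Below is a minimal sketch using llama-cpp-python against a GGUF quant of this model; the model path and file name are placeholders (use whichever quant you downloaded), and the parameter names follow llama-cpp-python's `create_chat_completion` API rather than anything specific to this repo.

```python
# Minimal sketch (not the author's code): applying the suggested sampler
# settings with llama-cpp-python. The GGUF file name below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/this-model-Q4_K_M.gguf",  # placeholder quant file
    n_ctx=8192,  # suggested minimum context for thinking / output
    # the embedded Jinja/ChatML template is read from the GGUF metadata
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short DS9 scene on the Promenade."}],
    temperature=0.8,      # suggested range: 0.8 to 2
    repeat_penalty=1.05,  # suggested range: 1.05 to 1.1
    top_p=0.8,
    min_p=0.05,
    top_k=20,
    max_tokens=2048,
)
print(out["choices"][0]["message"]["content"])
```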
---
<B>QUANTS:</B>

---

GGUF? GGUF Imatrix? Other?

Special thanks to Team Mradermacher, Team Nightmedia and other quanters!

See under "Model tree" (upper right) and click on "Quantizations".

New quants will automatically appear.
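If you prefer to fetch a quant from a script rather than the web UI, here is a short sketch using huggingface_hub; the repo id and file name are placeholders, since the actual quant repos are the ones listed under the model tree.

```python
# Hypothetical example: downloading one GGUF quant file with huggingface_hub.
# Replace repo_id / filename with an actual quant repo and file listed under
# "Model tree" -> "Quantizations" on the model page.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="some-quanter/this-model-GGUF",   # placeholder repo id
    filename="this-model-Q4_K_M.gguf",        # placeholder file name
)
print(gguf_path)  # local path to the downloaded quant
```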
---

<H2>Help, Adjustments, Samplers, Parameters and More</H2>

---
<B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B>

See this document:

https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>

In "KoboldCpp", "oobabooga/text-generation-webui", or "Silly Tavern", set the "Smoothing_factor" to 1.5 (a sketch of doing this over KoboldCpp's local API follows the list below):

- In KoboldCpp: Settings -> Samplers -> Advanced -> "Smooth_F"
- In text-generation-webui: Parameters -> lower right
- In Silly Tavern this is called "Smoothing"
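For reference, the same value can be sent programmatically. The sketch below assumes KoboldCpp is running locally on its default port and that your build accepts "smoothing_factor" in the `/api/v1/generate` payload (recent builds do); treat the payload keys as an assumption to verify against your version.

```python
# Sketch only: sending smoothing_factor (quadratic sampling) via KoboldCpp's
# local HTTP API instead of the UI. Assumes the default port (5001) and that
# this KoboldCpp build accepts "smoothing_factor" in the generate payload.
import requests

payload = {
    "prompt": "Continue the DS9 scene:",
    "max_length": 512,
    "temperature": 0.8,
    "rep_pen": 1.05,
    "top_p": 0.8,
    "min_p": 0.05,
    "top_k": 20,
    "smoothing_factor": 1.5,  # the "Smooth_F" value from the UI
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(r.json()["results"][0]["text"])
```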
NOTE: For "text-generation-webui":

-> If using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model).

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:

- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor").
- If the interface/program you are using to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>

This is a "Class 1" model.

For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues and covers ways to improve performance for all use cases, including chat and roleplay), please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

That page also lists all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model.
---