merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear merge method.
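
A linear merge is a weighted average of the corresponding parameter tensors of each input model. This is an illustrative NumPy sketch of that operation, not mergekit's implementation; the function name `linear_merge` is hypothetical, and `normalize=True` mirrors the `normalize: true` setting in the config below by dividing each weight by the sum of all weights.

```python
import numpy as np

def linear_merge(tensors, weights, normalize=True):
    """Weighted average of same-shaped parameter tensors (linear merge sketch)."""
    weights = np.asarray(weights, dtype=np.float64)
    if normalize:
        # With normalize on, the effective weights always sum to 1.
        weights = weights / weights.sum()
    return sum(w * t for w, t in zip(weights, tensors))

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
print(linear_merge([a, b], [1, 1]))  # equal weights -> elementwise mean [2. 3.]
```

Because every model in this config uses `weight: 1`, the merge reduces to a plain elementwise mean over all eleven entries (with `unsloth/gemma-3-270m-it` counted twice, since it appears twice).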

Models Merged

The following models were included in the merge:

- aifffffffd/Gemma-Thinking-Test
- unsloth/gemma-3-270m-it
- murat/kyrgyz_umlaut_corrector
- tjefferson401/MyGemmaNPC
- alakxender/gemma-3-270m-dhivehi-text-classifier
- huihui-ai/Huihui-gemma-3-270m-it-abliterated
- RohanSardar/mental-health-qa
- ShahzebKhoso/Gemma3_270M_FineTuned_XSUM
- xriminact/MyGemmaQuiz
- NukeverseAi/HQQ-270M
- clevrpwn/gemma-3-270m-codealpaca-finetune

Configuration

The following YAML configuration was used to produce this model:

models:
  - layer_range: [0, 12]
    model: aifffffffd/Gemma-Thinking-Test
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100

  - layer_range: [0, 12]
    model: unsloth/gemma-3-270m-it
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100

  - layer_range: [0, 12]
    model: murat/kyrgyz_umlaut_corrector
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100

  - layer_range: [0, 12]
    model: unsloth/gemma-3-270m-it
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100

  - layer_range: [0, 12]
    model: tjefferson401/MyGemmaNPC
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100

  - layer_range: [0, 12]
    model: alakxender/gemma-3-270m-dhivehi-text-classifier
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100

  - layer_range: [0, 12]
    model: huihui-ai/Huihui-gemma-3-270m-it-abliterated
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100

  - layer_range: [0, 12]
    model: RohanSardar/mental-health-qa
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100

  - layer_range: [0, 12]
    model: ShahzebKhoso/Gemma3_270M_FineTuned_XSUM
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100

  - layer_range: [0, 12]
    model: xriminact/MyGemmaQuiz
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100

  - layer_range: [0, 12]
    model: NukeverseAi/HQQ-270M
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100

  - layer_range: [0, 12]
    model: clevrpwn/gemma-3-270m-codealpaca-finetune
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100

merge_method: linear
weight: 1
density: 0.9
gamma: 0.01
normalize: true
int8_mask: true
random_seed: 0
temperature: 0.5
top_p: 0.65
inference: true
max_tokens: 999999999
stream: true
quantization:
- method: int8
  value: 100
- method: int4
  value: 100
parameters:
  weight: 1
  density: 0.9
  gamma: 0.01
  normalize: true
  int8_mask: true
  random_seed: 0
  temperature: 0.5
  top_p: 0.65
  inference: true
  max_tokens: 999999999
  stream: true
  quantization:
    - method: int8
      value: 100
    - method: int4
      value: 100
dtype: float16
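
Assuming the YAML above is saved as `config.yml`, a merge like this is typically reproduced with mergekit's `mergekit-yaml` entry point; the output directory here is a placeholder.

```shell
# Install mergekit, then run the merge described by the config above.
# "./merged-model" is a placeholder output path.
pip install mergekit
mergekit-yaml config.yml ./merged-model
```

Note that sampling and serving keys in the config (`temperature`, `top_p`, `max_tokens`, `stream`, `inference`) are inference-time settings, not merge parameters, so mergekit does not use them when producing the weights.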
Model size: 268M params (Safetensors, F16)

Model tree for Ignatfhc/Cc