---
base_model:
  - Nexesenex/Llama_3.x_70b_L3.3_Dolphin_128K_v1.02
  - migtissera/Tess-3-Llama-3.1-70B
  - huihui-ai/Llama-3.1-Nemotron-70B-Instruct-HF-abliterated
  - WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-70B
  - mlabonne/Hermes-3-Llama-3.1-70B-lorablated
  - hitachi-nlp/Llama-3.1-70B-FLDx2
library_name: transformers
tags:
  - mergekit
  - merge
---

# about

A pretty tame merge, good for SFW stuff.
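Since the card declares `library_name: transformers`, the merged model should load with the standard `transformers` API. A minimal sketch, assuming a placeholder repository id (`Nexesenex/<this-merge-repo-id>` is hypothetical, not the real name) and enough memory for a 70B model in bfloat16:

```python
# Minimal loading sketch; the repo id below is a placeholder, not the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Nexesenex/<this-merge-repo-id>"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge's out_dtype below
    device_map="auto",           # shard across available GPUs
)

prompt = "Briefly explain what a model merge is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```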


# benchs

- ARC-C: 60.87
- ARC-E: 83.86
- PPL-512: 3.05
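PPL-512 presumably means perplexity measured over 512-token windows; the exact harness and corpus are not stated here. For reference, one conventional recipe with `transformers` looks like the sketch below (the repo id and `eval.txt` corpus are illustrative assumptions, not the author's actual setup):

```python
# Hedged sketch: perplexity over non-overlapping 512-token chunks.
# This is one common recipe, not the benchmark harness used for this card.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Nexesenex/<this-merge-repo-id>"  # hypothetical placeholder
ctx = 512

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()

text = open("eval.txt", encoding="utf-8").read()  # any held-out corpus
ids = tokenizer(text, return_tensors="pt").input_ids[0]

nll, n_tokens = 0.0, 0
with torch.no_grad():
    for start in range(0, ids.size(0) - ctx, ctx):
        chunk = ids[start : start + ctx].unsqueeze(0).to(model.device)
        # With labels == input_ids, the model returns the mean token NLL
        # over the ctx - 1 shifted prediction targets.
        loss = model(chunk, labels=chunk).loss
        nll += loss.item() * (ctx - 1)
        n_tokens += ctx - 1

print(f"PPL-{ctx}: {math.exp(nll / n_tokens):.2f}")
```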


# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with Nexesenex/Llama_3.x_70b_L3.3_Dolphin_128K_v1.02 as the base.
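For intuition: Model Stock interpolates each layer between the base weights and the average of the fine-tuned weights, with a ratio derived from the angle between the models' task vectors. The toy per-tensor sketch below paraphrases the paper's formula from memory; it is an illustration of the idea, not mergekit's actual code path:

```python
# Toy illustration of the Model Stock idea on a single weight tensor.
# Assumption: t = k*cos / (1 + (k-1)*cos) as in the paper; needs k >= 2.
import itertools
import torch

def model_stock_tensor(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    k = len(tuned)
    deltas = [t - base for t in tuned]  # "task vectors" relative to the base
    # Average pairwise cosine similarity between flattened task vectors.
    cos = torch.nn.functional.cosine_similarity
    pairs = list(itertools.combinations(deltas, 2))
    cos_theta = torch.stack(
        [cos(a.flatten(), b.flatten(), dim=0) for a, b in pairs]
    ).mean()
    # Interpolation ratio between the fine-tuned average and the base.
    t = (k * cos_theta) / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack(tuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```

Here `tuned` would hold the corresponding layer from each model listed in the configuration below, and `base` the same layer from the Dolphin base.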

### Models Merged

The following models were included in the merge:

- migtissera/Tess-3-Llama-3.1-70B
- huihui-ai/Llama-3.1-Nemotron-70B-Instruct-HF-abliterated
- WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-70B
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- hitachi-nlp/Llama-3.1-70B-FLDx2
### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
models:
  - model: Nexesenex/Llama_3.x_70b_L3.3_Dolphin_128K_v1.02
    parameters:
      weight: 1.0
  - model: huihui-ai/Llama-3.1-Nemotron-70B-Instruct-HF-abliterated
    parameters:
      weight: 1.0
  - model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
    parameters:
      weight: 1.0
  - model: hitachi-nlp/Llama-3.1-70B-FLDx2
    parameters:
      weight: 1.0
  - model: WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-70B
    parameters:
      weight: 1.0
  - model: migtissera/Tess-3-Llama-3.1-70B
    parameters:
      weight: 1.0
base_model: Nexesenex/Llama_3.x_70b_L3.3_Dolphin_128K_v1.02
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
  filter_wise: false
  smooth: false
  allow_negative_weights: false
chat_template: auto
tokenizer:
  source: union
```
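To reproduce the merge, the YAML above can be fed to mergekit. A minimal sketch using mergekit's Python entry point, following its README (the exact `MergeOptions` fields may vary between mergekit versions):

```python
# Hedged reproduction sketch; assumes `pip install mergekit` and that the
# YAML above is saved as config.yaml. Option names follow mergekit's README
# and may differ across versions.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-70b",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=True,   # stream tensors rather than loading full checkpoints
        low_cpu_memory=True,  # helpful at 70B scale
    ),
)
```

Equivalently, the `mergekit-yaml config.yaml ./merged-70b` CLI performs the same merge.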