---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- merge
- mergekit
- lazymergekit
base_model:
- bunnycore/Llama-3.1-8B-TitanFusion-Test
- vicgalle/Roleplay-Hermes-3-Llama-3.1-8B
- vicgalle/Humanish-Roleplay-Llama-3.1-8B
- bunnycore/Llama-3.1-8B-TitanFusion-Mix
- kromeurus/L3.1-Siithamo-v0.4-8B
pipeline_tag: text-generation
model-index:
- name: Llama-3.1-8B-SpecialTitanFusion
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 74.02
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SpecialTitanFusion
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 34.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SpecialTitanFusion
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 23.34
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SpecialTitanFusion
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.6
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SpecialTitanFusion
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 7.49
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SpecialTitanFusion
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 29.12
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Llama-3.1-8B-SpecialTitanFusion
name: Open LLM Leaderboard
---
# πŸ† ZeroXClem-Llama-3.1-8B-SpecialTitanFusion πŸ†
*A powerful fusion of Titan-level models, designed for enhanced roleplay, creativity, and intelligence.*
![Model Fusion](https://huggingface.co/front/assets/huggingface_logo-noborder.svg)
## πŸ“Œ Overview
ZeroXClem-Llama-3.1-8B-SpecialTitanFusion is a meticulously crafted model merge leveraging **state-of-the-art transformer architectures**. Using `mergekit`, we combined multiple high-performance Llama-3.1 models to enhance **context retention, creativity, and nuanced text generation**.
This model is based on **[kromeurus/L3.1-Siithamo-v0.4-8B](https://huggingface.co/kromeurus/L3.1-Siithamo-v0.4-8B)**, with carefully selected models merged using the `model_stock` method.
## πŸ›  Merge Details
### πŸ”„ **Merge Method:** `model_stock`
This model was merged using the **model_stock** method, ensuring a balanced and optimized blend of all contributing architectures.
### πŸ“‘ **Models Merged**
The following models contributed to this fusion:
- πŸ”· **[kromeurus/L3.1-Siithamo-v0.4-8B](https://huggingface.co/kromeurus/L3.1-Siithamo-v0.4-8B)**
- 🦾 **[bunnycore/Llama-3.1-8B-TitanFusion-Test](https://huggingface.co/bunnycore/Llama-3.1-8B-TitanFusion-Test)**
- 🎭 **[vicgalle/Roleplay-Hermes-3-Llama-3.1-8B](https://huggingface.co/vicgalle/Roleplay-Hermes-3-Llama-3.1-8B)**
- πŸ’‘ **[vicgalle/Humanish-Roleplay-Llama-3.1-8B](https://huggingface.co/vicgalle/Humanish-Roleplay-Llama-3.1-8B)**
- πŸ”₯ **[bunnycore/Llama-3.1-8B-TitanFusion-Mix](https://huggingface.co/bunnycore/Llama-3.1-8B-TitanFusion-Mix)**
### βš™ **Configuration**
```yaml
name: ZeroXClem-Llama-3.1-8B-SpecialTitanFusion
base_model: kromeurus/L3.1-Siithamo-v0.4-8B
dtype: bfloat16
merge_method: model_stock
models:
- model: bunnycore/Llama-3.1-8B-TitanFusion-Test
- model: vicgalle/Roleplay-Hermes-3-Llama-3.1-8B
- model: vicgalle/Humanish-Roleplay-Llama-3.1-8B
- model: bunnycore/Llama-3.1-8B-TitanFusion-Mix
tokenizer_source: kromeurus/L3.1-Siithamo-v0.4-8B
```
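The `model_stock` method picks an interpolation ratio between the base model and the average of the fine-tuned models from the geometry of their weight deltas. Below is a toy sketch of my understanding of the Model Stock rule (Jang et al.) on a single weight vector; mergekit's actual implementation differs in details (per-layer application, numerical handling), so treat this as illustration only:

```python
import math

def model_stock_layer(base, finetuned):
    """Merge one layer's weights following the Model Stock idea (a sketch).

    base: list of floats (pretrained weights w_0)
    finetuned: list of k weight lists (the fine-tuned models)
    """
    k = len(finetuned)
    # Center each fine-tuned vector at the base model.
    deltas = [[w - b for w, b in zip(model, base)] for model in finetuned]

    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        return dot / (nu * nv)

    # Average pairwise cosine similarity between deltas estimates cos(theta).
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    cos_theta = sum(cos(deltas[i], deltas[j]) for i, j in pairs) / len(pairs)

    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)

    # Blend the plain average of the fine-tuned models back toward the base.
    avg = [sum(col) / k for col in zip(*finetuned)]
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]
```

When the fine-tuned deltas all point the same way (cos θ = 1), t becomes 1 and the merge reduces to a plain average; the more the models disagree, the more weight shifts back to the base.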
## 🌟 Features & Capabilities
πŸ”Ή **Highly dynamic writing** – Perfect for storytelling, world-building, and creative applications.
πŸ”Ή **Refined roleplay abilities** – Enhanced persona handling, deep emotional responses, and immersive dialogue generation.
πŸ”Ή **Better structured recall** – Improved consistency across large-context conversations.
πŸ”Ή **Balanced & non-restrictive responses** – Adaptable across different use cases.
## πŸ›  How to Use
### πŸ”₯ Ollama (Quick Inference)
You can run the model using **Ollama** for direct testing:
```bash
ollama run hf.co/ZeroXClem/Llama-3.1-8B-SpecialTitanFusion
```
### πŸ€— Hugging Face Transformers (Python)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch
model_name = "ZeroXClem/Llama-3.1-8B-SpecialTitanFusion"
# Load tokenizer & model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Initialize text generation pipeline
text_generator = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer
)
# Example prompt
prompt = "Describe the significance of AI ethics in modern technology."
# Generate output
outputs = text_generator(
prompt,
max_new_tokens=200,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95
)
print(outputs[0]["generated_text"])
```
---
## πŸ”§ Recommended Usage
### πŸ“œ **Prompting Style**
For best results, **use system prompts similar to Llama-3.1 Instruct**.
Example system message:
```text
Think step by step with logical reasoning before you provide any response.
```
For enhanced creativity in roleplay, try:
```text
### Instruction:
You are an advanced roleplaying assistant. Maintain deep character consistency and immersive storytelling.
```
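Under the hood, Llama-3.1 Instruct prompts are wrapped in header tokens. The hypothetical helper below sketches that format by hand so you can see what the chat template produces; in practice, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` and let the bundled template do the work:

```python
# Sketch of the Llama-3.1 chat layout (a hand-rolled, illustrative helper).
def build_llama31_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Trailing assistant header cues the model to start its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt(
    "Think step by step with logical reasoning before you provide any response.",
    "Describe the significance of AI ethics in modern technology.",
)
```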
### πŸ— **Model Settings**
For **optimal output quality**, use the following settings:
```yaml
Temperature: 1.2
Min P: 0.1
Repeat Penalty: 1.05
Repeat Penalty Tokens: 256
Smooth Sampling: 0.18
```
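Those sampler names come from frontends such as koboldcpp and SillyTavern. A rough translation into `model.generate()` keyword arguments is sketched below; it assumes a recent `transformers` release with `min_p` support, and note that "Smooth Sampling" (smoothing factor) and the fixed repeat-penalty window have, to my knowledge, no direct `transformers` equivalent:

```python
# Approximate mapping of the frontend settings above onto Hugging Face
# generate() kwargs. "Smooth Sampling: 0.18" and "Repeat Penalty Tokens: 256"
# are intentionally omitted: transformers' repetition_penalty considers the
# whole context, and smoothing is not a built-in transformers sampler.
generation_kwargs = {
    "do_sample": True,
    "temperature": 1.2,
    "min_p": 0.1,
    "repetition_penalty": 1.05,
}
```

Pass these as `model.generate(**inputs, **generation_kwargs)` or as extra arguments to the pipeline call.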
## πŸ”₯ Disclaimer
**πŸ”Ή Use responsibly!**
This model follows **Meta’s Llama-3.1 Community License Agreement**. It is an **uncensored** model; apply whatever alignment and safety filtering your use case requires.
**πŸ”Ή You are responsible for the content you generate.**
Please ensure compliance with ethical AI guidelines when deploying this model in production environments.
## πŸ’¬ Feedback & Contributions
If you have suggestions or improvements, feel free to **open a discussion on Hugging Face**! Let's continue improving the **Llama-3.1 merging meta-game!** πŸš€
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Llama-3.1-8B-SpecialTitanFusion-details)
| Metric |Value|
|-------------------|----:|
|Avg. |29.23|
|IFEval (0-Shot) |74.02|
|BBH (3-Shot) |34.82|
|MATH Lvl 5 (4-Shot)|23.34|
|GPQA (0-shot) | 6.60|
|MuSR (0-shot) | 7.49|
|MMLU-PRO (5-shot) |29.12|
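The leaderboard average is simply the unweighted mean of the six benchmark scores:

```python
# Reproduce the leaderboard "Avg." row from the individual scores.
scores = {
    "IFEval (0-Shot)": 74.02,
    "BBH (3-Shot)": 34.82,
    "MATH Lvl 5 (4-Shot)": 23.34,
    "GPQA (0-shot)": 6.60,
    "MuSR (0-shot)": 7.49,
    "MMLU-PRO (5-shot)": 29.12,
}
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 29.23
```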